Lanai, the mystery CPU architecture in LLVM (q3k.org)
287 points by todsacerdoti on March 21, 2022 | 150 comments


I worked for Myricom 2001->2013, and Google 2013->2015. And, wow, I wish I could comment on this thread.. :)

EDIT: Scott's Medium blog post linked from this article is fascinating: https://medium.com/swlh/myricom-an-hpc-story-and-lessons-lea...


> The CEO capped the full time headcount at 49 people so he wouldn’t be subject to California law requiring him to provide his employees with a health plan.

Can you imagine being this stingy and short-sighted? The employees are largely PhDs, and he is willing to cripple his company to avoid paying benefits.


I'm not sure that part of Scott's article was accurate.

I worked remotely, and I'm pretty sure that I was offered benefits. I declined them, and used my wife's benefits because they were less expensive, and meshed better with local health care providers.

EDIT: And I know we had a 401K


I worked there and have evidence that this statement in the blog is not true, and I'm being down-voted?


Downvotes are fickle things. Unrelated, why turn down the health care? With double health care you can tell your health provider you've got double coverage and they will bill the primary first, and then the secondary for what is left (and if the secondary is designed to be a primary program it will cover all that is left, so you end up with zero out of pocket, even for co-payments).


Because it was CA focused, and I was remote. So most local providers were out of network. And it was not free ... It's been almost a decade, but I seem to recall that it was cheaper to add me to my (then) wife's coverage.


It’d be pretty annoying to have a Kaiser Permanente focused insurance plan and live in Ohio.


American health care is ridiculous, episode #193993.


We are starting to copy it elsewhere; it's heading in the same direction in baby steps.


I don’t know if it’s illegal, but basically every health care plan will deny you if they know you have other coverage. This is bad advice.


You’re wrong. It is perfectly common to have double coverage. I had it on multiple occasions. You can read about it if you google “coordination of benefits”.


You’re right. I don’t know if this varies by state, but for sure I’ve had to sign contracts stating I didn’t have any other insurance in order to start various plans. So I don’t think this is universally OK, but clearly there are situations that allow it, seemingly most common in conjunction with Medicare.


This is funny, because basically every insurance here in Sweden states that if you are covered by another insurance, then this one is invalid. In practice though, what I've heard is that the companies will always just be kind and sort of split the responsibility between them instead when something happens.


Is this generally the case? If so, I had no idea...


Generally different jurisdictions have different rules, but in California it is the case. And to another point made here, if your employer is going to deduct some of your paycheck to pay premiums month to month, then it can be a net loss if you won't use it enough to offset what you would have been paid.


Thanks - I need to do some research on this!


I don't think so; your comment is black text for me, so it's either at its original score or higher.


[flagged]


[flagged]


There’s a reason guidelines ask you not to discuss downvotes, which always fluctuate. You were adding pure noise to the thread, and you tripled down on that.


I believe that there are many things that kick in at 50 people. It's oversimplifying to assume this was due to not wanting to offer benefits.

First google hit: https://www.aeisadvisors.com/california-compliance-is-my-com...


Why 50? Isn’t that arbitrary? Why not just force benefits for all employees? At least target revenue: if a company makes X revenue it has to provide Y benefits per employee. CEOs definitely won’t be able to easily make the decision to take in less money, as that will piss off shareholders.


It is arbitrary, but we have to draw the line somewhere.

It is not about money; tax is about money. This is about the extra work needed to accommodate said obligations; it can involve several full time jobs, and the state considers that it is too much for a company of fewer than 50 employees.

Really, it is a lot of paperwork, and the more a company spends on paperwork, the less there is to pay employees.


Could this be improved by a government agency where they do this paperwork for you at a per-employee cost if your company is under X employees (say 3x the limit at which the paperwork kicks in)?


The easiest improvement would be to remove the employer from the equation and just provide taxpayer-funded benefits to all. The main reasons employers are still involved are inertia followed by a concern that employees won't feel as obligated to stay in shit jobs.


Sounds like socialism, though. Can’t have that!


I think there are arguments to be made in favor of a more free market approach (or a less free market one.) At this moment we seem to have the worst of both worlds.


> Why not just force benefits for all employees?

Public perception. Small business owners occupy a different perceptual space than large businesses. Lawmakers don't want to be perceived as hurting small businesses and craft laws to avoid it. It's about maintaining political power as much as anything else.


I thought of this too, and it’s actually a good system. Those small businesses should be exempt, I think. It’s only the large corporations that should be forced to pay (I wouldn’t even be opposed to subsidizing health care for those businesses by taxing larger ones more.)

I think a system where we go by revenue is a lot more logical than going by number of employees. If a business makes $1B per year but only has two employees, would it not be exempt from the current regulations?


It works out great for everyone except the 47.5% of Americans who are employed by small businesses.


Not really? I’ve mentioned in both comments that we should arbitrate based on revenue, not number of employees. I also said, in the comment you replied to, that we could potentially subsidize the benefits at smaller businesses with a tax on the largest. Not saying this would work, or even that the math is sound, but I have definitely addressed your point twice.


Revenue isn't always a great way to measure the size of a business, because different businesses add different amounts of value. And you can't use profit, because that's easy to game. For something like people benefits, predicating the requirement on the number of people employed makes a lot of sense. It's about as much work to set up a health plan for a two-person firm as a 50-person firm, but many fewer people benefit.


I don’t disagree, I just think that using number of employees seems like one of the worst possible ways to handle a company reaching X size. I came up with that off the top of my head; there are surely better solutions.


Defense contracting also has special bonuses when your company is fewer than 50 people, IIRC.


The only incentives I’m aware of are meeting the SBA size standards for small business, which max out at 500-1,500 employees depending on your industry.

https://www.sba.gov/sites/default/files/2019-08/SBA%20Table%...


When you create a sharp discontinuity in the marginal cost of hiring someone, expect companies to act accordingly. It would be irrational not to. Blame the system of shitty incentives


It’s not even that hard or expensive to offer 401k, health, life insurance in a startup. Been there, done that.


That's a very strange comment to make about other people's fiscal situations when you know nothing about their particulars.


At least $10k/yr/employee.


How much revenue are those people generating? Don't you think being able to hire and retain more highly-skilled people might be worth more than the $5/hour “saved”? (For example, how much does it cost to hire someone — most places spend many thousands doing that)


IDK, I'm just showing a concrete value. The person I replied to said it was "not hard". So I mentioned a minimum dollar amount that I've observed at many places and that seemed reasonable (it can be much, much more, like if you cover 100% of health for employees and family).

Seems it wasn't helpful.


My point was just that this seems more like an issue for a business with low profit margins and low wage employees — not engineers, especially for a profitable company (i.e. not some early-stage startup relying mostly on equity). In this case, if they were making $20M/year with on the order of 50 employees it seems very short-sighted.


That's all true; I also think it's a wise choice to invest in your team for the long term.

My point was simply to define a quantitative value that folks who are not familiar with these costs could see as a minimum. Because "not hard" and $10k mean very different things. Hoping to help those HN readers who aren't familiar and/or are planning on starting a company.


I can totally imagine not wanting the hassle of my company having to be a health insurance broker in addition to its actual purpose. What's wrong with paying employees a good salary and having them be responsible for their own insurance, any more than having them buy their own food, housing, and transportation?


I would be fine with this, but the job would have to pay at least the consumer-market cost of healthcare, plus plenty of headroom for increases, over and above the next best offer, for me to consider it.


It’s cheapness and greed on the boss’s part.


After the ACA passed, I remember my ex getting 29 hours at the places she worked, because at 30 hours they'd have to be given benefits like health insurance. I really think regulations with a set number like this are the result of corruption or something. No way anyone doesn't foresee companies playing them this way.


Regarding the blog post: "Why did Myricom’s second CEO fail so fast, lack of experience, and loss of respect? If one were to attempt to explain the plummeting fortunes of Myricom during the term of the second CEO, two characteristics suggest themselves:"

It seems to me the second CEO didn't care about the company, the product, or the risk the previous CEO wanted to take (more investment to bring out the new chip). Instead they proposed (and got) another round of dividends for the investors, a promotion to CEO, and ultimately an acqui-hire to Google. To me this reads very simply as putting profit and self-interest ahead of the product. Understandable, and it happens all the time. If you have a vision, the best way to achieve it is to remain in control. Somehow the CEO lost that control.


> The moral of the story is simple, expect that in a small start-up your stock will NEVER be worth anything, and get what you can in your normal paycheck.

That's an important moral.

I wish someone had told me that back then.


Correct me if I'm wrong, but Lanai was designed by Jakov, who sadly and unexpectedly passed away a few weeks ago.


Indeed, it was quite a shock. Jakov was a wonderful person and the best supervisor one could hope to have. We're all very sad about his passing, and very sad for his family.


I'm sorry to hear Jakov passed away - he was a brilliant, kind engineer.


Ah shit :-( RIP Jakov


Yeah, that blog post was a great reference to get a general overview of What Happened At Myricom.

... and I just hope I didn't write anything factually wrong in TFA. :)


Interesting - I had never heard of Myricom. I have used Infiniband quite a bit in budget super-high-performance applications - the old hardware can sometimes be found quite cheaply compared to 40gig/100gig Ethernet (or it could be, at least a few years ago). Much more fiddly to get working, but the performance was impressive once it did.


For what it's worth, I've had rather more trouble with Ethernet than IB, especially given IB management facilities.


IB always seemed like the kind of thing where you lose out on a lot of comfort if you go with "old hardware sometimes found cheaply" and thus don't have vendor support, have to dig out drivers and tools from random sources, ...


Obviously if you don't have driver support for the kernel you run, you can't use old stuff even if you want to, but we had SDR IB running quite a long time without support. I don't have happy experience with HPC vendor support anyway, and unfortunately IB is now a Mellanox monoculture.


Probably it was also difficult because I had no experience working with it, no support, and there are not nearly as many online forum posts and articles written about common issues.


When does the NDA expire? 7 years is a long time.


I just want the PUNT instruction to become a standard RISC-V extension.


Most NDAs are perpetual.


That is only enforceable for Trade Secrets -- at least in most of the US. Google would need to prove that the things this employee worked on were "essential to the commercial viability of the business".

I totally understand not wanting to poke the legal behemoth, however.

As an anecdote, a personal friend who is a former Googler signed a 10-year NDA, and he had his fingers in quite a few pies.

So maybe in 3 more years?


> That is only enforceable for Trade Secrets

Do you have legal citations supporting this assertion? I'm fairly certain this isn't so.


Mostly based off this quick search: https://www.everynda.com/blog/duration-clauses-non-disclosur...

As I said, I definitely understand not wanting to be the one to actually set precedent - that involves getting sued by Google, which is expensive and potentially career suicide. But I think ultimately it's probably not enforceable.


NDAs AFAIK typically last 5 years. Are you sure you can't comment?


This line in that post is mystifying: "they peddled Infiniband, an inferior design based on bus technology, but repositioned as a point to point network"


At least initially (~2005 or so?) we had customers who were given free IB for their HPC clusters ripping the IB out and replacing it with paid Myrinet installations because they could never get their IB clusters to work reliably.

Although we had the superior product, we could not market our way out of a wet paper bag. Our CEO counted on our (then) superior engineering & expected our products to market themselves. Eventually the IB engineering caught up and surpassed ours, but our marketing never even came close to theirs, so they were the clear winner.


I built a Myrinet HPC cluster or two; vague memory that it was 2.5Gbit. Worked well, I particularly liked the cables (thin and optical, way nicer than IB cables). When I went to replace them, Myrinet 10G was repeatedly delayed, and the focus seemed to be shifting to 10G Ethernet, not MPI/HPC.

We ended up going with a new upstart called Pathscale, which was then bought by QLogic, and then Intel. Our benchmarks showed a clear win over Mellanox at the time. In particular, on larger jobs the Mellanox drivers seemed to consume a fair bit of RAM for each client you were talking to, so even 32-node jobs would consume a fair bit of RAM. We even built a cluster out of HyperTransport-connected InfiniPath cards in an "HTX" slot instead of PCIe. Managed, I believe, 1.4us latency, which is still reasonably competitive today. We built SDR (first public cluster), DDR (first public cluster), and QDR based clusters. But then Intel said the new InfiniPath would not be supported on AMD nodes, or even previous-gen Intel nodes; I said it was a non-starter and switched to Mellanox. I talked to several other HPC sites that said the same thing, and months later Intel killed off the product. Sad and short-sighted on Intel's part.


Where'd the name Myricom/Myrinet come from? "Infiniband" is a pretty cool name.


Unless I'm thinking of the wrong vendor, the Myrinet devices supported programming the NIC. For some applications that was a really big win over IB verbs.


Yes and no. The original firmware / host driver stack was an open (shared?) source stack called MyriAPI. It was so terrible that it spawned an ecosystem of uni and research lab projects to implement replacements. I worked on one such project at Duke (Trapeze). Others included BIP, Fast Messages, Active Messages.

MyriAPI was later replaced by a new firmware / host stack called GM. I think at some point we stopped shipping firmware source. I think that point was the transition from MyriAPI to GM...


I think that's a confusing reference to the fact that InfiniBand initially was supposed to be also a replacement for system-internal buses (i.e. a PCI replacement), before PCIe took that place instead.


I wasn't there, but understood it was intended for general machine-room -- I dislike "datacentres" -- use; Wikipedia implies it was intended for ~everything...

Anyway, IB at least had lower latency than Myrinet, which is what counts in a lot of HPC. I don't remember the numbers, but I got quite close to Myrinet latency with the on-board 1GbE NICs on Sun x2200s, an un-managed switch, and Open-MX, which was interoperable with the Myrinet MX protocol. (I remember MX being slagged off by Mellanox; I don't know how similar MXM later was for IB.)


Yeah, that makes sense; it's an RDMA protocol, and PCIe has wayyyy more in common with InfiniBand than it does with legacy PCI.


You can use a bus PHY in a p2p configuration.


But Infiniband didn't do that.


You can do point-to-point connections with InfiniBand; it's a pretty common homelab use case since adapters are so cheap these days.


InfiniBand is always point-to-point, that's the point. The talk about "bus" is confusing/wrong since it never is a bus.


oh, "bus" in the context of an actual bus, like CAN. I thought p2p just meaning direct-attach. But yeah it's switched, it's not a shared bus.


Yeah that makes no sense.


I just want to know why it needs to be in-tree.


Google uses upstream llvm as their dev repo. I don't know if they do any development downstream.


LLVM doesn't have a plugin system so it's that, or maintain a fork and deal with horrible merges all the time.


Some important/big customer pays a maintainer to maintain it upstream instead of having to maintain/port it in-house. If there's two of these, it makes sense. Even with one it would otherwise require backporting. Might as well do it in public?


Plot twist: you were the CEO.


Why can you not comment? Certainly any NDA you signed must be expired by now.

PSA: If you are not getting continuing payments, any contracts you signed are void. Technically, I gather you are supposed to notify the other party that you are terminating your participation in the contract. Even if you are getting ongoing payments, you can opt out of that, too.

(This is not legal advice. Consult an actual lawyer for anything that matters.)


It is absolutely not the case that you can just quit a company, wait a year, and then dump all their confidential information.

Even if there were no legal repercussions (which there are), that's a great way to never be hired again.


Time since 2015 is rather more than a year. And, all appearances are that any signatory no longer exists, as such.

Finally, if you don't yet know about voluntary termination of contracts, you have an opportunity to learn more about it.


Never be hired because you gave away most of your value for free?


Never be hired because trust is important in human endeavors


At least the illusion of trust. Much more valuable to market your past experience than to give away your secrets in private.


And when it's been over 5 years?

What about talking about things from 10 years ago, or 15?

The grandparent post is wrong but if OP is being accurate about "wish I could" then the NDAs or severe expectations of not talking are going way too far.


Hard to make a more wrong post (everything apart from the advice to consult a lawyer is incorrect).


I see that you have not followed the bit you approved of.


> The world's best instruction mnemonic: PUNT, to switch between user and system contexts.

That's pretty good but I still prefer PowerPC's EIEIO.

https://www.ibm.com/docs/en/aix/7.2?topic=set-eieio-enforce-...
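
For anyone curious how it actually gets used: a minimal sketch (assuming GCC or Clang targeting PowerPC; the inline-asm form is the usual idiom, not any official API) of emitting eieio as an MMIO ordering barrier:

    /* Minimal sketch, assuming GCC/Clang targeting PowerPC: eieio as an
       MMIO ordering barrier. The "memory" clobber also stops the compiler
       from reordering memory accesses across it. */
    #include <stdint.h>

    static inline void io_barrier(void)
    {
        __asm__ volatile("eieio" ::: "memory");
    }

    static inline void mmio_write32(volatile uint32_t *reg, uint32_t val)
    {
        *reg = val;    /* post the store to the device register */
        io_barrier();  /* keep it ordered before any later device access */
    }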


There are many great Power mnemonics. VSX introduced one nearly as good, xxlxor (which, quite reasonably, is a logical XOR between VSX registers/FPRs). It's delightful to try to even pronounce.

The best part is that eieio's derivation is totally plausibly serious from its stated function. (Remember, IBM doesn't have a corporate sense of humour.) It's also an easy, fairly lightweight speculation barrier apart from its official usage.


My favorite PPC instruction is darn. It delivers a random number, either raw or NIST-conditioned. We decided to use the same instruction name in an internal RISC-V implementation, and fun times have been had complaining about the darn instruction not working.
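
Rough sketch of using it from C, assuming GCC 7+ with -mcpu=power9 (as I understand it, __builtin_darn() wraps the conditioned 64-bit form, and an all-ones result means "no number available, retry"):

    /* Sketch, not production RNG code: pull a conditioned 64-bit random
       number from the POWER9 darn instruction via GCC's builtin. */
    #include <stdint.h>

    static inline uint64_t darn_random(void)
    {
        long r;
        do {
            r = __builtin_darn();   /* conditioned 64-bit random number */
        } while (r == -1);          /* all ones: no entropy yet, retry */
        return (uint64_t)r;
    }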


After mumbling "x-x-l-x-o-r" my brain decided to pronounce it "excelsior". Huh.


Here's the thread on the LLVM development lists from when Lanai was proposed to be upstreamed with some good commentary about why it should be accepted even if Google isn't willing to go into details: https://discourse.llvm.org/t/rfc-lanai-backend/39874


Probably shouldn't reveal this, but Google is using it internally for their accelerated networking in their cloud; pretty much all Lanai hardware is deployed in GCP infrastructure.


You should not have posted this, least of all because it's extraordinarily misleading.

We use Lanai elsewhere, and it's in hardware powering Google's server networking (inclusive of GCP), but Lanai sadly does not play a substantial role in Andromeda networking.

Feel free to ping me on corp (username in my profile here) to discuss.


Cool, it went somewhere very wide and deep.

I guess the only genuinely interesting question worth asking (and hoping for an answer to) is, given the back and forth about performance and Myrinet vs IB on here... how well does the design stack up in practice to an Ebit-scale environment?


What explains why Amazon has been relatively open about Nitro but Google has never said anything about how their NICs and SSDs work? It can't be because it's secret sauce; Amazon beats them on latency etc.


We've been fairly open about how Google Cloud Networking works -- https://www.usenix.org/conference/nsdi18/presentation/dalton

There's also some more recent coverage on IPUs that I don't have a handy link to.


In fairness, skimming the slides, the name "lanai" doesn't seem to occur.


Indeed.


I guess that's the "fairly" part of your comment?


Ah, no, the fairly was merely that we omitted details and some things have changed. That said, the paper completely captures the use of Lanai in GCP's networking data plane.


security by obscurity?


Lanai ... sounded familiar; of course, Myricom. I used these NICs to implement wire-speed 10G packet capture. They were a fraction of the cost of dedicated packet capture boards and had a nice API.

These days I guess a RISC-V core would be a perfect match for this sort of application, but back then it seemed every accelerator startup would implement a custom ISA.


Random, tangentially related thing I just remembered: FreeBSD used to provide its own firmware for the (IIRC MIPS-like) core used in very early Broadcom NICs. There also used to be a custom firmware for some Adaptec controllers, along with an assembler tool to compile it during kernel build.


I think you're talking about the Alteon Tigon-II NICs (Alteon was acquired by Broadcom). Ken Merry and I modified Alteon firmware so that it supported zero-copy sockets by doing page-flipping on receive.

See https://people.freebsd.org/~ken/zero_copy/


All LSI "SCRIPTS" HBAs had firmware as part of kernel/driver sources among the many platforms they were used. I believe linux used a simple assembler whose sources were included in kernel to generate the binaries (if not preprocessor tricks, even). *BSDs definitely had their own solution too.


Googler, opinions are my own.

Btw, the Lanai target in LLVM can be found here: https://github.com/llvm/llvm-project/tree/main/llvm/lib/Targ.... Latest commit is only 24 days ago, so it looks like it's still active. Though I'm not sure how much of that is generic target updates vs. target-specific changes.


> The CEO capped the full time headcount at 49 people so he wouldn't be subject to a California law requiring him to provide his employees with a health care plan.

...

words fail me


I believe this was disputed by a former employee elsewhere ITT.


> Some of my recent long-term projects revolve around a little known CPU architecture called 'Lanai'. Unsurprisingly, very few people have heard of it, and even their Googling skills don't come in handy.

I don't get this. Searching for [lanai cpu] shows tons of links on the LANai cpu architecture from Myricom, purchased by Google.


I gotta say, if I were the BDFL of LLVM, I would kick this out of the tree without a second thought. Why on earth should a FOSS project support a private architecture? How many man-hours have been wasted waiting for the Lanai backend to compile? Not to mention applying project-wide refactoring to the Lanai code.


There's more context in the thread where it was upstreamed: https://lists.llvm.org/pipermail/llvm-dev/2016-February/0951...

Some notable comments:

> I was going to mention it, but you guys know very well the drill, so unless this hardware changes fundamental parts of the middle end in ways that are unnatural to most other targets (doesn't seem that way), then I see no reason why not have it upstream.

> I see no problem with having the backend upstream with the understanding that all the normal policies apply. Getting more people working on ToT is valuable to the community as a whole and provided it's "just another backend" with plenty of tests, the cost is low.


A significant part of work on LLVM comes from Apple, which includes support for quirks of _their_ proprietary chips and platform; and the world is much better for those changes being upstreamed rather than living in a fork somewhere.


There's a big difference between "we are the only people who sell hardware using these chips" and "we are the only people who have access to hardware using these chips".


Removing architectures because they’re obsolete/maintainers can’t be found, I can see (and the llvm maintainers agree. They removed Alpha at some time, for example), but because they’re proprietary? That would mean kicking out x86, x64, ARM, AMD Terascale, IBM’s z/architecture, etc.

I’m sure clang would be very happy with that.


Lanai is more than just proprietary, it's an entirely private architecture.


> How many man-hours have been wasted waiting for the Lanai backend to compile?

Probably zero? Is it compiled by default?


Yes. Run `strings /usr/lib/libLLVM.so | grep -i Lanai` to check on your version; mine has all the relevant symbols.


Oh, wow. I'm honestly surprised. My build does not.


AFAIK, by default LLVM builds support for all of its supported architectures; I guess you passed -DLLVM_TARGETS_TO_BUILD="X86" or something to CMake when building your LLVM?
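
If you'd rather not grep symbols, here's a small sketch against the LLVM-C API (assuming you compile and link it against the same libLLVM you want to inspect) that lists whatever backends that build actually registered; a default build should print lanai among them:

    /* Sketch: enumerate the targets registered in the libLLVM you link to.
       Build with something like:
         cc list_targets.c $(llvm-config --cflags --ldflags --libs) */
    #include <stdio.h>
    #include <llvm-c/Target.h>
    #include <llvm-c/TargetMachine.h>

    int main(void)
    {
        LLVMInitializeAllTargetInfos();  /* registers every compiled-in target */
        for (LLVMTargetRef t = LLVMGetFirstTarget(); t; t = LLVMGetNextTarget(t))
            printf("%-12s %s\n", LLVMGetTargetName(t), LLVMGetTargetDescription(t));
        return 0;
    }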


That was my first reaction, too, but then I had a second thought. Google is going to do this work on this in-tree or out-of-tree. If it is done in-tree, then there is a possibility that it will lead to core improvements that would benefit all backends. If out-of-tree, those are guaranteed to happen. I don't think it's a slam dunk, but on the balance it could make sense. And once it is in-tree you can actually validate this hypothesis by seeing the impact (or not) of contributions from Google.


How would Lanai compare to a simple RV32I isa/core for this application area? The short description in OP doesn't really clarify what, if anything, might be specifically compelling about this arch.


Lanai is significantly older than RISC-V. These days you'd likely just design or use an existing RISC-V core for an application like this, possibly adapting it to your particular use case.

The decision is different, however, if you happen to have the entirety of the Lanai engineering team on board and own the entire Lanai IP portfolio. :)


Reminds me of the various uses that have come of the IP of the https://en.wikipedia.org/wiki/ARC_(processor) — including inside the Intel Management Engine! — since its ignominious start as the SNES SuperFX chip :)


This seems related

https://github.com/chriseth/notes/blob/186d7ea0742336ed38e39...

"TrueBit - Off-Chain Computations for Smart Contracts"


This is https://github.com/TrueBitProject/lanai , which is in itself an interesting endeavor which I forgot to write about. I've added a mention about it to TFA.


For the link, Norton reports: "This is a known dangerous webpage. It is highly recommended that you do NOT visit this page." The "page full report" doesn't reveal any particulars. I rarely get this warning elsewhere.


This is an old domain, and once upon a time some private malware sample links hosted on it got indexed. Years later, and some software still thinks I'm evil :).


If Google acquired it, the cynical take is that they ended up sitting on it and doing nothing or killing it.


Mentor Graphics used to buy up silicon-compiler companies just to shut them down, so that manual layout tools would have a longer shelf life. They finally had to give that up around 1990, and figure out a different business to be in. Amazingly, they did.


The fact that they developed and upstreamed an LLVM backend indicates that they probably did something with it.


The Google way: Force the rest of the world adjust to your new "cool thing", and then kill it because it wasn't that great.


Google isn't really forcing anyone to do anything in this case. Another RISC backend that, it was made clear, would be ripped out of upstream the instant the Google maintainers stopped responding isn't really an imposition.


The relevant discussion: https://lists.llvm.org/pipermail/llvm-dev/2016-February/0951...

Seems the maintainers were more than happy to accept it, and even had policies in place for such contributions. One maintainer even mentioned it's a good policy because it brings more developers into using LLVM's ToT, which is overall good for project health.


Would something like this be accepted in open source projects that are not significantly driven by Google?

E.g. would Linux accept code for drivers / architectures that are not available to the public? I'm genuinely curious.


> E.g. would Linux accept code for drivers / architectures that are not available to the public?

Drivers is an unequivocal "yes". Here's an example -- from Google, in fact:

https://github.com/torvalds/linux/commit/e561bc45920aade3f8a...

Architectures is less likely. But the pendulum has swung away from "making your own architecture" anyways.


But hey, even with that example, the same mechanism got adapted to Chromebooks+Coreboot and now uses that same driver here:

https://github.com/torvalds/linux/commit/d384d6f43d1ec3f1225...


xcore (mentioned in that thread) is pretty obscure and still in trunk last I looked. Extra backends don't carry that much of a maintenance cost, mostly patching them up on API changes. Weirder targets hit bugs that the common ones don't, so there's a benefit to having them in tree too.

I'm not sure that generalises beyond modular compilers though.


Given how ridiculously strict the Linux folks are being with the DXGKRNL stuff that MS is working on (which is public as part of WSLg), I would say definitely not.


I think that has more to do with DXGKRNL coming from Microsoft than any policy that would be applied to other contributors.


Other Microsoft contributions got into the kernel just fine.


And other thin veneers over closed source paravirtualized VM graphics acceleration pipes get in as well, even from companies that flagrantly violate Linux's license.

I think that it's a difference of subtree maintainer as to why some Microsoft code gets in and some is fought tooth and nail; you can't treat Linux developers as a monolith, and the graphics side is significantly more Microsoft-averse.


Given the timing by Google I suspect this COULD be involved in what Paul Debevec was doing there with giant camera sphere arrays


Interesting that this is being downvoted but not commented on.....



