> The CEO capped the full time headcount at 49 people so he wouldn’t be subject to California law requiring him to provide his employees with a health plan.
Can you imagine being this stingy and short-sighted? The employees are largely PhDs, and he is willing to cripple his company to avoid paying benefits.
I'm not sure that part of Scott's article was accurate.
I worked remotely, and I'm pretty sure that I was offered benefits. I declined them, and used my wife's benefits because they were less expensive, and meshed better with local health care providers.
Downvotes are fickle things. Unrelated: why turn down the health care? With double coverage you can tell your health provider you've got double coverage, and they will bill the primary first, then the secondary for what is left (and if the secondary is designed to be a primary plan, it will cover all that is left, so you end up with zero out of pocket, even for co-payments).
Because it was CA-focused, and I was remote. So most local providers were out of network. And it was not free ... It's been almost a decade, but I seem to recall that it was cheaper to add me to my (then) wife's coverage.
You’re wrong. It is perfectly common to have double coverage. I had it on multiple occasions. You can read about it if you google “coordination of benefits”.
You’re right. I don’t know if this varies by state, but for sure I’ve had to sign contracts stating I didn’t have any other insurance in order to start various plans. So I don’t think this is universally OK, but clearly there are situations that allow it, seemingly most commonly in conjunction with Medicare.
This is funny, because basically every insurance policy here in Sweden states that if you are covered by another insurance, then this one is invalid. In practice, though, what I've heard is that the companies will always just be kind and sort of split the responsibility between them instead when something happens.
Generally, different jurisdictions have different rules, but in California it is the case. And to another point made here: if your employer is going to deduct some of your paycheck to pay premiums month to month, then it can be a net loss if you won't use the coverage enough to offset what you would have been paid.
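To make that break-even concrete, here is a toy calculation with entirely made-up numbers (the premium and usage figures are assumptions for illustration, not anything from the thread):

```python
# Toy break-even check for enrolling in employer health coverage.
# All dollar amounts below are hypothetical.
monthly_premium = 250       # assumed paycheck deduction for premiums
expected_annual_use = 1800  # assumed covered costs you'd otherwise pay yourself

annual_premiums = 12 * monthly_premium
net_value = expected_annual_use - annual_premiums
print(net_value)  # negative means enrolling is a net loss vs. taking the cash
```

With these assumed numbers, enrolling costs $3,000/year in premiums against $1,800 of expected use, so declining (or using a spouse's cheaper plan) comes out ahead.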
There’s a reason guidelines ask you not to discuss downvotes, which always fluctuate. You were adding pure noise to the thread, and you tripled down on that.
Why 50? Isn’t that arbitrary? Why not just force benefits for all employees? Or at least target revenue: if a company makes X revenue, it has to provide Y benefits per employee. CEOs definitely won’t be able to easily make the decision to take in less money, as that will piss off shareholders.
It is arbitrary, but we have to draw the line somewhere.
It is not about money; tax is about money. This is about the extra work needed to accommodate said obligations. It can involve several full-time jobs, and the state considers that too much for a company of fewer than 50 employees.
Really, it is a lot of paperwork, and the more a company spends on paperwork, the less there is to pay employees.
Could this be improved by a government agency where they do this paperwork for you at a per-employee cost if your company is under X employees (say 3x the limit at which the paperwork kicks in)?
The easiest improvement would be to remove the employer from the equation and just provide taxpayer-funded benefits to all. The main reasons employers are still involved are inertia followed by a concern that employees won't feel as obligated to stay in shit jobs.
I think there are arguments to be made in favor of a more free market approach (or a less free market one.) At this moment we seem to have the worst of both worlds.
Public perception. Small business owners occupy a different perceptual space than large businesses. Lawmakers don't want to be perceived as hurting small businesses and craft laws to avoid it. It's about maintaining political power as much as anything else.
I thought of this too, and it’s actually a good system. Those small businesses should be exempt, I think. It’s only the large corporations that should be forced to pay (I wouldn’t even be opposed to subsidizing health care for those small businesses by taxing larger ones more.)
I think a system where we go by revenue is a lot more logical than going by number of employees. If a business makes $1B per year but only has two employees, would it not be exempt from the current regulations?
Not really? I’ve mentioned in both comments that we should arbitrate based on revenue, not number of employees. I also said, in the comment you replied to, that we could potentially subsidize the benefits at smaller businesses with a tax on the largest. Not saying this would work, or even that the math is sound, but I have definitely addressed your point twice.
Revenue isn't always a great way to measure the size of a business, because different businesses add different amounts of value. And you can't use profit, because that's easy to game. For something like employee benefits, predicating the requirement on the number of people employed makes a lot of sense. It's about as much work to set up a health plan for a two-person firm as for a 50-person firm, but many fewer people benefit.
I don’t disagree, I just think that using the number of employees seems like one of the worst possible ways to handle a company reaching size X. I came up with that off the top of my head; there are surely better solutions.
When you create a sharp discontinuity in the marginal cost of hiring someone, expect companies to act accordingly. It would be irrational not to. Blame the system of shitty incentives.
How much revenue are those people generating? Don't you think being able to hire and retain more highly-skilled people might be worth more than the $5/hour “saved”? (For example, how much does it cost to hire someone — most places spend many thousands doing that)
IDK, I'm just showing a concrete value. The person I replied to said it was "not hard". So I mentioned a minimum dollar amount that I've observed at many places and that seemed reasonable (it can be much, much more, like if you cover 100% of health costs for employees and family).
My point was just that this seems more like an issue for a business with low profit margins and low wage employees — not engineers, especially for a profitable company (i.e. not some early-stage startup relying mostly on equity). In this case, if they were making $20M/year with on the order of 50 employees it seems very short-sighted.
That's all true; I also think it's a wise choice to invest in your team for the long term.
My point was simply to define a quantitative value that folks who are not familiar with these costs could see as a minimum. Because "not hard" and $10k mean very different things.
Hoping to help those HN readers who aren't familiar and/or are planning on starting a company.
I can totally imagine not wanting the hassle of my company having to be a health insurance broker in addition to its actual purpose. What's wrong with paying employees a good salary and having them be responsible for their own insurance, any more than having them buy their own food, housing, and transportation?
I would be fine with this, but the job would have to pay at least the consumer-market cost of healthcare, plus plenty of headroom for increases, above the next best offer for me to consider it.
After the ACA passed, I remember my ex getting 29 hours at the places she worked, because at 30 hours the employer would have had to provide benefits like health insurance. I really think regulations with a set number like this are the result of corruption or something. No way anyone doesn’t foresee companies playing them this way.
Regarding the blog post: "Why did Myricom’s second CEO fail so fast, lack of experience, and loss of respect? If one were to attempt to explain the plummeting fortunes of Myricom during the term of the second CEO, two characteristics suggest themselves:"
It seems to me the second CEO didn't care about the company, the product or the risk the previous CEO wanted to take (more investment to bring out the new chip). Instead they proposed (and got) another round of dividends for the investors, a promotion to CEO, and ultimately an aqui-hire to Google. To me this reads very simply as putting profit and self-interest ahead of the product. Understandable, and it happens all the time. If you have a vision, the best way to achieve it is to remain in control. Somehow the CEO lost that control.
> The moral of the story is simple, expect that in a small start-up your stock will NEVER be worth anything, and get what you can in your normal paycheck.
Indeed, it was quite a shock. Jakov was a wonderful person and the best supervisor one could hope to have. We're all very sad about his passing, and very sad for his family.
Interesting - I had never heard of Myricom. I have used Infiniband quite a bit in budget super high performance applications - the old hardware can sometimes be found quite cheaply compared to 40gig/100gig ethernet (or it was at least a few years ago). Much more fiddly to get working but the performance was impressive once it did.
IB always seemed like the kind of thing where you lose a lot of comfort if you go with "old hardware sometimes found cheaply" and thus don't have vendor support, have to dig out drivers and tools from random sources, ...
Obviously if you don't have driver support for the kernel you run, you can't use old stuff even if you want to, but we had SDR IB running quite a long time without support. I don't have happy experiences with HPC vendor support anyway, and unfortunately IB is now a Mellanox monoculture.
It probably was also difficult because I had no experience working with it, had no support, and there are not nearly as many online forum posts and articles written about common issues.
That is only enforceable for Trade Secrets -- at least in most of the US. Google would need to prove that the things this employee worked on were "essential to the commercial viability of the business".
I totally understand not wanting to poke the legal behemoth, however.
As an anecdote: a personal friend who is a former Googler signed a 10-year NDA, and he had his fingers in quite a few pies.
As I said, I definitely understand not wanting to be the one to actually set precedent - that involves getting sued by Google, which is expensive and potentially career suicide. But I think ultimately it's probably not enforceable.
This line in that post is mystifying:
> they peddled Infiniband, an inferior design based on bus technology, but repositioned as a point to point network
At least initially (~2005 or so?) we had customers who were given free IB for their HPC clusters ripping the IB out and replacing it with paid Myrinet installations, because they could never get their IB clusters to work reliably.
Although we had the superior product, we could not market our way out of a wet paper bag. Our CEO counted on our (then) superior engineering & expected our products to market themselves. Eventually the IB engineering caught up and surpassed ours, but our marketing never even came close to theirs, so they were the clear winner.
I built a Myrinet HPC cluster or two; vague memory that it was 2.5Gbit. Worked well, and I particularly liked the cables (thin and optical, way nicer than IB cables). When I went to replace them, Myrinet 10G was repeatedly delayed, and the company seemed to be focusing on 10G Ethernet, not MPI/HPC.
We ended up going with a new upstart called PathScale, which was then bought by QLogic, and then Intel. Our benchmarks showed a clear win over Mellanox at the time. In particular, on larger jobs the Mellanox drivers seemed to consume a fair bit of RAM for each client you were talking to, so even 32-node jobs would consume a fair bit of RAM. We even built a cluster out of HyperTransport-connected InfiniPath cards in an "HTX" slot instead of PCIe. Managed, I believe, 1.4us latency, which is still reasonably competitive today. We built SDR (first public cluster), DDR (first public cluster), and QDR based clusters. But then Intel said the new InfiniPath would not be supported on AMD nodes, or even previous-gen Intel nodes. I said it was a non-starter and switched to Mellanox. I talked to several other HPC sites that said the same thing, and months later Intel killed off the product. Sad and short-sighted on Intel's part.
Unless I'm thinking of the wrong vendor, the Myrinet devices supported programming the NIC. For some applications that was a really big win over IB verbs.
Yes and no. The original firmware / host driver stack was open (shared?) source, called MyriAPI. It was so terrible that it spawned an ecosystem of university and research lab projects to implement replacements. I worked on one such project at Duke (Trapeze). Others included BIP, Fast Messages, Active Messages.
MyriAPI was later replaced by a new firmware / host stack called GM. I think at some point we stopped shipping firmware source. I think that point was the transition from MyriAPI to GM...
I think that's a confusing reference to the fact that InfiniBand initially was supposed to be also a replacement for system-internal buses (i.e. a PCI replacement), before PCIe took that place instead.
I wasn't there, but understood it was intended for general machine-room -- I dislike "datacentres" -- use; Wikipedia implies it was intended for ~everything...
Anyway, IB at least had lower latency than Myrinet, which is what counts in a lot of HPC. I don't remember the numbers, but I got quite close to Myrinet latency with the on-board 1GbE NICs on Sun x2200s, an un-managed switch, and Open-MX, which was interoperable with the Myrinet MX protocol. (I remember MX being slagged off by Mellanox; I don't know how similar MXM later was for IB.)
Some important/big customer pays a maintainer to maintain it upstream instead of having to maintain/port it in-house. If there's two of these, it makes sense. Even with one it would otherwise require backporting. Might as well do it in public?
Why can you not comment? Certainly any NDA you signed must be expired by now.
PSA: If you are not getting continuing payments, any contracts you signed are void. Technically, I gather you are supposed to notify the other party that you are terminating your participation in the contract. Even if you are getting ongoing payments, you can opt out of that, too.
(This is not legal advice. Consult an actual lawyer for anything that matters.)
What about talking about things from 10 years ago, or 15?
The grandparent post is wrong but if OP is being accurate about "wish I could" then the NDAs or severe expectations of not talking are going way too far.
There are many great Power mnemonics. VSX introduced one nearly as good, xxlxor (which, quite reasonably, is a logical XOR between VSX registers/FPRs). It's delightful to try to even pronounce.
The best part is that eieio's derivation is totally plausibly serious from its stated function. (Remember, IBM doesn't have a corporate sense of humour.) It's also an easy, fairly lightweight speculation barrier apart from its official usage.
My favorite PPC instruction is darn. It delivers a random number, either raw or NIST conditioned. We decided to use the same instruction name in an internal RISCV implementation, and fun times have been had complaining about the darn instruction not working.
Here's the thread on the LLVM development lists from when Lanai was proposed to be upstreamed with some good commentary about why it should be accepted even if Google isn't willing to go into details: https://discourse.llvm.org/t/rfc-lanai-backend/39874
Probably shouldn't reveal this but, Google are using it internally for their accelerated networking in their cloud, pretty much all Lanai hardware is deployed in GCP infrastructure.
You should not have posted this, least of all because it's extraordinarily misleading.
We use Lanai elsewhere, and it's in hardware powering Google's server networking (inclusive of GCP), but Lanai sadly does not play a substantial role in Andromeda networking.
Feel free to ping me on corp (username in my profile here) to discuss.
I guess the only genuinely interesting question worth asking (and hoping for an answer to) is, given the back and forth about performance and Myrinet vs IB on here... how well does the design stack up in practice to an Ebit-scale environment?
What explains why Amazon has been relatively open about Nitro but Google has never said anything about how their NICs and SSDs work? It can't be because it's secret sauce; Amazon beats them on latency etc.
Ah, no, the "fairly" was merely that we omitted details and some things have changed. That said, the paper completely captures the use of Lanai in GCP's networking data plane.
Lanai ... sounded familiar. Of course: Myri.
I used these NICs to implement wire speed 10g packet capture. They were a fraction of the cost of dedicated packet capture boards and had a nice api.
These days I guess a RISC-V core would be a perfect match for this sort of application, but back then it seemed every accelerator startup would implement a custom ISA.
Random, tangentially related thing I just remembered: FreeBSD used to provide its own firmware for the (IIRC MIPS-like) core used in very early Broadcom NICs. There also used to be a custom firmware for some Adaptec controllers, along with an assembler tool to compile it during kernel build.
I think you're talking about the Alteon Tigon-II NICs (Alteon was acquired by Broadcom). Ken Merry and I modified Alteon firmware so that it supported zero-copy sockets by doing page-flipping on receive.
All LSI "SCRIPTS" HBAs had firmware as part of the kernel/driver sources on the many platforms where they were used. I believe Linux used a simple assembler, whose sources were included in the kernel, to generate the binaries (if not preprocessor tricks, even). *BSDs definitely had their own solution too.
Btw, the Lanai target in LLVM can be found here: https://github.com/llvm/llvm-project/tree/main/llvm/lib/Targ.... Latest commit is only 24 days ago, so it looks to still be active. Though I'm not sure how much of that is generic target updates, vs target specific changes.
> The CEO capped the full time headcount at 49 people so he wouldn't be subject to a California law requiring him to provide his employees with a health care plan.
> Some of my recent long-term projects revolve around a little known CPU architecture called 'Lanai'. Unsurprisingly, very few people have heard of it, and even their Googling skills don't come in handy.
I don't get this. Searching for [lanai cpu] shows tons of links on the LANai cpu architecture from Myricom, purchased by Google.
I gotta say, if I was the BDFL of llvm, I would kick this out of tree without a second thought. Why on earth should a foss project support a private architecture? How many man-hours have been wasted waiting for the Lanai backend to compile? Not to mention applying project-wide refactoring to the Lanai code.
> I was going to mention it, but you guys know very well the drill, so
> unless this hardware changes fundamental parts of the middle end in
> ways that are unnatural to most other targets (doesn't seem that way),
> then I see no reason why not have it upstream.
> I see no problem with having the backend upstream with the understanding
> that all the normal policies apply. Getting more people working on ToT
> is valuable to the community as a whole and provided it's "just another
> backend" with plenty of tests, the cost is low.
A significant part of work on LLVM comes from Apple, which includes support for quirks of _their_ proprietary chips and platform; and the world is much better for those changes being upstreamed rather than living in a fork somewhere.
There's a big difference between "we are the only people who sell hardware using these chips" and "we are the only people who have access to hardware using these chips".
Removing architectures because they’re obsolete/maintainers can’t be found, I can see (and the llvm maintainers agree. They removed Alpha at some time, for example), but because they’re proprietary? That would mean kicking out x86, x64, ARM, AMD Terascale, IBM’s z/architecture, etc.
AFAIK, by default LLVM builds support for all its supported architectures; I guess you passed -DLLVM_TARGETS_TO_BUILD="X86" or something to CMake when building your LLVM?
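For reference, that CMake variable is real; a typical invocation looks something like this (the source layout and Ninja generator are assumptions about your checkout, not requirements):

```shell
# Configure LLVM to build only the X86 backend; add more targets
# semicolon-separated, e.g. "X86;AArch64". Fewer backends means a
# noticeably shorter build.
cmake -S llvm -B build -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_TARGETS_TO_BUILD="X86"
```

Leaving the variable unset builds every in-tree backend, Lanai included, which is why it shows up in a default build.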
That was my first reaction, too, but then I had a second thought. Google is going to do this work either in-tree or out-of-tree. If it is done in-tree, then there is a possibility that it will lead to core improvements that would benefit all backends. If out-of-tree, those are guaranteed not to happen. I don't think it's a slam dunk, but on balance it could make sense. And once it is in-tree you can actually validate this hypothesis by seeing the impact (or not) of contributions from Google.
How would Lanai compare to a simple RV32I isa/core for this application area? The short description in OP doesn't really clarify what, if anything, might be specifically compelling about this arch.
Lanai is significantly older than RISC-V. These days you'd likely just design or reuse an existing RISC-V core for an application like this, possibly adapting it to your particular use case.
The decision is different, however, if you happen to have the entirety of the Lanai engineering team on board and own the entire Lanai IP portfolio. :)
Reminds me of the various uses that have come of the IP of the https://en.wikipedia.org/wiki/ARC_(processor) — including inside the Intel Management Engine! — since its ignominious start as the SNES SuperFX chip :)
This is https://github.com/TrueBitProject/lanai , which is in itself an interesting endeavor which I forgot to write about. I've added a mention about it to TFA.
For the link, Norton reports: "This is a known dangerous webpage. It is highly recommended that you do NOT visit this page." The "page full report" doesn't reveal any particulars. I rarely get this warning elsewhere.
This is an old domain, and once upon a time some private malware sample links hosted on it got indexed. Years later and some software still think I'm evil :).
Mentor Graphics used to buy up silicon-compiler companies just to shut them down, so that manual layout tools would have a longer shelf life. They finally had to give that up around 1990, and figure out a different business to be in. Amazingly, they did.
Google isn't really forcing anyone to do anything in this case. Another RISC backend that was made clear would be ripped out of upstream the instant the Google maintainers stopped responding isn't really an imposition.
Seems the maintainers were more than happy to accept it, and even had policies in place for such contributions. One maintainer even mentioned it's a good policy because it brings more developers into using LLVM's ToT, which is overall good for project health.
xcore (mentioned in that thread) is pretty obscure and still in trunk last I looked. Extra backends don't carry that much of a maintenance cost, mostly patching them up on api changes. Weirder targets hit bugs that the common ones don't so there's a benefit from having them in tree too.
I'm not sure that generalises beyond modular compilers though.
Given how ridiculously strict the Linux folks are being with the DXGKRNL stuff that MS is working on (which is public as part of WSLg), I would say definitely not.
And other thin veneers over closed source paravirtualized VM graphics acceleration pipes get in as well, even from companies that flagrantly violate Linux's license.
I think that it's a difference of subtree maintainer as to why some Microsoft code gets in and some is fought tooth and nail; you can't treat Linux developers as a monolith, and the graphics side is significantly more Microsoft adverse.
EDIT: Scott's Medium blog post linked from this article is fascinating: https://medium.com/swlh/myricom-an-hpc-story-and-lessons-lea...