Hacker News

If the climate is such that the employees are so passionate about politics, is it at all possible that zero employees have their thumb on the scale in terms of using their position to nudge towards their desired election result?

That seems like a bigger issue. If I am an activist and I poison the enormous dataset that's being fed to a ML model, is anyone even going to notice?



They would notice the size of the ETL job necessary to do this (and honestly, I don't think anyone understands the individual-level outputs of any large ML model well enough to accomplish this).


Evidence suggests the opposite: they would write a self-congratulatory blog post about it.

For example, the work Google does on "de-biasing AI" is all about taking ML models and warping their understanding of the world to reflect ideological priorities.

https://arxiv.org/abs/1607.06520


That paper (like all other work in this space) is about population-level inferences from the model.

My point is that the individual-level outputs (which you'd need to accomplish what the OP was talking about) are essentially impossible to tune so precisely, given our current understanding of the models.


I'm reading a leaked internal Facebook document published on theverge.com, in which it's suggested that they build a "troll classifier" based on the use of words like "reeee", "normie", "IRL", "Shadilay", and "lulz".

They have also suggested a "meme cache" - one of the memes shown is a Folgers coffee cup that says "Best part of waking up, Hillary lost to Trump".

Based on this classifier and hits to the meme cache, "trolls" would experience things like auto-logout and limited bandwidth.

Under "when to trigger this" they also suggest the period "Leading upto elections".
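To make the criticism below concrete, here is a minimal sketch of what a keyword-based classifier like the one the document describes might look like. The term list comes from the leaked document as quoted above, but the scoring logic, threshold, and function names are entirely hypothetical, my own illustration rather than anything Facebook is known to have built:

```python
# Hypothetical sketch of a keyword-based "troll classifier".
# The term list is quoted from the leaked document; the scoring
# and the threshold are illustrative assumptions only.
TROLL_TERMS = {"reeee", "normie", "irl", "shadilay", "lulz"}

def troll_score(post: str) -> int:
    """Count how many flagged terms appear in a post (case-insensitive)."""
    words = {w.strip(".,!?\"'") for w in post.lower().split()}
    return len(words & TROLL_TERMS)

def is_flagged(post: str, threshold: int = 2) -> bool:
    """Flag a post if it uses at least `threshold` of the listed terms."""
    return troll_score(post) >= threshold
```

Even this toy version shows the obvious failure mode: it keys on vocabulary rather than behavior, so any post using common words like "IRL" in an innocent context contributes to a "troll" score.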

So on the one hand, this document seems well-intentioned, because there is some bad behavior in these groups, like raiding, doxxing, and racism.

On the other hand, rather than focusing on behaviour like doxxing and raids, the suggested approach seems to be directed at a specific group. Why? Is this the only group in the entire universe that engages in this kind of behaviour?

It also applies a broad classification, subjecting anyone who shares the same memes or vocabulary to punitive action.

They also tie this to the election period, which seems especially puzzling.


Would you provide a link to that Verge article? I cannot find it.



