"it will use artificial intelligence to analyze groups of articles on a particular story topic and identify the ones most often cited as the original source."
"Facebook will also begin to down-rank news in its algorithm that doesn't have bylines, or present information about the company's editorial staff on the publishers' website."
Okay, those are the new requirements for content mills and fake propaganda outlets. How long before they adapt?
Actually vetting reporters, reportage, and news outlets is really hard even for a team of smart human editors. Even premier organizations like the NYT and the Washington Post, with their armies of editors, have failed at this from time to time. Algorithms are not ready for this task yet.
> "it will use artificial intelligence to analyze groups of articles on a particular story topic and identify the ones most often cited as the original source."
Great idea, they should give it a snappy name, maybe something that rhymes with "stage tank." Of course this does nothing WRT organizations that tend not to cite earlier reportage when it originates outside of the company.
> How long before they adapt?
Why, that would require creating a staff of fake names, so in a lot of cases it'll probably be completed sometime around close of business today. Maybe the end of the week.
Deduping blogspam and re-reporting of AP/Reuters, and using that redundancy to uprank the original source, is something Facebook should have been doing a decade ago.
It should be more akin to https://techmeme.com (or HN, for that matter), where editors try to choose the first or best source. If a better source becomes available, they swap it in. Facebook could benefit from that kind of dynamism, where a story can bump and replace an existing post.
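The dedupe-and-uprank idea could be sketched roughly like this: cluster articles on the same story, then treat the article most often cited by the others as the likely original source. A toy illustration (this is a hypothetical heuristic, not Facebook's or Techmeme's actual logic, and it assumes topic clustering and link extraction already happened upstream):

```python
from collections import Counter

def pick_original_source(cluster):
    """Guess the original source in a cluster of same-story articles
    as the one most often cited by the other articles in the cluster.

    Each article is a dict with 'url' and 'cites' (URLs it links to).
    Toy heuristic for illustration only."""
    urls = {a["url"] for a in cluster}
    citation_counts = Counter()
    for article in cluster:
        for cited in article["cites"]:
            # Only count citations that point at another article in the cluster
            if cited in urls and cited != article["url"]:
                citation_counts[cited] += 1
    if not citation_counts:
        return None  # no internal citations; can't tell who was first
    return citation_counts.most_common(1)[0][0]

cluster = [
    {"url": "wire.example/report",  "cites": []},
    {"url": "blog-a.example/take",  "cites": ["wire.example/report"]},
    {"url": "blog-b.example/rehash", "cites": ["wire.example/report"]},
]
print(pick_original_source(cluster))  # -> wire.example/report
```

Of course, this is exactly where the "organizations that don't cite earlier reportage" problem bites: a rewrite that omits the link contributes nothing to the count.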
For me this is the crux of the issue with The Platforms giving rise to "fake news".
We as a society have decided that rampant misinformation and propaganda is only worth solving if we can automate the solution. If we actually have to pay real people real money to fix it on an ongoing basis, that's just too expensive.
Sure, there are problems with having humans do this work too, but they are still way ahead of AI in this problem space.
How long do we wait for automated solutions while these problems impose real costs to society?
I have doubts that you can do it without heavy automation. Sure, eventually some human can decide whether something is "factual-ish" or not. But producing content is much easier and can be automated, so the attackers can flood the system.
If you want humans involved, you end up with a gatekeeper, which essentially means "unless you're an accredited media organization, your content is considered fake," because you can't vet individual pieces.
I agree with you. It'd take a huge company with tons of resources at its disposal to do something like this, if it's possible at all. But if anyone could hire and train the army necessary to do it, it'd be Google or Facebook. (Apple already does it: Apple News is edited by real Apple-employed humans, but it's far smaller in scope.)
I think real solutions are gonna require us to break out of our tech-focused approaches and find ways to get Google, Facebook, and Twitter to really start to care about fixing this stuff. Unfortunately, I think that means it'll have to start costing them.
>In conjunction with those changes, Facebook will also begin to down-rank news in its algorithm that doesn't have bylines, or present information about the company's editorial staff on the publishers' website.
Well, this is the next logical step for AdTech. Since all of the newspapers and other media outlets have been disrupted by the rest of the AdTech industry, the next step is to let people create content on the platform itself, and then you keep your moat.