2016-12-19


The Holocaust memorial is a fact; so was the Holocaust. What’s Google’s problem with reflecting that? Photo by Alessio Maffeis on Flickr.

You can now sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.

A selection of 11 links for you. Try that for size. I’m @charlesarthur on Twitter. Observations and links welcome.

On Twitter, a battle among political bots • The New York Times

Amanda Hess: One of Twitter’s vilest subcultures is its collection of minstrel accounts, which impersonate Jews and people of color in order to mock and discredit them. These accounts steal avatars from real people, give themselves fake ethnic names and spew racism that’s then boosted by a network of tittering racist tweeters. @ImposterBuster, a bot unleashed on Twitter last month by the Tablet writer Yair Rosenberg and the developer Neal Chandra, is designed to hunt them down. Mr. Rosenberg and Mr. Chandra have compiled a database of known minstrel accounts and have put @ImposterBuster on their trail. The bot tracks their every move on Twitter and replies automatically to their tweets, exposing racists and alerting other users to their subterfuge.

Another bot with a predatory instinct, @EveryTrumpette, is a visual variation on the @EveryTrumpDonor theme. Every few hours, it pulls up a photo from a Trump rally, then uses a facial-recognition algorithm to scan the crowd and zoom in on one person’s face. The resulting videos are scored with quotations from Mr. Trump himself. The bot’s creator has contended that its purpose is empathic connection: the bot is designed to examine Trump supporters, “one by one, to try and see the humanity.” But its effect is combative, even unnerving. It implies that whether online or at a rally, supporters will not be shielded by the anonymizing cloak of the crowd.
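The mechanics Hess describes for @ImposterBuster — a hand-compiled database of known impostor accounts, plus an automatic reply to each of their tweets — reduce to a lookup and a templated message. A minimal sketch, assuming nothing about the real bot’s code (the handles and reply wording below are invented for illustration):

```python
# Hypothetical blocklist; the real bot uses a curated database of
# known minstrel accounts compiled by Rosenberg and Chandra.
KNOWN_IMPOSTORS = {"fake_handle_1", "fake_handle_2"}

def build_reply(tweet_author, known_impostors):
    """Return warning text if the tweet's author is on the blocklist,
    or None if the bot should stay silent."""
    if tweet_author.lower() in known_impostors:
        return ("FYI: @{} is a known impostor account using a stolen "
                "avatar and a fake name.".format(tweet_author))
    return None  # not a known impostor; no reply
```

In a running bot, this check would be wired to a Twitter stream of the tracked accounts’ tweets, with the returned text posted as a reply; here it is just the decision logic.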

link to this extract

How to bump Holocaust deniers off Google’s top spot? Pay Google • The Guardian

Carole Cadwalladr, who has previously exposed how Google’s Autocomplete has been captured by right-wing sites spreading false propaganda:

»

[until Friday] anyone searching for information about the Holocaust – if it was real, if it happened, if it was a hoax, if it was fake – was being served up neo-Nazi propaganda as the top result.

Until Friday. When I gamed Google’s algorithm. I succeeded in doing what Google said was impossible. I, a journalist with almost zero computer knowhow, succeeded in changing the search order of Google’s results for “did the Holocaust happen” and “was the Holocaust a hoax”. I knocked Stormfront off the top of the list. I inserted Wikipedia’s entry on the Holocaust as the number one result. I displaced a lie with a fact.



How did I achieve this impossible feat? Not through writing articles. Or shaming the company into action. I did it with the only language that Google understands: money. Google has shown that it will not respond to outrage or public sentiment or any sense of morality or ethics. It does not accept that leading people with a genuine inquiry about whether the Holocaust happened to a neo-Nazi website is grossly irresponsible or that it demeans the memory of the six million Jews who died. But it was prepared to take my cold, hard cash. A Google spokesman said: “We never want to make money from searches for Holocaust denial, and we don’t allow regular advertising on those terms.”

And yet, it has already made £24.01 out of me. (This was the initial cost – it has since risen to £289.) Because this is what I did: I paid to place a Google advert at the top of its search results. “The Holocaust really happened,” I wrote as the headline to my advert. And below it: “6 million Jews really did die. These search results are propagating lies. Please take action.”

«

Cadwalladr is fighting a terrific fight, and this demonstration that Google will take your money to say what you like, and put it at the top of the results, is an excellent jab to the ribs. (That “Ad” icon really isn’t very noticeable, is it, in the same green as the text of the link? I actually missed it the first time.)

For Google, this opens the slippery slope where all results become paid. There’s also a parallel article by Olivia Solon and Sam Levin: “Google’s search algorithm spreads false information with a rightwing bias”, which points out that

»

Google’s search algorithm appears to be systematically promoting information that is either false or slanted with an extreme rightwing bias on subjects as varied as climate change and homosexuality.

Following a recent investigation by the Observer, which found that Google’s search engine prominently suggests neo-Nazi websites and antisemitic writing, the Guardian has uncovered a dozen additional examples of biased search results.

«

It’s good that someone is still holding Google’s feet to, well, the blow heater (if not the fire) over this. My one quibble would be that the headline on the second story oversells it; we don’t really know (though there are strong suspicions). Google needs to explain itself, rather better than the boilerplate response it gives at the end of the story. (I’ll bet that there were lots of anxious requests for “background chats” and “our view” from Google to Solon and Levin.)

There need to be more stories like this from more publications: it’s important people understand that Google is not a neutral platform, and isn’t promoting truth, just rankings.

link to this extract

Uber might self-certify its own autonomous cars to carry the public • Car and Driver

Mark Harris:

»

in May, Otto carried out an unlicensed public demonstration of a driverless semi in Nevada, despite being warned by the DMV that it would contravene the state’s rules regarding autonomous testing. The truck drove on Interstate 80 near Reno for several miles with no human driver in the front seats. A DMV official called the stunt illegal and threatened to shut down the agency’s AV program, but under Nevada’s current regulations there are no legal or financial penalties for breaking the rules.

Otto’s runaround of the regulations could have come back to haunt the company.

One of the DMV’s regulation documents says, “Evidence of the unfitness of an applicant to operate an ATCF includes . . . willfully failing to comply with any regulation adopted by the Department.” Another says, “The Department may . . . deny a license to an applicant, upon the grounds of willful failure of the applicant . . . to comply with the provisions of . . . any of the traffic laws [or regulations] of this State.”

Instead, the DMV granted Otto an ATCF license within days of receiving its application. The only company to have flouted Nevada’s autonomous vehicle rules is now the only company licensed to certify itself and other companies wishing to test autonomous technologies.

Jude Hurin, the DMV administrator who had termed Otto’s drive illegal, confirmed that Uber can now certify its own vehicles for public use.

«

Along with California, that makes two states where Uber has flouted rules to run self-driving vehicles. Amazing.
link to this extract

The media is a business and journalism is a job. Get it together. • Medium

Aram Zucker-Scharff is a developer at Salon.com:

»

Facebook has promised that the message will have “a link to the debunking post, and News Feed stories and the status composer will, if users are about to share a dubious link, have links to the fact-checkers’ work.” Yet that message isn’t on display in the demo on Facebook’s announcement. It seems whatever footprint the links out will have, it won’t be much. Facebook doesn’t consider it even worth previewing in their demo.

More of an issue is the tendency of Facebook users to share and interact with articles without ever clicking through. If they’re not going to click on the article, why would users click on a link to something disproving the article (especially when that link seems to be two or more user actions deep)? They won’t.

There is no benefit to the news organizations that have volunteered for the endless, immense and ultimately futile job of fact-checking Facebook. Even if all the things Facebook has apparently promised were true, it doesn’t matter because the huff and puff over fake news on Facebook is flawed.

It seems likely that most fake news enters Facebook organically, so it gets posted by numerous people before it gets seen on the news feed. Even if that post is somehow blocked, that’s plenty of people who are taking in false news without ever entering Facebook.

«

link to this extract

Snapchat has had deal talks with Lily Robotics, Narrative • Business Insider

Biz Carson:

»

Over the past year or so, the company looked at a number of startups building drones, wearable cameras, and augmented reality/virtual reality applications, according to multiple sources familiar with its M&A strategy.

For example, Snap Inc. has talked with Berkeley-based drone company, Lily Robotics, over the last few months. No deal is on the table, according to multiple people familiar with the matter, but that doesn’t mean it’s ruled out in the future either.

Snapchat also talked with wearable camera company Narrative about an acquisition, according to other people familiar with the situation. The talks also fell through with the Sweden-based company, which briefly shut down its operations before recently starting up again.

Both deal talks point to the newly-rebranded company’s investment in its new mission statement: “Snap Inc. is a camera company.”

«

Watch closely. Snap(chat) is growing while most people don’t notice.
link to this extract

DeepBach: a Steerable Model for Bach chorales generation • ArXiv

Gaëtan Hadjeres and François Pachet:

»

This paper introduces DeepBach, a statistical model aimed at modeling polyphonic music and specifically four parts, hymn-like pieces. We claim that, after being trained on the chorale harmonizations by Johann Sebastian Bach, our model is capable of generating highly convincing chorales in the style of Bach. We evaluate how indistinguishable our generated chorales are from existing Bach chorales with a listening test. The results corroborate our claim. A key strength of DeepBach is that it is agnostic and flexible. Users can constrain the generation by imposing some notes, rhythms or cadences in the generated score. This allows users to reharmonize user-defined melodies. DeepBach’s generation is fast, making it usable for interactive music composition applications. Several generation examples are provided and discussed from a musical point of view.

«

And enjoy the YouTube video.

Also: see if you can tell Bach from the machine-generated version.

Basically, in a year or two we’re going to have Muzak generated entirely by AI.
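The “steerable” part of the abstract — users impose certain notes and the model fills in the rest — can be caricatured as constrained resampling: fixed positions never change, free positions get resampled again and again. A toy sketch, with a random note-picker standing in for DeepBach’s trained network (everything below is illustrative, not the paper’s actual code):

```python
import random

SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # toy note vocabulary

def generate(length, constraints, steps=100, seed=0):
    """Generate a note sequence of `length`, honouring `constraints`,
    a dict mapping position -> note that must be kept."""
    rng = random.Random(seed)
    # Initialise: constrained positions get their imposed note.
    seq = [constraints.get(i, rng.choice(SCALE)) for i in range(length)]
    for _ in range(steps):
        i = rng.randrange(length)
        if i not in constraints:        # never overwrite imposed notes
            seq[i] = rng.choice(SCALE)  # resample this position
    return seq

# Pin the first and last notes to C, let the "model" fill the rest.
melody = generate(8, {0: "C", 7: "C"})
```

In DeepBach the resampling draw comes from a network trained on Bach chorales (conditioned on the surrounding notes), which is what makes the output sound plausible rather than random; the constraint-preserving loop is the same shape.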
link to this extract

Super Mario Run earnings projections downgraded significantly by SuperData • GameSpot

Eddie Makuch:

»

SuperData updated its projections for Super Mario Run today, saying it now expects the game to bring in between $12m and $15m in its first month. That’s down significantly from the company’s previous first-month forecast of $60m.

SuperData said the game’s always-online requirement is “prohibitive” and added that it expects Nintendo to drop the price of the one-time payment after the holidays.

“Based on the early numbers we see coming in and the response from consumers, we expect Super Mario Run to initially earn on the lower end of our forecast, around $12-15M in its first month,” it said. “Requiring to ‘always be online’ is prohibitive and the game is still a bit too heavy-handed for quick-play on a phone. Finally, we anticipate Nintendo to announce a discount after the holidays to keep momentum.”

«

Cut their forecast by 75%? Yeah, that’s an update. (None of this indicates any mistake by Nintendo – we don’t know what its internal forecasts were.)
link to this extract

AI snake oil (part 1): the golden lunar toilet • The Logorrhean Theorem

Dan Simonson:

»

What I do want to describe is how to tell if someone is trying to sell you AI snake oil—bullshit claims on what they can actually achieve in a realistic time and budget. Sure, with infinite resources, I could build you a gold toilet on the moon, but no one has that kind of cash lying around. Shit needs to get done, and the time and material for doing so is finite.

If you’re approached by someone trying to sell you artificial intelligence-related software, or you read a piece in the popular press about what profession AI will uncannily crush in the next year, these are the questions you should ask. Depending on the answers, you can determine whether they’re bluffing or whether they’ve done their homework and are worth taking seriously.

I was originally going to make this one post, but it’s grown too large to fit into one. In this series, each post is centered around a question you should ask when someone wants to do something in the real world with natural language processing, machine learning, or other AI components.

«

Get on top of these.
link to this extract

Now you can fact-check Trump’s tweets — in the tweets themselves • The Washington Post

Philip Bump:

»

There was nothing illegal at play, and Donna Brazile wasn’t the head of the Democratic National Committee at the time that she leaked town hall questions to the Hillary Clinton campaign.

Weigel wrote a whole post about the issue — but people who just click through to the link see only Trump’s claim, and none of the context.

Unless, of course, they’ve installed our extension for Google Chrome.

We made a tool that slips a bit more context into Trump’s tweets. It’s still in the early stages, but our goal is to provide additional context where needed for Trump’s tweets moving forward (and a few golden oldies). For example, here’s what it shows in relation to that Trump tweet.



Still not perfect — but at least readers will see more information without having to read Weigel’s full post (though they should, of course).

«

Neat idea: news organisations both building loyalty (install our browser extension!) and consolidating their message. (Too much to ask for a Safari version?)
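Under the hood, an extension like this presumably comes down to matching tweets against a hand-curated table of context notes and injecting the note into the page. A hypothetical sketch of that lookup step (the tweet id and note text are invented, and the real extension would be written in JavaScript as a Chrome content script, not Python):

```python
# Hand-curated table: tweet id -> context note. Entries are made up
# for illustration; the Post's actual data isn't public here.
CONTEXT_NOTES = {
    "800000000000000001": ("Nothing illegal occurred, and Brazile was "
                           "not DNC chair at the time of the leak."),
}

def annotate(tweet_id, tweet_text, notes=CONTEXT_NOTES):
    """Return the tweet text, with a context note appended when the
    curated table has one for this tweet; otherwise unchanged."""
    note = notes.get(tweet_id)
    if note is None:
        return tweet_text
    return "{}\n[Context: {}]".format(tweet_text, note)
```

The interesting editorial work is in maintaining the table, not the code: every annotated tweet needs a human-written, fact-checked note.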
link to this extract

Technology: the cause of and solution to democracy’s problems • Alphr

Nicole Kobie:

»

Stella Creasy, MP for Walthamstow, has long been one of the most active parliamentarians online, but says that in the three or four days following last year’s vote on what military action to take in Syria, she had 12,500 tweets and as many Facebook messages. “It’s absolutely impossible to engage with that level of volume – or the level of anger people were bringing to it,” she said, speaking at a Future Parliament event organised by the Hansard Society.

Sites such as 38 Degrees and Change.org that make it easier to hassle your MP via email aren’t helping. Creasy simply ignores them. “I had to stop responding to mass emails in 2011 because I could do literally nothing but [reply],” Creasy said. “The signals are no longer there, it’s just noise… I cannot deal with the volume of it.”

And that’s a problem: there’s no better way to snuff out political passion in a person than for them to attempt to get involved and see nothing come of their effort. That’s why Creasy points to the parliamentary petitions system as another “bad example of engagement”. Many people believe that if an e-petition gets 100,000 signatures, MPs are forced to have a debate on the topic. They’re not; the government need only post a text response if it doesn’t think it’s worth spending the time in parliament. “We need some honesty with people,” Creasy said.

The widely held belief that social media and digital tools let us have a real discussion is false…

…the problem isn’t only the volume of digital engagement, but the quality of it. If you want to sway an MP’s mind on a subject, online activist groups would do better to ditch the spam and instead crowdsource research reports on a topic, delivering a fact-checked package of data and suggestions to politicians and journalists to help sway policy decisions.

«

link to this extract

Evernote CEO explains why he reversed its new privacy policy: “we screwed up” • Fast Company

Emily Price:

»

Evernote is reversing its decision to implement a controversial privacy policy change on January 23rd because it “screwed up” its explanation of the change, says CEO Chris O’Neill. Originally announced Wednesday, the policy appeared to imply that Evernote employees would have unfettered access to users’ private notes on the service, something the company claims was never actually the case.

“We screwed up, and I want to be really clear about that,” Evernote CEO Chris O’Neill told Fast Company seconds after getting on the phone for an interview late Thursday afternoon. “We let our users down, and we let our company down.”

O’Neill says that the company screwed up when it came to the way it communicated and explained the new policy, and that the headlines being written about the change were “just not true.”

“Human beings don’t read notes without people’s permission. Full stop. We just don’t do that,” says O’Neill, noting that there’s an exception for court-mandated requests. “Where we were ham-fisted in communicating is this notion of taking advantage of machine learning and other technologies, which frankly are commonplace anywhere in the valley or anywhere you look in any tech company today.”

«

link to this extract

Errata, corrigenda and ai no corrida: none notified

Filed under: links
