SANTA ROSA (CBS SF) — Bernie Stock and his neighbors were on a quest Saturday to find a woman who appeared out of the smoke and flames of the deadly wine country wildfires and helped save their homes. They have named her the ‘Fire Angel’ and have taken to Facebook to find and thank her.

Wearing a medical mask to help block out the smoke, the angel was a woman dressed in black whose sweatshirt appropriately bore the word ‘Savage’ across the front.

“She came out of the smoke out of nowhere and started helping my family save their homes and their neighbors’ homes by bucket brigade!” Stock wrote. “(She) worked tirelessly for over a couple of hours helping them keep the fire, the beast, at bay!”

His neighbor Casey Mae Wells was likewise thankful. “We arrived at our house while she, along with other amazing people, were running back and forth from Elizabeth’s pool to put out the fire across the street,” she wrote on Facebook. “I would love to say thank you from the bottom of our hearts to this woman…As horrific as last Monday was, I’ve never seen so many come together so selflessly to help.”

Teri Stanley wrote that the word emblazoned on the sweatshirt seemed fitting. “Love that her sweatshirt is so appropriate!” she wrote on Facebook. “Savage! She’s pretty fantastic! I hope someone can find her.”

Katie Rose Hanneman wrote of the angel: “Wonder Woman!”

And it appeared Saturday morning that the sweatshirt may have been a lead to the angel’s identity. Jenn Caamano wrote: “OMG this is my friend Susie Savage!!!”

Hopefully, Stock and his neighbors have found their angel. It’s just one of the many #sonomastrong stories about ordinary people who performed extraordinary acts of courage in the community’s darkest hour.
Clickbait stories about Sidney Crosby lead to dodgy fake news sites hawking a supplement.
WASHINGTON (AP) — Senators are moving to boost transparency for online political ads, unveiling on Thursday what could be the first of several pieces of legislation to try to lessen influence from Russia or other foreign actors on U.S. elections. The bill by Democratic Sens. Mark Warner of Virginia and Amy Klobuchar of Minnesota would require social media companies like Facebook and Twitter to keep public files of election ads and meet the same disclaimer requirements as political broadcast and print advertising. Federal regulations now require television and radio stations to make publicly available the details of political ads they air. That includes who runs the ad, when it runs and how much it costs. The bill also would require companies to “make reasonable efforts” to ensure that election ads are not purchased directly or indirectly by a foreign national. The move comes after Facebook revealed that ads that ran on the company’s social media platform and have been linked to a Russian internet agency were seen by an estimated 10 million people before and after the 2016 election. Warner is the top Democrat on the Senate intelligence committee, which is investigating Russian meddling in the 2016 race, and Klobuchar is the top Democrat on the Senate Rules Committee, which oversees elections. The legislation also has support from Republican Sen. John McCain of Arizona, who is the chairman of the Senate Armed Services Committee. Lawmakers on the Senate intelligence panel and other committees investigating Russian influence have said one of the main roles of Congress will be to pass legislation to try to stop the foreign meddling. That’s in contrast to special counsel Robert Mueller, who is also investigating and has the ability to prosecute if he finds any criminal wrongdoing. Other lawmakers are working on legislation to help states detect if foreign actors are trying to hack into their systems. 
That’s after the Department of Homeland Security said that 21 states had their election systems targeted by Russian government hackers. But it’s unclear if Congress will be able to agree on any such legislation amid heightened partisan tensions. Warner and Klobuchar are still trying to woo additional Senate and House Republicans, who have spent much of the year rolling back federal regulations they see as burdensome. McCain, who has for years broken with many of his GOP colleagues on campaign finance laws, said in a statement that he has “long fought to increase transparency and end the corrupting influence of special interests in political campaigns, and I am confident this legislation will modernize existing law to safeguard the integrity of our election system.” Senate Intelligence Committee Chairman Richard Burr, R-N.C., has said he wants to wait until after an upcoming hearing with social media executives from Facebook, Twitter and Google before weighing in on the legislation. Late last month, after Warner first floated the bill, Burr said it was too soon to discuss legislation and that the hearing will “explore for the first time any holes that might exist in social media platform regulation or campaign law.” Another Republican member of the intelligence panel, Oklahoma Sen. James Lankford, said he has concerns about the bill, including that “there is a difference between the public airwaves and privately held fiber, basically, and how it’s managed.” He said the “idea isn’t bad,” but he wants to look at the technical issues. Lankford said he believes there will be several pieces of legislation coming out of the Russia probe, but “whether that’s the first or not, I don’t know.” Announcing the legislation at a news conference, the two Democrats framed the issue as a matter of national security. 
“Russia attacked us and will continue to use different tactics to undermine our democracy and divide our country, including by purchasing disruptive online political ads,” Klobuchar said. “We have to secure our election systems and we have to do it now.” Warner, who has worked closely with Burr on the intelligence panel, has said repeatedly that he hopes the social media companies will work with them on the legislation, which he calls “the lightest touch possible.” The companies have said very little publicly about the bill or the prospect of regulation. Facebook CEO Mark Zuckerberg has said his company will now require political ads to disclose who is paying for them, a move that Warner and Klobuchar said their bill would “formalize and expand.” “We stand with lawmakers in their effort to achieve transparency in political advertising,” Erin Egan, Facebook vice president for United States public policy, said in a statement after Warner and Klobuchar introduced their bill. “We have already announced the steps Facebook will take on our own and we look forward to continuing the conversation with lawmakers as we work toward a legislative solution.” Google also said it supports efforts to “improve transparency, enhance disclosures, and reduce foreign abuse.” The company said it is evaluating steps it can take. Twitter would only say in a statement that “we look forward to engaging with Congress and the FEC on these issues.” The Federal Election Commission regulates campaign finance laws. Lawrence Noble, general counsel for the Campaign Legal Center, a nonpartisan election advocacy group, said that some foreign entities could potentially get around the legislation if it were passed, but it would make it harder for them and put more responsibility on the companies. “There is a difference between them saying they will do something and the law saying they have to do something,” Noble said. © Copyright 2017 The Associated Press. All Rights Reserved.
This material may not be published, broadcast, rewritten or redistributed.
Facebook has come under scrutiny over the use of its platform to display fake news stories and advertisements designed to influence the 2016 U.S. presidential election.
Ten major news publishers, including The Washington Post, The Economist and the Boston Globe, have signed up for the trial.
WASHINGTON — Sen. Amy Klobuchar (D-Minn.) and Sen. Mark Warner (D-Va.) will introduce legislation on Thursday to require greater disclosure of the sources of online political ads, amid reports that Russian sources bought spots on Facebook, Twitter, and other platforms to influence the 2016 election. Sen. John McCain (R-Ariz.) is co-sponsoring the legislation, giving it […]
Simon Fuller’s XIX Entertainment will make an English-language version of “Skam,” the hit teen drama series out of Scandinavia, for Facebook. The Norwegian-language version of “Skam” (“Shame” in English) has run through four seasons on Norwegian pubcaster NRK. Aimed squarely at teen viewers, the show’s unusual production and release pattern sees new content produced daily […]
LONDON (AP) — Silicon Valley is a uniquely American creation, the product of an entrepreneurial spirit and no-holds-barred capitalism that now drives many aspects of modern life. But the likes of Facebook, Google and Apple are increasingly facing an uncomfortable truth: it is Europe’s culture of tougher oversight of companies, not America’s laissez-faire attitude, which could soon rule their industry as governments seek to combat fake news and prevent extremists from using the internet to fan the flames of hatred. While the U.S. has largely relied on market forces to regulate content in a country where free speech is revered, European officials have shown they are willing to act. Germany recently passed a law imposing fines of up to 50 million euros ($59 million) on websites that don’t remove hate speech within 24 hours. British Prime Minister Theresa May wants companies to take down extremist material within two hours. And across the EU, Google has for years been obliged to remove search results if there is a legitimate complaint about the content’s veracity or relevance. “I anticipate the EU will be where many of these issues get played out,” said Sarah T. Roberts, a professor of information studies at UCLA who has studied efforts to monitor and vet internet content. Objectionable content “is the biggest problem going forward. It’s no longer acceptable for the firms to say that they can’t do anything about it.” How closely to manage the massive amounts of content on the internet has become a pressing question in the U.S. since it was revealed that Russian agencies took out thousands of ads on social media during the presidential campaign, reaching some 10 million people on Facebook alone. That comes on top of the existing concerns about preventing extremist attacks. 
This month, three men were arrested after allegedly using smartphone messaging apps to plot attacks on the New York City subway and Times Square from their homes in Canada, Pakistan and the Philippines. The plot was thwarted by an undercover officer, not technology. In some ways it goes to a question of identity. Social media companies see themselves not as publishers but as platforms for other people to share information, and have traditionally been cautious about taking down material. But the pressure is on to act. Facebook, Google, Twitter and YouTube in June created the Global Internet Forum to Combat Terrorism, which says it is committed to developing new content detection technology, helping smaller companies combat extremism and promoting “counter-speech,” content meant to blunt the impact of extremist material. Proponents of counter-speech argue that rather than trying to take down every Islamic State group post, internet companies and governments should do more to promote content that actively refutes extremist propaganda. This approach will unmask the extremist message of hate and violence in the “marketplace of ideas,” they argue, though critics see it as just another form of propaganda. Facebook has recently published details of its counterterrorism strategy for the first time. These include using artificial intelligence to prevent extremist images and videos from being uploaded and algorithms to find and disable accounts linked to pages known to support extremist movements. The company also plans to increase the staff dedicated to reviewing complaints of objectionable material by more than 60 percent to some 8,000 worldwide. “We want Facebook to be a hostile place for terrorists,” Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, said in a statement. 
“The challenge for online communities is the same as it is for real world communities – to get better at spotting the early signals before it’s too late.” But Roberts argues the companies have been slow to react and are trying to play catch up. The fact is the technology needed to detect and remove dangerous posts hasn’t kept up with the threat, experts say. Removing such material still requires judgment, and artificial intelligence is not yet good enough to determine the difference, for example, between an article about the so-called Islamic State and posts from the group itself. In other words, taking down much of this material still needs human input, said Frank Pasquale, an expert in information law and changing technology at the University of Maryland. Acknowledging that is difficult for companies that were built by pushing the boundaries of technology. “They don’t like to admit how primitive their technologies are; it defeats their whole narrative that they can save the world,” Pasquale said. “You kill off the golden goose if you cast doubt over the power of their algorithms.” Employing enough people to fill in where the algorithms leave off would be a massive task given the volume of material posted on social media sites every day. Just imagine trying to moderate every puppy photo or birthday greeting, said Siva Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia. He believes that moderating content is ultimately impossible because you can’t create a system that works for everyone from Saudi Arabia to Sweden. “The problem is the very idea of the social media system — it is ungovernable,” he said. “Facebook is designed as if we are nice to each other. And we’re not.” The U.S. government response has been more focused on policing than regulation, with security services authorized to sweep up huge amounts of electronic data to help them identify violent extremists and thwart attacks. 
Beyond that, authorities have mostly relied on the market to drive change amid fears that heavy-handed regulation could interfere with the First Amendment rights of law-abiding citizens to speak out and exchange information. European courts have had no such qualms, balancing freedom of expression against the right to privacy and community cohesion. For example, the European Court of Justice in 2014 ruled that people have the “right to be forgotten,” permitting them to demand removal of personal data from search results when they can prove there’s no compelling reason for it to remain. As far back as 2000, a French court ordered Yahoo to prevent French internet users from buying Nazi memorabilia on its sites. The European Union’s executive has been most active in matters of antitrust. This year it leveled a huge 2.4 billion euro ($2.8 billion) fine on Google and ordered it to change the way it does business, for example how it shows search results. “There’s a real cultural divide,” said Edward Tenner, author of the upcoming book “The Efficiency Paradox: What Big Data Can’t Do.” “European governments have been more committed to incorporating the ideas of social justice and the Americans have been much more on the libertarian side.”
Facebook has acquired the anonymous social networking app tbh for an undisclosed amount, and intends to continue to operate it as a standalone app. The acquisition was first reported by Techcrunch Monday, which put the acquisition price at around $100 million. tbh allows users to create polls based on set questions, including “who makes you laugh […]
MENLO PARK (AP) — Facebook’s effort to limit the spread of fake news using outside fact-checkers appears to be having an effect — though that finding comes with a major caveat. Once a story receives a false rating from a fact-checker, Facebook says, subsequent “impressions” can fall off by 80 percent. Impressions count the number of times Facebook users see a particular post. But it routinely takes more than three days for a story to receive a false rating. And most impressions occur when the story first comes out, not three days later. That’s the case with both true news and fake news. The information was shared in an email from a Facebook manager sent to the company’s fact-checking partners, including The Associated Press. Facebook gave an AP reporter access to the email.