Wednesday, January 11, 2017

Online and Scared


Credit: Corinna Kern for The New York Times

And so it came to pass that in the winter of 2016 the world hit a tipping point that was revealed by the most unlikely collection of actors: Vladimir Putin, Jeff Bezos, Donald Trump, Mark Zuckerberg and the Macy’s department store. Who’d have thunk it?
And what was this tipping point?
It was the moment when we realized that a critical mass of our lives and work had shifted away from the terrestrial world to a realm known as “cyberspace.” That is to say, a critical mass of our interactions had moved to a realm where we’re all connected but no one’s in charge.
After all, there are no stoplights in cyberspace, no police officers walking the beat, no courts, no judges, no God who smites evil and rewards good, and certainly no “1-800-Call-If-Putin-Hacks-Your-Election.” If someone slimes you on Twitter or Facebook, well, unless it is a death threat, good luck getting it removed, especially if it is done anonymously, which in cyberspace is quite common.
And yet this realm is where we now spend increasing hours of our day. Cyberspace is now where we do more of our shopping, more of our dating, more of our friendship-making and sustaining, more of our learning, more of our commerce, more of our teaching, more of our communicating, more of our news-broadcasting and news-seeking and more of our selling of goods, services and ideas.
It’s where both our president-elect and the leader of ISIS can communicate with equal ease with tens of millions of their respective followers through Twitter — without editors, fact-checkers, libel lawyers or other filters.
And, I would argue, 2016 will be remembered as the year when we fully grasped just how scary that can be — how easy it was for a presidential candidate to tweet out untruths and half-truths faster than anyone could correct them, how cheap it was for Russia to intervene on Trump’s behalf with hacks of Democratic operatives’ computers and how unnerving it was to hear Yahoo’s chief information security officer, Bob Lord, say that his company still had “not been able to identify” how one billion Yahoo accounts and their sensitive user information were hacked in 2013.
Even President Obama was taken aback by the speed at which this tipping point tipped. “I think that I underestimated the degree to which, in this new information age, it is possible for misinformation, for cyberhacking and so forth, to have an impact on our open societies,” he told ABC News’s “This Week.”
At Christmas, Amazon.com taught yet more traditional retailers how hard the cybertipping point has hit retailing. Last week, Macy’s said it was slashing 10,000 jobs and closing dozens of stores because, according to The Wall Street Journal, “Macy’s hasn’t been able to solve consumers’ shift to online shopping.”
At first Zuckerberg, the Facebook founder, insisted that fake news stories carried by Facebook “surely had no impact” on the election and that saying so was “a pretty crazy idea.” But in a very close election it was not crazy at all.
Facebook — which wants all the readers and advertisers of the mainstream media but not to be saddled with its human editors and fact-checkers — is now taking more seriously its responsibilities as a news purveyor in cyberspace.
Alan S. Cohen, chief commercial officer of the cybersecurity firm Illumio (I am a small shareholder), noted in an interview on siliconAngle.com that the reason this tipping point tipped now was because so many companies, governments, universities, political parties and individuals have concentrated a critical mass of their data in enterprise data centers and cloud computing environments.
Ten years ago, said Cohen, bad guys did not have the capabilities to get at all this data and extract it, but “now they do,” and as more creative tools like big data and artificial intelligence get “weaponized,” this will become an even bigger problem. It’s a huge legal, moral and strategic problem, and it will require, said Cohen, “a new social compact” to defuse.
Work on that compact has to start with every school teaching children digital civics. And that begins with teaching them that the internet is an open sewer of untreated, unfiltered information, where they need to bring skepticism and critical thinking to everything they read and basic civic decency to everything they write.
A Stanford Graduate School of Education study published in November found “a dismaying inability by students to reason about information they see on the internet. Students, for example, had a hard time distinguishing advertisements from news articles or identifying where information came from. … One assessment required middle schoolers to explain why they might not trust an article on financial planning that was written by a bank executive and sponsored by a bank. The researchers found that many students did not cite authorship or article sponsorship as key reasons for not believing the article.”

Prof. Sam Wineburg, the lead author of the report, said: “Many people assume that because young people are fluent in social media they are equally perceptive about what they find there. Our work shows the opposite to be true.”
In an era when more and more of our lives have moved to this digital realm, that is downright scary.


Thursday, May 26, 2016

Google Prevails as Jury Rebuffs Oracle in Code Copyright Case

Larry Page, co-founder of Google and chief executive of Alphabet. Credit: Daniel Acker/Bloomberg
A jury ruled in favor of Google on Thursday in a long legal dispute with Oracle over software used to power most of the world’s smartphones.
Oracle contended that Google used copyrighted material in 11,000 of its 13 million lines of software code in Android, its mobile phone operating system. Oracle asked for $9 billion from Google. Google said it made fair use of that code and owed nothing.
The victory for Google cheered other software developers, who operate much the way Google did when it comes to so-called open-source software. Unlike traditional software created by corporations and tightly held, open-source products are released, often with some restrictions, for anyone to use and modify.
“Great news for progress and innovation,” Chris Dixon, a technology investor with Andreessen Horowitz, the venture capital firm, posted on Twitter after the verdict.
Android relies in part on Java, an open-source software language that Oracle acquired when it bought Sun Microsystems for $7.4 billion in 2010. Oracle argued that Google executives violated Oracle’s copyright by using aspects of Java without permission.
William Fitzgerald, a Google spokesman, said in a statement that the verdict “represents a win for the Android ecosystem, for the Java programming community and for software developers who rely on open and free programming languages to build innovative consumer products.”
The courtroom fight was something of a watershed for technology and could offer clarity on legal rules surrounding open-source technology, which is used in everything from smartphones and digital recording devices to the software that runs many of the world’s biggest data centers.
People who work with open-source technology worried that a victory for Oracle would have led other companies to make similar demands of open-source products.
“It does give a lot of breathing room to other companies and individuals trying to do a lot of innovative activity,” said Parker Higgins, director of copyright activism at the Electronic Frontier Foundation, a digital rights advocacy group.
While there is an expectation in open-source projects that the software tweaks of others will be given back to the community working on the software, open source often requires a license as well.
But where software licenses typically forbid touching code or sharing code with anyone, open-source licenses usually insist on sharing. They detail what can and cannot be used by other companies in their products. And they often require people to share their work with other developers.
The idea is that, collectively, people working at many companies or even out of their homes or college dorms can build better technology than what is created behind the closed doors of one corporation.
From the start, this was a trial neither side intended. Oracle first sued Google in 2010, accusing it of patent and copyright violations in Android. The outcome of that case, which was decided in 2012, was largely favorable to Google.
But in 2014, a federal appeals court found that certain parts of Java were protected by copyright, providing Oracle with fresh ammunition. When the Supreme Court refused to hear an appeal of that decision last year, the case was sent back to the lower courts to hear the copyright aspect of the case again.
In this iteration of the courtroom fight, Eric E. Schmidt, executive chairman of Alphabet, Google’s parent company, testified that Sun knew Google was using Java and approved of that use even though Google did not obtain a license. Jonathan Schwartz, who was chief executive of Sun before Oracle bought it, backed up that view, and a blog post he wrote praising Android was a major piece of evidence in the trial.
Oracle provided a series of emails and meeting documents that countered that view, suggesting that Larry Page, a founder of Google and chief executive of Alphabet, had pressed the Android team to develop the product quickly. Mr. Page denied the suggestion on the stand.
The particular areas of copyright protection in Java involved the so-called declaring code in Application Programming Interfaces, or A.P.I.s, which have become the common way that networked programs on the Internet share data.
Declaring code establishes the standards and meanings by which future lines of software (the actual effects the software seeks to create) will operate. This distinction compelled the 10 jurors (eight women and two men) to hear extensive testimony from engineers and economists about the nature of code and the copyright implications of this type of creativity.
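The distinction can be seen in a few lines of Java. In the sketch below (a hypothetical illustration; `MathApi` and `max` are invented names, not code from the case), the method signature is declaring code and the body is implementing code:

```java
// Hypothetical illustration -- "MathApi" and "max" are invented names,
// not actual Java SE source. The *declaring code* of an API names a
// method, its parameters and its return type: the contract that other
// programs are written against.
public final class MathApi {

    // Declaring code: the signature on the next line. Declarations
    // like this were the portion of the Java A.P.I.s at issue.
    public static int max(int a, int b) {
        // Implementing code: the body that actually does the work.
        // Google wrote its own implementations for Android; the
        // dispute concerned reuse of the declarations, not the bodies.
        return (a >= b) ? a : b;
    }

    public static void main(String[] args) {
        // A caller depends only on the declared name and types, not on
        // how the body happens to be written.
        System.out.println(max(3, 7)); // prints 7
    }
}
```

Because callers depend only on the declarations, a compatible set of declarations lets existing Java programmers write for a new platform even when every method body underneath has been rewritten.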
Dorian Daley, Oracle’s general counsel, said the company planned to appeal. “We strongly believe that Google developed Android by illegally copying core Java technology to rush into the mobile device market,” she said in a statement. “Oracle brought this lawsuit to put a stop to Google’s illegal behavior.”
Some lawyers cautioned against viewing the verdict as a green light for the type of software development Google performed, saying that the earlier federal appeals court decision validated the idea that A.P.I.s can be copyrighted.
“I don’t think the industry can sit back and rely on this decision and exhale and say these things won’t be protected,” said Christopher Carani, a lawyer at McAndrews, Held & Malloy. “I think what you’re still going to see is a lot more attention paid to securing approval to use other copyrights before the fact.”
John Bergmayer, a senior staff attorney at Public Knowledge, a consumer rights group, cheered the verdict in a statement, but said he remained troubled by the implications of the earlier court decision. “Other courts of appeal should reject the Federal Circuit’s mistaken finding of copyrightability,” he said. “For now, though, the jury’s verdict is a welcome dose of common sense.”
Andy Rubin, who led the Android project at Google, worked at Apple early in his career and later developed a type of multifunction phone, which had a Java license. Oracle’s executive chairman, Lawrence J. Ellison, who appeared in video testimony, was friends with Steve Jobs, who led development of the iPhone, and Scott McNealy, a founder and the chief executive of Sun before Mr. Schwartz.
While the jury may now rest, the court fight will probably continue. The case could go to the Supreme Court, though it was unclear whether the court would rule definitively on copyright, said Pamela Samuelson, a professor at the School of Law and the School of Information at the University of California, Berkeley. “They don’t usually like to go against what the appeals court established,” she said.


Thursday, December 10, 2015

A Learning Advance in Artificial Intelligence Rivals Human Abilities


Humans and machines were given an image of a novel character (represented atop each grid) and then asked to copy it. Credit: Brenden Lake

Computer researchers reported advances on Thursday that surpassed human capabilities for a narrow set of vision-related tasks.

The improvements are noteworthy because so-called machine-vision systems are becoming commonplace in many aspects of life, including car-safety systems that detect pedestrians and bicyclists, as well as in video game controls, Internet search and factory robots.
Researchers at the Massachusetts Institute of Technology, New York University and the University of Toronto reported a new type of “one shot” machine learning on Thursday in the journal Science, in which a computer vision program outperformed a group of humans in identifying handwritten characters based on a single example.
The program is capable of quickly learning the characters in a range of languages and generalizing from what it has learned. The authors suggest this capability is similar to the way humans learn and understand concepts.
The new approach, known as Bayesian Program Learning, or B.P.L., is different from current machine learning technologies known as deep neural networks. Neural networks can be trained to recognize human speech, detect objects in images or identify kinds of behavior by being exposed to large sets of examples.
Although such networks are modeled after the behavior of biological neurons, they do not yet learn the way humans do, acquiring new concepts quickly. By contrast, the new software program described in the Science article is able to learn to recognize handwritten characters after “seeing” only a few or even a single example.
The researchers compared the capabilities of their Bayesian approach and other programming models using five separate learning tasks that involved a set of characters from a research data set known as Omniglot, which includes 1,623 handwritten character sets from 50 languages. Both images and pen strokes needed to create characters were captured.
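The idea of classifying from a single example can be illustrated with a toy nearest-neighbor sketch. This is deliberately simplified and is not the authors’ B.P.L. method, which builds generative stroke-level programs for each character; it only shows the one-example-per-class setup the tasks share:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of one-shot classification: given exactly one example per
// character class, label a new drawing by nearest neighbor in feature
// space. NOT the Bayesian Program Learning model from the paper --
// just a minimal illustration of learning from a single example.
public class OneShotSketch {

    // Euclidean distance between two feature vectors (e.g. pixel
    // intensities or pen-stroke features, both captured in Omniglot).
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // One training example per class; classify by the closest one.
    static String classify(Map<String, double[]> oneExamplePerClass,
                           double[] query) {
        String best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Map.Entry<String, double[]> e : oneExamplePerClass.entrySet()) {
            double d = distance(e.getValue(), query);
            if (d < bestDist) {
                bestDist = d;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, double[]> examples = new HashMap<>();
        examples.put("alpha", new double[]{1.0, 0.0, 0.2});
        examples.put("beth",  new double[]{0.0, 1.0, 0.8});
        // A new drawing whose features sit closer to the single
        // "alpha" example than to the single "beth" example.
        System.out.println(classify(examples, new double[]{0.9, 0.1, 0.3})); // prints alpha
    }
}
```

The gap the researchers highlight is that a nearest-neighbor rule like this only memorizes; their model instead infers how a character is drawn, which is what lets it generalize from one example the way people do.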
“With all the progress in machine learning, it’s amazing what you can do with lots of data and faster computers,” said Joshua B. Tenenbaum, a professor of cognitive science at M.I.T. and one of the authors of the Science paper. “But when you look at children, it’s amazing what they can learn from very little data. Some comes from prior knowledge and some is built into our brain.”
Also on Thursday, organizers of an annual academic machine vision competition reported gains in lowering the error rate in software for finding and classifying objects in digital images.


Three researchers who have created a computer model that captures humans’ unique ability to learn new concepts from a single example: from left, Ruslan Salakhutdinov, Brenden M. Lake and Joshua B. Tenenbaum. Credit: Alain Decarie for The New York Times

“I’m constantly amazed by the rate of progress in the field,” said Alexander Berg, an assistant professor of computer science at the University of North Carolina, Chapel Hill.
The competition, known as the Imagenet Large Scale Visual Recognition Challenge, pits teams of researchers at academic, government and corporate laboratories against one another to design programs to both classify and detect objects. It was won this year by a group of researchers at the Microsoft Research laboratory in Beijing.
The Microsoft team was able to cut the number of errors in half in a task that required their program to classify objects from a set of 1,000 categories. The team also won a second competition by accurately detecting all instances of objects in 200 categories.
The contest requires the programs to examine a large number of digital images, and either label or find objects in the images. For example, they may need to distinguish between objects such as bicycles and cars, both of which might appear to have two wheels from a certain perspective.
In both the handwriting recognition task described in Science and in the visual classification and detection competition, researchers made efforts to compare their progress to human abilities. In both cases, the software advances now appear to surpass human abilities.
However, computer scientists cautioned against drawing conclusions about “thinking” machines or making direct comparisons to human intelligence.
“I would be very careful with terms like ‘superhuman performance,’ ” said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence in Seattle. “Of course the calculator exhibits superhuman performance, with the possible exception of Dustin Hoffman,” he added, in reference to the actor’s portrayal of an autistic savant with extraordinary math skills in the movie “Rain Man.”
The advances reflect the intensifying focus in Silicon Valley and elsewhere on artificial intelligence.
Last month, the Toyota Motor Corporation announced a five-year, billion-dollar investment to create a research center based next to Stanford University to focus on artificial intelligence and robotics.
Also, a formerly obscure academic conference, Neural Information Processing Systems, underway this week in Montreal, has doubled in size since the previous year and has attracted a growing list of brand-name corporate sponsors, including Apple for the first time.
“There is a sellers’ market right now — not enough talent to fill the demand from companies who need them,” said Terrence Sejnowski, the director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in San Diego. “Ph.D. students are getting hired out of graduate schools for salaries that are higher than faculty members who are teaching them.”