Hello Algorithm readers,
Today, we’re looking at a bingo game that teaches kids about AI, a trend of universities placing smart speakers in dorms, and Google’s new breast cancer screening tool. You can view our informal archive here. Your comments and thoughts are welcome at algorithm@technologyreview.com.
Happy new year and happy new decade! Before we dive back in, I’d like to take a moment to thank you all for being here. I started writing The Algorithm a little over a year ago and have been amazed by the level of enthusiasm and engagement. Our readership has tripled and spread around the world, and it’s a delight to receive thoughtful, kind emails in response to each week’s issue. Thank you again for being part of this wonderful community.
As always, you can say hello at algorithm@technologyreview.com, or follow me on Twitter and LinkedIn.
Now onto today’s programming.
MIT Technology Review just released our January/February magazine issue, and it’s all about youth!
How are kids navigating the relentless, ubiquitous presence of technology? How does it provide them with new opportunities and challenge them with new perils? Below I’ve highlighted three stories about AI, including two written by me.
Play bingo with your kids to learn about AI.
In September, I wrote about an MIT Media Lab
research initiative that developed an open-source curriculum to teach middle schoolers about AI. Its goal was to help children become more critical consumers of such technology by helping them better understand how algorithms are created and how they influence
society. For this issue, we adapted the Bingo game from the curriculum into an activity that you can learn from with your family and friends.
If you have the print version, our illustrator Tomi Um and design team did a beautiful job of laying out the instructions and everything you need to know about the fundamentals of AI (shown
above). Otherwise, you can read our online version and print it out for the games to begin!
Play the game here.
Universities are putting smart speakers into students’ dorms.
In the summer of 2018, Saint Louis University became the first school in the US to install smart speakers—about 2,300 Amazon Echo Dots—in each of its residence hall rooms. Each device was pre-programmed with answers to about 130 questions specific to the university, ranging from library hours to the location of the registrar’s office. The devices also included the basic voice “skills” available on other Echo Dots, including alarms and reminders, general information, and the ability to stream music.
The university hopes the addition will increase its students’ success and boost their overall happiness. It’s not alone. Schools as wide-ranging as Arizona State University, Lancaster University in the UK, and Ross University School of Medicine in Barbados have adopted voice-skill technology on campus. Amid several looming crises in higher education—declining admissions, escalating dropout rates, and rocky financials—universities and colleges are turning to AI as an appealing solution.
But the move is also risky. With always-listening microphones placed in every room, what will this new era mean for student privacy?
Read the full story here.
China has started a grand experiment in AI education.
Last but not least, in August, I published a feature about China’s pursuit of “intelligent education,” an abridged version of which was reprinted in the issue. It focuses on a company called Squirrel AI, which offers after-school tutoring services
across the country. Instead of human teachers, it uses an AI algorithm to curate its pupils' lessons, promising that this can achieve a new level of personalization.
In the five years since it was founded, the company has seen mind-boggling growth. It has opened 2,000 learning centers in 200 cities and registered over a million students—equal to New York
City’s entire public school system. To date, the company has also raised over $180 million in funding. At the end of last year, it gained unicorn status, surpassing $1 billion in valuation.
Squirrel illustrates the recent explosion of AI-enabled teaching and learning in China. Tens of millions of students now use some form of AI in their education through various products and services
peddled by a growing number of companies. But experts worry about the direction this rush is taking. At best, they say, AI can help teachers foster their students’ interests and strengths. At worst, it could further entrench a global trend toward standardized
learning and testing, leaving the next generation ill prepared to adapt in a rapidly changing world of work.
Read the full story here.
Here are some of my other favorites from the issue:
- A beautiful essay written by the winner of our youth essay contest, Taylor Fang, about how her generation creates identity through social media
- A behind-the-scenes trip into the lives of kids who are trying and failing to become YouTube celebrities
- A heartfelt recounting of what happened when a teacher asked his students to live without their cell phones for nine days, then write about it
Baidu has a new trick for teaching AI the meaning of language.
Earlier this month, a Chinese tech giant quietly dethroned Microsoft and Google in an ongoing competition in AI. The company was Baidu, China’s closest equivalent to Google, and the competition was the General Language Understanding Evaluation, otherwise
known as GLUE.
GLUE is a widely accepted benchmark for how well an AI system understands human language. It consists of nine different tests for things like picking out the names of people and organizations
in a sentence and figuring out what a pronoun like “it” refers to when there are multiple potential antecedents. A language model that scores highly on GLUE, therefore, can handle diverse reading comprehension tasks. Out of a full score of 100, the average
person scores around 87 points. Baidu is now the first team to surpass 90 with its model, ERNIE.
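For a sense of how that headline number is produced: the overall GLUE score is essentially an average of the nine per-task scores, each already on a 0–100 scale. A minimal sketch, using made-up placeholder scores rather than any model’s actual results:

```python
# Sketch: the overall GLUE score as a plain average of the nine
# per-task scores. Task names are the real GLUE tasks; the scores
# below are illustrative placeholders, not ERNIE's actual numbers.
task_scores = {
    "CoLA": 75.0, "SST-2": 97.0, "MRPC": 92.0,
    "STS-B": 92.5, "QQP": 90.0, "MNLI": 91.0,
    "QNLI": 97.0, "RTE": 89.0, "WNLI": 94.0,
}

glue_score = sum(task_scores.values()) / len(task_scores)
print(f"Overall GLUE score: {glue_score:.1f}")  # 90.8
```

A model can therefore top the leaderboard only by being strong across all nine tasks at once; a weak result on any single task drags the average down.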
The public leaderboard for GLUE is constantly changing, and another team will likely top Baidu soon. But what’s notable about Baidu’s achievement is that it illustrates how AI research benefits
from a diversity of contributors. Baidu’s researchers had to develop a technique specifically for the Chinese language to build ERNIE. It just so happens, however, that the same technique makes it better at understanding English as well.
Read more here.
A tragedy of errors. Breast cancer is the most common cancer among women globally and their second leading cause of cancer death. Though early detection and treatment can improve a patient’s prognosis, screening tests have high rates of error: about 1 in 5 screenings fail to find breast cancer even when it’s present, and 50% of women who receive annual mammograms get at least one false alarm over a 10-year period.
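Those two figures correspond to the two error types: missed cancers (false negatives) and false alarms (false positives). As a back-of-envelope check, if each annual screening were an independent trial—a simplifying assumption, not how real screening statistics are compiled—a 50% chance of at least one false alarm over 10 years implies a per-screening false positive rate of roughly 7%:

```python
# Back-of-envelope check (assumes each annual screening is an
# independent trial, which real screenings are not exactly):
# if the chance of at least one false alarm over 10 yearly
# mammograms is 50%, the implied per-screening false positive
# rate p satisfies 1 - (1 - p)**10 = 0.5.
p = 1 - 0.5 ** (1 / 10)
print(f"implied per-screening false positive rate: {p:.1%}")  # 6.7%

# Sanity check: recover the 10-year false alarm probability.
ten_year = 1 - (1 - p) ** 10
print(f"chance of at least one false alarm in 10 years: {ten_year:.0%}")  # 50%
```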
DeepMind and Google Health have now developed a new AI system that performs better on both types of error than human radiologists. The researchers trained an algorithm on mammogram images from
female patients in the US and UK. The results were
published in Nature on Wednesday.
In a separate experiment, the researchers also tested the system’s ability to generalize: they trained the model using only mammograms from UK patients, and then evaluated its performance on
US patients. The system still outperformed human radiologists, which has promising implications. It shows that it may be possible to overcome one of the biggest challenges facing AI adoption in health care: the need for ever more data to
encompass a representative patient population. Read more here.
If you come across interesting research papers or AI conferences, send them my way to algorithm@technologyreview.com.
Sony’s robot dogs are helping Japanese people find companionship
The owners love their robots and feel loved back, sometimes in a way that eases their worst fears of death and of loss. (BuzzFeed News)
Two giants debate the future of the field
Yoshua Bengio and Gary Marcus mostly agreed that future AI systems need a hybrid approach. (Facebook Livestream, starts around 01:46:10)
+ A recap of the highlights (ZDNet)
+ A follow-up (Medium)
US and European tech regulators are turning their attention to AI
It will build on the recent wave of data protection and privacy regulations and will likely arrive in less than two years. (WSJ)
Seoul is covering the city with AI surveillance cameras
The government claims it will detect the likelihood of crime, but experts disagree on whether such behavior-pattern detection is possible or ethical. (ZDNet)
+ Computers can’t tell if you’re happy when you smile (TR)
Megvii says being blacklisted by the US was a “coming of age gift”
The Chinese tech giant will forge ahead in 2020 and continue to expand its reach. (SCMP)
Cities around the world are experimenting with on-demand buses
But the “Uber for transit” schemes aren’t seeing a lot of success. (WIRED)
A new US state law is bringing more transparency to AI in hiring
It requires companies to tell job candidates whether AI is evaluating their video interviews. (Recode)
There’s a website that texts you algorithmically generated feet
Have at it, if that’s your thing. (Vice)
“If we leave it as some mythical realm, this field of AI, that’s only accessible to the select PhDs that work on this, it doesn’t really contribute to its adoption.”
—Dario Gil, the director of IBM Research, on his hope of
changing the perception of the field to make it more inclusive