This Week In Digital – Week 21, 2019

Intel makes progress toward optical chips that accelerate AI; Google brings DoorDash and Postmates to Assistant, Maps, and search results; Microsoft’s AI generates realistic speech with only 200 training samples; Moving Camera, Moving People: A Deep Learning Approach to Depth Prediction; IBM, MIT, and Harvard’s AI uses grammar rules to catch linguistic nuances of U.S. English; AI Weekly: Truly autonomous home robots aren’t within sight yet; Introducing Announcements on Alexa Built-in Devices; and many more. Here’s your weekly update of This Week In Digital covering all the latest tech developments around the world.

Microsoft announces SharePoint Home Sites, Q&As in Yammer, and more – Kyle Wiggers, May 21, 2019.

This morning marks the start of Microsoft’s annual SharePoint Conference in Las Vegas, and the Seattle-area company wasted no time detailing the features heading to Microsoft 365, its lineup of productivity-focused, cloud-hosted software and subscription services. Customers can expect new SharePoint landing pages for their organizations, along with a chatbot-like question-and-answer tool in Yammer and faster OneDrive file syncing.

Intel makes progress toward optical chips that accelerate AI – Kyle Wiggers, May 21, 2019.

Photonic integrated circuits, or optical chips, promise a host of advantages over their electronic counterparts, including reduced power consumption and processing speedups. That’s why a cohort of researchers believe they might be tailor-made for AI workloads, and why some — including MIT Ph.D. candidate Yichen Shen — have founded companies to commercialize them. Shen’s startup — Boston-based Lightelligence, which has raised $10.7 million in venture capital funding to date — recently demonstrated a prototype that delivers up to 10,000 times lower latency than traditional hardware and reduces power consumption by “orders of magnitude.”

Google brings DoorDash and Postmates to Assistant, Maps, and search results – Khari Johnson, May 23, 2019.

Google is making it easier to order food by connecting delivery apps with Google Assistant, Google Maps, and search results. Users will be able to place orders from restaurants supported by apps like DoorDash, Postmates, Delivery.com, Slice, and ChowNow. With Google Assistant on a smartphone, you can say “Hey Google, order food from ____” to place an order. You can also say “Hey Google, reorder food from ___” to see a list of your previous orders.

Microsoft’s AI generates realistic speech with only 200 training samples – Kyle Wiggers, May 23, 2019.

Modern text-to-speech algorithms are incredibly capable, and you needn’t look further for evidence than Google’s recently open-sourced SpecAugment or Translatotron — the latter can directly translate a person’s voice into another language while retaining tone and tenor. But there’s always room for improvement, and researchers at Microsoft now describe a system that generates realistic speech from only 200 training samples.
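For a sense of what a tool like SpecAugment actually does, here is a minimal sketch of its two core operations, frequency masking and time masking, applied to a mel spectrogram in plain PyTorch. The mask widths and spectrogram shape are illustrative assumptions, and the paper’s time-warping step is omitted:

```python
import torch

def spec_augment(mel, freq_mask_width=8, time_mask_width=20):
    """Simplified SpecAugment: zero out one random frequency band
    and one random time span of a (n_mels, n_frames) spectrogram."""
    mel = mel.clone()
    n_mels, n_frames = mel.shape

    # Frequency masking: pick a band [f0, f0 + f) and zero it out.
    f = torch.randint(0, freq_mask_width + 1, (1,)).item()
    f0 = torch.randint(0, max(1, n_mels - f), (1,)).item()
    mel[f0:f0 + f, :] = 0.0

    # Time masking: pick a span [t0, t0 + t) and zero it out.
    t = torch.randint(0, time_mask_width + 1, (1,)).item()
    t0 = torch.randint(0, max(1, n_frames - t), (1,)).item()
    mel[:, t0:t0 + t] = 0.0
    return mel

# Example: augment a fake 80-mel, 300-frame spectrogram.
augmented = spec_augment(torch.randn(80, 300))
```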

Moving Camera, Moving People: A Deep Learning Approach to Depth Prediction – Tali Dekel, Google AI News, May 23, 2019.

The human visual system has a remarkable ability to make sense of our 3D world from its 2D projection. Even in complex environments with multiple moving objects, people are able to maintain a feasible interpretation of the objects’ geometry and depth ordering. The field of computer vision has long studied how to achieve similar capabilities by computationally reconstructing a scene’s geometry from 2D image data, but robust reconstruction remains difficult in many cases.
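To make the task concrete, here is a hedged PyTorch sketch of single-frame depth inference: a toy encoder-decoder network (TinyDepthNet is an invented placeholder, not Google’s model) maps one RGB frame to a per-pixel depth map. The actual method also exploits motion cues across video frames, which this sketch omits:

```python
import torch
import torch.nn as nn

# Placeholder network: any encoder-decoder that maps an RGB frame
# to a per-pixel depth map would slot in here.
class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Softplus(),  # predicted depth must be positive
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

model = TinyDepthNet().eval()
frame = torch.rand(1, 3, 192, 256)   # one normalized RGB frame
with torch.no_grad():
    depth = model(frame)             # (1, 1, 192, 256) depth map
print(depth.shape)
```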

IBM, MIT, and Harvard’s AI uses grammar rules to catch linguistic nuances of U.S. English – Kyle Wiggers, May 24, 2019.

What’s the difference between independent and dependent clauses? Is it “me” or is it “I”? And how does “affect” differ from “effect,” really? Ample evidence suggests a strong correlation between grammatical knowledge and writing ability, and new research implies the same might be true of AI. In a pair of preprint papers, scientists at IBM, Harvard, and MIT detail tests of a natural language processing system trained on grammar rules — rules they say helped it to learn faster and perform better.
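The papers’ systems aren’t reproduced here, but to show what machine-readable grammar rules look like, the toy example below encodes a sliver of English as a context-free grammar and parses a sentence with NLTK; the rules and vocabulary are invented for illustration, not taken from the IBM/MIT/Harvard work:

```python
import nltk

# A toy context-free grammar covering a sliver of English. Real
# grammar-informed models encode far richer rules; this just shows
# what explicit, machine-readable grammar looks like.
grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> Det N | 'I'
    VP -> V NP
    Det -> 'the' | 'a'
    N  -> 'effect' | 'sentence'
    V  -> 'wrote' | 'noticed'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I noticed the effect".split()):
    tree.pretty_print()
```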

AI Weekly: Truly autonomous home robots aren’t within sight yet – Kyle Wiggers, May 24, 2019.

Our world might someday look like something out of an Isaac Asimov novel, and not for the worse. In one popular depiction of the far-flung future, robot butlers will attend to our whims and perform menial chores like washing dishes, folding laundry, and walking pets. They’ll look after our children, stand in for nurses and physician assistants at outpatient clinics and hospitals, and personalize meal plans in restaurants for every conceivable diet.

Introducing Announcements on Alexa Built-in Devices – Ford Davidson, May 24, 2019.

Today, Alexa Voice Service (AVS) is making Announcements available for existing certified Alexa built-in products to implement, as well as for new products that pass the provided self-tests and certification. Alexa Announcements enables customers to make and receive one-way announcements to and from compatible Alexa built-in devices in their household. Similar to a one-way intercom, Alexa Announcements is great for quick, broad communications you want to share with your whole family.

Meet Misty, an extensible robot empowered by Microsoft and Qualcomm technologies – Seth Colaner, May 25, 2019.

Miscellaneous helper robots have been whirring, rolling, and bumping around for years. They generally take the form of something adorable, with big doe eyes and a chunky white plastic body that is friendly and completely non-threatening. Or they’ll go in the opposite direction, like Boston Dynamics’ agile robots, whose movements fall into the uncanny valley and which lack features like faces. A robot called Misty falls squarely in the former category; even the name is cute.

Pythia: Facebook’s deep learning framework for the vision and language domain – Sarah Schlothauer, May 24, 2019.

The latest open-sourced tool from Facebook AI Research is Pythia, a deep learning framework designed to help with Visual Question Answering: teaching AI to read an image and answer questions about it. Built on top of the PyTorch framework, Pythia offers a modular design for building AI models that combine vision and language. Take a peek at the research involved.
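As a loose illustration of that modular design (separate vision and language encoders feeding a fusion step and an answer classifier), here is a hedged PyTorch sketch; every module size and the fusion choice are invented placeholders rather than Pythia’s actual architecture:

```python
import torch
import torch.nn as nn

class ToyVQAModel(nn.Module):
    """Illustrative VQA skeleton: each encoder and the fusion step
    can be swapped independently, the kind of modularity Pythia
    is built around."""
    def __init__(self, vocab_size=10000, num_answers=3000, dim=512):
        super().__init__()
        self.image_encoder = nn.Sequential(     # stand-in for a CNN backbone
            nn.Conv2d(3, 64, 7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
        self.question_encoder = nn.Sequential(  # stand-in for an LSTM encoder
            nn.EmbeddingBag(vocab_size, dim),
        )
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, image, question_tokens):
        img = self.image_encoder(image)
        txt = self.question_encoder(question_tokens)
        fused = img * txt                       # simple elementwise fusion
        return self.classifier(fused)

model = ToyVQAModel()
logits = model(torch.rand(2, 3, 224, 224),         # batch of 2 images
               torch.randint(0, 10000, (2, 12)))   # 12-token questions
print(logits.shape)                                # (2, 3000) answer scores
```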
