This Week In Digital – Week 19, 2019

Starbucks to use blockchain to track coffee, Red Hat & Microsoft fuel hybrid cloud development with Azure Red Hat OpenShift, Cisco open-sources the MindMeld Conversational AI Platform, AI-powered wearable rings could replace watches for heart monitoring, Microsoft open-sources InterpretML for explaining black box AI, How Facebook plans to boost augmented reality with Spark AR. Here’s your weekly update of This Week In Digital, covering the latest tech developments around the world.

Starbucks turns to technology to brew up a more personal connection with its customers – Jennifer Sokolowsky, Microsoft, May 06, 2019.

Walk into a Starbucks store anywhere in the world and you’ll encounter a similar sight: coffee beans grinding, espresso shots being pulled and customers talking to baristas while their coffee order is hand-crafted. The process may look like a simple everyday scene, but it is carefully orchestrated to serve Starbucks’ more than 100 million weekly customers. With the help of Microsoft, Starbucks is creating an even more personal, seamless customer experience in its stores by implementing advanced technologies, ranging from cloud computing to blockchain.

Red Hat and Microsoft fuel hybrid cloud development with Azure Red Hat OpenShift – RED HAT SUMMIT, May 07, 2019.

Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, and Microsoft today announced the general availability of Azure Red Hat OpenShift, which brings a jointly managed, enterprise-grade Kubernetes solution to a leading public cloud, Microsoft Azure. Azure Red Hat OpenShift provides a powerful on-ramp to hybrid cloud computing, enabling IT organizations to use Red Hat OpenShift Container Platform in their datacenters and more seamlessly extend these workloads to the power and scale of Azure services. The availability of Azure Red Hat OpenShift marks the first jointly managed OpenShift offering in the public cloud.

Google’s Jeff Dean details how AI is advancing medicine, chemistry, robotics, and more – Kyle Wiggers, May 08, 2019.

At Google’s annual I/O developer conference in Mountain View, California, Jeff Dean, a senior fellow in Google’s Research Group and the head of Google’s AI division, gave an overview of the company’s efforts to solve challenging scientific problems with AI and machine learning. The presentation capped off a narrative that began Tuesday with the launch of Google’s $25 million global AI impact grants program, and Google’s reveal of three ongoing accessibility projects enabled by AI technologies.

Cisco Open-sources the MindMeld Conversational AI Platform – May 09, 2019.

Over the past five years, conversational applications have entered the mainstream. Virtual assistants like Siri, Cortana, Google Assistant, and Alexa field tens of billions of voice and natural language queries every month. Voice-enabled devices like the Amazon Echo and Google Home reside in over a hundred million homes and represent one of the fastest-growing product categories of all time.

AI-powered wearable rings could replace watches for heart monitoring – Jeremy Horwitz, May 09, 2019.

Heart monitoring tools have already shrunk from the size of toasters into smartwatches, so the next step might not surprise you: South Korean researchers have successfully tested heart monitoring technology in a wearable “smart ring” backed by a deep learning algorithm, reports Cardiology Today, and they expect that consumer rings could be used to detect atrial fibrillation in the foreseeable future.

AI Weekly: Google focused on privacy at I/O 2019 – Kyle Wiggers, May 10, 2019.

Yesterday marked the conclusion of Google’s I/O 2019 developer conference in Mountain View, where the company announced updates across its product portfolio. Among the highlights were the latest beta release of Android Q, Google’s cross-hardware operating system; the Pixel 3a and Pixel 3a XL; the Nest Hub Max; augmented reality in Google Search; Duplex on the web; enhanced walking directions in Google Maps; and far more than can be recounted here.

Microsoft open-sources InterpretML for explaining black box AI – Khari Johnson, May 10, 2019.

Microsoft Research today open-sourced a software toolkit aimed at solving AI’s “black box” problem. InterpretML is designed to help developers experiment with ways to explain the output of AI systems. It is currently in alpha and available on GitHub. The ability to interpret predictions made by AI systems has become of increasing concern as AI is applied more frequently in sectors like law and health care.
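For readers who want to try it, here is a minimal sketch (not taken from the article) of how the interpret package can be used; it assumes the ExplainableBoostingClassifier and show() APIs from the project, and borrows a scikit-learn toy dataset purely for illustration.

```python
# Minimal sketch, assuming the `interpret` package's glass-box API.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

# Toy binary-classification data, used only for illustration.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a "glass-box" model whose predictions are interpretable by design.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: which features drive the model overall.
show(ebm.explain_global())

# Local explanation: why the model scored these particular samples as it did.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```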

How Facebook plans to boost augmented reality with Spark AR – Dean Takahashi, May 11, 2019.

More than a billion people have interacted with Facebook’s Spark AR augmented reality platform. But that mostly means we’ve all had cat faces put on our heads in silly messages. Some of us hope the technology will be useful for more than that silliness, and Facebook is working hard on helping it grow up. It has launched its Portal smart camera, which lets people hold long-distance video calls in their living rooms.

Google’s voice strategy is ubiquitous and comprehensive – Khari Johnson, May 11, 2019.

One year ago, when Google’s Duplex emerged as a conversational AI service that makes phone calls on your behalf, much was made of its ability to talk like a human, an act too deep in the uncanny valley for some. But that’s a shortsighted assessment. The emergence of Duplex was the opening salvo in Google’s broadening voice strategy, which goes beyond powering a world-class AI assistant to offer consumers the ability to interact with businesses via an automated bot, while simultaneously offering businesses conversational services to help them interact with customers.
