Category Archives: assistant
Announcing New Smart Home App Discovery Features


Posted by Toni Klopfenstein, Developer Advocate

When a user connects a smart device to the Google Assistant via the Home app, they must select the appropriate Action from the list of all available Actions, then click through multiple screens to complete device setup. Today, we’re releasing two new features to improve this device discovery process and drive customer adoption of your Smart Home Action through the Google Home app. App Discovery and Deep Linking are two convenience features that help users find your Google Assistant-compatible smart devices quickly and onboard faster.

App Discovery enables users to quickly find your smart home Action thanks to suggestion chips within the Google Home app. You can implement this new feature through the Actions Console by creating a verified brand link between your Action, your website, and your mobile app. App Discovery doesn’t require any coding work to implement, making this a development-light feature that provides great improvements to the user experience of device linking.

In addition to helping users discover your Action directly through suggestion chips, Deep Linking enables you to guide users to your account linking flow within the Google Home app in one step. These deep links are easily added to your mobile app or web content, guiding users to your smart home integration with a single tap.

Deep Linking and App Discovery can help you create a more streamlined onboarding experience for your users, driving increased engagement and user satisfaction, and can be implemented with minimal engineering work.

To implement App Discovery and Deep Linking for your Smart Home Action, check out the developer documents, or watch the video covering these new features.

You can also check out the smart home codelabs if you are just starting to build out your Action.

We want to hear from you, so continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!

  • 7 Jan, 2021
  • actions on google, assistant, GCP, Google, Local Home SDK, Smart Home

Announcing New Smart Home SHED Types and Traits

Posted by Toni Klopfenstein, Developer Advocate

Back in April, we released the first set of Smart Home Entertainment Device (SHED) types, including TV, set-top box, and remote, as well as the traits AppSelector, InputSelector, MediaState, TransportControl, and Volume. Today, we are excited to announce the release of new SHED types and traits. These new device types and traits complement the original set we released earlier this year, and help build out a more complete solution for smart home media and gaming devices. By implementing these types and traits on your entertainment devices, you can enable users to fully access device and media controls from any Assistant surface.

SHED Types and Traits

To expand the SHED options, we’ve released the following new device types for Smart Home:

  • Audio-video receiver
  • Streaming box
  • Streaming stick
  • Soundbar
  • Streaming soundbar
  • Speaker

We’ve also released the following new trait:

  • Channel

To ensure a consistent, high-quality experience for all end users, each of these device types requires your service to report activityState and playbackStatus to Google using the ReportState API. This requirement improves portability between media devices and helps the Assistant better understand user intents for these devices. By implementing the complete set of recommended device traits, you can further improve the quality of your smart home Action and improve device targeting for media playback command fulfillment.
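For illustration, here is a minimal sketch (in TypeScript) of what such a state report might look like. The device ID, agentUserId, and access-token plumbing are placeholders, and the exact state field names are defined by the MediaState trait, so verify them against the current reference before relying on them:

```typescript
// Sketch: reporting MediaState values to the Home Graph via Report State.
// The device ID and agentUserId are placeholders; check the MediaState
// trait docs for the exact state field names.
const reportStateBody = {
  requestId: 'ff36a3cc-ec34-11e6-b1a0-64510650abcf', // unique per report
  agentUserId: 'user-123', // the user ID your service linked during OAuth
  payload: {
    devices: {
      states: {
        'streaming-box-1': {
          online: true,
          activityState: 'ACTIVE',  // device is currently in use
          playbackState: 'PLAYING', // media transport state
        },
      },
    },
  },
};

async function reportState(accessToken: string): Promise<void> {
  // accessToken is assumed to come from a Google service account.
  const res = await fetch(
    'https://homegraph.googleapis.com/v1/devices:reportStateAndNotification',
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(reportStateBody),
    },
  );
  if (!res.ok) {
    throw new Error(`Report State failed with HTTP ${res.status}`);
  }
}
```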

For more information on how to implement these new device features, check out the docs and samples. You can also join us at our “Hey Google” Smart Home Virtual Summit to learn more about these new features.

We want to hear from you, so continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!

  • 6 Jul, 2020
  • actions on google, assistant, GCP, Google, Smart Home

Join the “Hey Google” Smart Home Virtual Summit

Posted by Toni Klopfenstein, Developer Relations

Over the past year, we’ve been focused on building new tools and features to support our smart home developer community. Though we weren’t able to engage with you in person at Google I/O, we are pleased to announce the “Hey Google” Smart Home Virtual Summit on July 8th – an opportunity for us to come together and dive into the exciting new and upcoming features for smart home developers and users.

Join us in the keynote where Michele Turner, the Product Management director of the Smart Home Ecosystem, will share our recent smart home product initiatives and how developers can benefit from these capabilities. She will also introduce new tools that make it easier for you to develop with Google Assistant. We will also be hosting a partner panel, where you can hear from industry leaders on how they navigate the impact of COVID-19 and their thoughts on the state of the industry.

Registration is FREE! Head on over to the Summit website to register and check out the schedule. Events will be held at EMEA-, APAC-, and AMER-friendly times. We hope to see you and your colleagues there!

  • 29 Jun, 2020
  • actions on google, assistant, GCP, Google, Local Home SDK, Smart Home

Developer Preview of Local Home SDK

Posted by Toni Klopfenstein

Recently at Google I/O, we gave you a sneak peek at our new Local Home SDK, a suite of local technologies to enhance your smart home integrations. Today, the SDK is live as a developer preview. We’ve been working hard testing the platform with our partners, including GE, LIFX, Philips Hue, TP-Link, and Wemo, and are excited to bring you these additional technologies for connecting smart devices to the Google Assistant.

Figure 1: The local execution path

This SDK enables developers to more deeply integrate their smart devices into the Assistant by building on the existing Smart Home platform to create a local execution path via Google Home smart speakers and Nest smart displays. Developers can now run the business logic that controls their new and existing smart devices in JavaScript that executes on those speakers and displays, benefiting users with reduced latency and higher reliability.

How it works:

The SDK introduces two new intents, IDENTIFY and REACHABLE_DEVICES. The local home platform scans the user’s home network via mDNS, UDP, or UPnP to discover any smart devices connected to the Assistant, and triggers IDENTIFY to verify that the device IDs match those returned from the familiar Smart Home API SYNC intent. If the detected device is a hub or bridge, REACHABLE_DEVICES is triggered and treats the hub as the proxy device for communicating locally. Once the local execution path from Google Home to a device is established, the device properties are updated in Home Graph.
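As a rough sketch (not the full developer guide), an app built on the Local Home SDK registers handlers for these intents along the following lines. The device IDs here are placeholders; a real app derives them from the scan data:

```typescript
/// <reference types="@google/local-home-sdk" />
// Sketch of a Local Home app handling the two new intents. The
// `smarthome` global is provided by the platform at runtime.
const app = new smarthome.App('1.0.0');

app
  .onIdentify((request) => {
    // Match the locally discovered device against the IDs your cloud
    // fulfillment returned from the SYNC intent.
    const scannedDevice = request.inputs[0].payload.device;
    console.log('Scan data:', scannedDevice.udpScanData);
    return {
      requestId: request.requestId,
      intent: smarthome.Intents.IDENTIFY,
      payload: {
        device: { id: 'my-device-id' }, // must match an ID from SYNC
      },
    };
  })
  .onReachableDevices((request) => {
    // For a hub or bridge, enumerate the child devices reachable
    // through it so they can also be controlled locally.
    return {
      requestId: request.requestId,
      intent: smarthome.Intents.REACHABLE_DEVICES,
      payload: {
        devices: [{ verificationId: 'child-device-1' }],
      },
    };
  })
  .listen()
  .then(() => console.log('Local Home app ready'));
```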

Figure 2: The intents used for each execution path

When a user triggers a smart home Action that has a local execution path, the Assistant sends the EXECUTE intent to the Google Nest device rather than the developer’s cloud fulfillment. The developer’s JavaScript app is invoked, which then triggers the Local Home SDK to send control commands to the smart device over TCP, UDP socket, or HTTP/HTTPS requests. By defaulting to local execution rather than the cloud, users experience faster fulfillment of their requests. The execution requests can still be sent to the cloud path in case local execution fails. This redundancy minimizes the possibility of a failed request, and improves the overall user experience.
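Continuing the sketch above, an EXECUTE handler might forward a command to the device as a raw UDP packet. The port and hex payload below are hypothetical and entirely device-specific:

```typescript
// Handle EXECUTE locally by sending a UDP packet to the device.
app.onExecute((request) => {
  const command = request.inputs[0].payload.commands[0];
  const device = command.devices[0];

  const udpCommand = new smarthome.DataFlow.UdpRequestData();
  udpCommand.requestId = request.requestId;
  udpCommand.deviceId = device.id;
  udpCommand.port = 7777;   // placeholder control port
  udpCommand.data = 'ff01'; // placeholder hex-encoded payload

  const response = new smarthome.Execute.Response.Builder()
    .setRequestId(request.requestId);

  return app.getDeviceManager()
    .send(udpCommand)
    .then(() => {
      response.setSuccessState(device.id, { on: true });
      return response.build();
    })
    .catch((err) => {
      // On failure, the platform can still fall back to the cloud path.
      response.setErrorState(device.id, err.errorCode);
      return response.build();
    });
});
```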

Additional features of the Local Home platform include:

  • Support for all Wi-Fi-enabled device types and device traits that do not have two-factor authentication enabled.
  • No user action required to deploy Local Home benefits to all devices.
  • Easily configure discovery protocols and the hosted JavaScript app URL through the Actions console.

Figure 3: Local Home configuration tool in the Actions console

JavaScript apps can be tested on-device, allowing developers to employ familiar tools like Chrome Developer Console for debugging. Because the Local Home SDK works with the existing smart home framework, you can self-certify new apps through the Test suite for smart home as well.

Get started

To learn more about the Local Home platform, check out the API reference, and get started adding local execution with the developer guide and samples. For general information covering how you can connect smart devices to the Google Assistant, visit the Smart Home documentation, or check out the Local Technologies for the Smart Home talk from Google I/O this year.

You can send us any feedback you have through the bug tracker, or engage with the community at /r/GoogleAssistantDev. You can tag your posts with the flair local-home-sdk to help organize discussion.

  • 9 Jul, 2019
  • actions on google, assistant, GCP, Google, Local Home, Smart Home

Flutter: a Portable UI Framework for Mobile, Web, Embedded, and Desktop

Posted by the Flutter Team

Today marks an important milestone for the Flutter framework, as we expand our focus from mobile to incorporate a broader set of devices and form factors. At I/O, we’re releasing our first technical preview of Flutter for web, announcing that Flutter is powering Google’s smart display platform including the Google Home Hub, and delivering our first steps towards supporting desktop-class apps with Chrome OS.

From Mobile to Multi-Platform

For a long time, the Flutter team mission has been to build the best framework for developing mobile apps for iOS and Android. We believe that mobile development is ripe for improvement, with developers today forced to choose between building the same app twice for two platforms, or making compromises to use cross-platform frameworks. Flutter hits the sweet spot of enabling a single codebase to deliver beautiful, fast, tailored experiences with high developer productivity for both platforms, and we’ve been excited to see how our early efforts have flourished into one of the most popular open source projects.

As we started to home in on our 1.0 release last year, we began experimenting with broadening the scope of Flutter to other platforms. This was triggered both by internal teams within Google who are increasingly relying on Flutter, and by the latent potential of the Dart platform for delivering portable experiences. In particular, a small team that was already building a web framework for Dart for internal use started an exploratory project (codename “Hummingbird”) to evaluate the technical merits of porting the Flutter engine to support the standards-based web.

The results of this project were startling, thanks in large part to the rapid progress in web browsers like Chrome, Firefox, and Safari, which have pervasively delivered hardware-accelerated graphics, animation, and text as well as fast JavaScript execution. Within a few months of beginning the project, we had the core Flutter framework primitives working, and soon after we had demos running on mobile and desktop browsers. Along with Dart’s long pedigree of compiling for the web, this proved that we could also bring the Flutter framework and apps to run on the web.

In parallel, the core Flutter project has been making progress to enable desktop-class apps, with input paradigms such as keyboard and mouse, window resizing, and tooling for Chrome OS app development. The exploratory work that we did for embedding Flutter into desktop-class apps running on Windows, Mac and Linux has also graduated into the core Flutter engine.

A Portable UI Framework for All Screens

Flutter Mobile, Web, Desktop, and Embedded

It’s worth pausing for a moment to acknowledge the business potential of a high-performance, portable UI framework that can deliver beautiful, tailored experiences to such a broad variety of form factors from a single codebase.

For startups, the ability to reach users on mobile, web, or desktop through the same app lets them reach their full audience from day one, rather than having limits due to technical considerations. Especially for larger organizations, the ability to deliver the same experience to all users with one codebase reduces complexity and development cost, and lets them focus on improving the quality of that experience.

With support for mobile, desktop, and web apps, our mission expands: we want to build the best framework for developing beautiful experiences for any screen.

Flutter for Web

This week, we are releasing the first technical preview of Flutter for the web. While this technology is still in development, we are ready for early adopters to try it out and give us feedback. Our initial vision for Flutter on the web is not as a general purpose replacement for the document experiences that HTML is optimized for; instead we intend it as a great way to build highly interactive, graphically rich content, where the benefits of a sophisticated UI framework are keenly felt.

To showcase Flutter for the web, we worked with the New York Times to build a demo. In addition to world-class news coverage, the New York Times is famous for its crossword and other puzzle games. Since avid puzzlers want to play on whatever device they’re using at the time, their development team was attracted to Flutter as a potential solution for their needs. Discovering that they could reach the web with the same code was a huge boon. At Google I/O this week, you can get a sneak peek of their newly refreshed KENKEN puzzle game, which runs with the same code on Android, iOS, web, Mac, and Chrome OS.

ken-gratulations puzzle

Here’s what Eric von Coelln, Executive Director of Puzzles at the New York Times has to say about their experiences with Flutter:

“The New York Times Crossword has more than 400,000 stand-alone subscriptions and is a daily ritual for puzzle solvers. Along with the Crossword, we’ve grown our portfolio of digital puzzles that reaches more than two million solvers each month.

We were already beginning to explore Flutter as a potential solution to the challenge of quickly developing engaging, high-quality mobile experiences. Now the addition of being able to publish to web makes Flutter an even more appealing option to quickly deploy across all of our user platforms. This update of our old Flash-based KenKen game into a multi-platform playable experience is something we’re excited to bring to our solvers this year.”

There’s lots more to say about Flutter for web than we have space for here, so check out the dedicated article about Flutter for web on the Flutter blog.

At this early stage, we’re eager to get your feedback on how you’d like to use Flutter for web. We expect to rapidly evolve the code, with a particular focus on performance, and harmonizing the codebase with the rest of the Flutter project.

Flutter for Mobile Devices

The core Flutter framework also receives an upgrade this week, with the immediate availability of Flutter 1.5 in our stable channel. Flutter 1.5 includes hundreds of changes in response to developer feedback, including updates for new App Store iOS SDK requirements, updates to the iOS and Material widgets, engine support for new device types, and Dart 2.3 featuring new UI-as-code language features.

As the framework itself matures, we’re investing in building out the supporting ecosystem. The architectural model of Flutter has always prioritized a small core framework, supplemented by a rich package community. In the last few months, Google has contributed production-quality packages for web views, Google Maps, and Firebase ML Vision, and this week, we’re adding initial support for in-app payments. And with over 2,000 open source packages available for Flutter, there are options available for most scenarios.

One particularly exciting project that we’re announcing this week at I/O is the ML Kit Custom Image Classifier. Built using Flutter and Firebase, it offers an easy-to-use app-based workflow for creating custom image classification models. You can collect training data using the phone’s camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app.

Flutter ML Kit: create datasets, collaborate to collect data, train model, run inference

Flutter continues to grow in popularity and adoption. A growing roster of demanding customers including eBay, Sonos, Square, Capital One, Alibaba and Tencent are developing apps with Flutter. And they’re having fun! Here’s what Larry McKenzie, a senior developer at eBay had to say about Flutter:

“Flutter is fast! Features that once took us multiple days to implement can be finished in a single day. Many problems we used to spend a lot of time on, simply no longer occur. Our team can now focus on creating more polished user experiences and delivering functionality. Flutter is enabling us to exceed expectations!”

More broadly, LinkedIn recently conducted a study that showed Flutter is the single fastest-growing skill among software engineers, based on site members claiming it on their profile over the last 12 months. And in the recent 2019 StackOverflow developer survey, Flutter was listed as one of the most-loved developer frameworks.

Flutter for Desktop

Flutter is also being used on the desktop. For some months, we’ve been working on desktop support as an experimental project. Now we’re graduating that work into the core Flutter engine, integrating it directly into the mainline repo. While these targets are not production-ready yet, we have published early instructions for developing Flutter apps to run on Mac, Windows, and Linux.

Another quickly growing Flutter platform is Chrome OS, with millions of Chromebooks being sold every year, particularly in education. Chrome OS is a perfect environment for Flutter, both for running Flutter apps, and as a developer platform, since it supports execution of both Android and Linux apps. With Chrome OS, you can use Visual Studio Code or Android Studio to develop a Flutter app that you can test and run locally on the same device without an emulator. You can also publish Flutter apps for Chrome OS to the Play Store, where millions of others can benefit from your creation.

Flutter for Embedded Devices

As the final example of Flutter’s portability, we offer Flutter embedded on other devices. We recently published samples that demonstrate Flutter running directly on smaller-scale devices like Raspberry Pi, and we offer an embedding API for Flutter that allows it to be used in scenarios including home, automotive and beyond.

Perhaps one of the most pervasive embedded platforms where Flutter is already running is on the smart display operating system that powers the likes of Google Home Hub.

Within Google, some Google-built features for the Smart Display platform are powered by Flutter today. And the Assistant team is excited to continue to expand the portfolio of features built with Flutter for the Smart Display in the coming months; the goal this year is to use Flutter to drive the overall system UI.

Other Resources

We often get asked by developers how they can get started with Flutter. We are pleased today to announce a comprehensive new training course for Flutter, built by The App Brewery, authors of the highest-rated iOS training course on Udemy. Their new course has over thirty hours of content for Flutter, including videos, demos and labs, and with Google’s sponsorship, they are announcing today a time-limited discount of this course from the retail price of $199 to just $10.

Many developers are creating inspiring apps with Flutter. In the run-up to Google I/O, we ran a contest called Flutter Create to encourage developers to see what they could build with Flutter in 5KB or less of Dart code. We had over 750 unique entries from around the world, with some amazing examples that pushed the limits of what we imagined would be possible in such a small size.

Today, we’re announcing the winners, which can be found on flutter.dev/create. Congratulations to the overall winner, Zebiao Hu, who wins a fully-loaded iMac Pro worth over $10,000!

Flutter is no longer a mobile framework, but a multi-platform framework that can help you reach your users wherever they are. We can’t wait to see what you’ll build with Flutter on the web, desktop, mobile, and beyond!

  • 7 May, 2019
  • assistant, dart, Ebay, flutter, Flutter 1.5, Flutter at IO, Flutter Create, Flutter for desktop, Flutter for web, GCP, Google, Google IOS Android, IO, IO19, IO19 Flutter, keynote, multi-platform, NYT

Actions on Google at I/O 2019: New tools for web, mobile, and smart home developers

Posted by Chris Turkstra, Director, Actions on Google

People are using the Assistant every day to get things done more easily, creating lots of opportunities for developers on this quickly growing platform. And we’ve heard from many of you who want easier ways to connect your content across the Assistant.

At I/O, we’re announcing new solutions for Actions on Google that were built specifically with you in mind. Whether you build for web, mobile, or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.

Enhance your presence in Search and the Assistant

Help people with their “how to” questions

Every day, people turn to the internet to ask “how to” questions, like how to tie a tie, how to fix a faucet, or how to install a dog door. At I/O, we’re introducing support for How-to markup that lets you power richer and more helpful results in Search and the Assistant.

Adding How-to markup to your pages will enable the page to appear as a rich result on mobile Search and on Google Assistant Smart Displays. This is an incredibly lightweight way for web developers and creators to connect with millions of people, giving them helpful step-by-step instructions with video, images and text. You can start seeing How-to markup results on Search today, and your content will become available on the Smart Displays in the coming months.
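As a rough illustration, How-to markup is structured data in the schema.org HowTo vocabulary. A page might embed something like the following, shown here as a TypeScript object literal with placeholder content; on a real page it would be serialized into a script tag of type application/ld+json:

```typescript
// Illustrative only: the shape of schema.org HowTo markup. All names,
// text, and URLs are placeholders.
const howToMarkup = {
  '@context': 'https://schema.org',
  '@type': 'HowTo',
  name: 'How to install a dog door',
  step: [
    {
      '@type': 'HowToStep',
      name: 'Measure the opening',
      text: 'Measure your dog and mark the cut lines on the door.',
      image: 'https://example.com/steps/measure.jpg',
    },
    {
      '@type': 'HowToStep',
      name: 'Cut and mount',
      text: 'Cut along the lines and screw the frame into place.',
      image: 'https://example.com/steps/mount.jpg',
    },
  ],
};
```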

Here’s an example where DIY Network added markup to their existing content on the web to provide a more helpful, interactive result on both Google Search and the Assistant:

Mobile Search screenshot showing a How-to markup rich result for installing a dog door

For content creators that don’t maintain a website, we created a How-to Video Template where video creators can upload a simple spreadsheet with titles, text and timestamps for their YouTube video, and we’ll handle the rest. This is a simple way to transform your existing how-to videos into interactive, step-by-step tutorials across Google Assistant Smart Displays and Android phones.

Check out how REI is getting extra mileage out of their YouTube video:

Laptop to Home Hub displaying How To Template for the REI compass

How-to Video Templates are in developer preview so you can start building today, and your content will become available on Android phones and Smart Displays in the coming months.

Easier engagement with your apps

Help people quickly get things done with App Actions

If you’re an app developer, people are turning to your apps every day to get things done. And we see people turn to the Assistant every day for a natural way to ask for help via voice. This offers an opportunity to use intents to create voice-based entry points from the Assistant to the right spot in your app.

Last year, we previewed App Actions, a simple mechanism for Android developers that uses intents from the Assistant to deep link to exactly the right spot in your app. At I/O, we are announcing the release of built-in intents for four new App Action categories: Health & Fitness, Finance and Banking, Ridesharing, and Food Ordering. Using these intents, you can integrate with the Assistant in no time.

If I wanted to track my run with Nike Run Club, I could just say “Hey Google, start my run in Nike Run Club” and the app will automatically start tracking my run. Or, let’s say I just finished dinner with my friend Chad and we’re splitting the check. I can say “Hey Google, send $15 to Chad on PayPal” and the Assistant takes me right into PayPal, I log in, and all of my information is filled in – all I need to do is hit send.

Google Pixel showing App Actions Nike Run Club

Each of these integrations was completed in less than a day with the addition of an Actions.xml file that handles the mapping of intents between your app and the Actions platform. You can start building with these new intents today and deploy to Assistant users on Android in the coming months. This is a huge opportunity to offer your fans an effortless way to engage more frequently with your apps.

Build for devices in the home

Take advantage of Smart Displays’ interactive screens

Last year, we saw the introduction of the Smart Display as a new device category. The interactive visual surface opens up many new possibilities for developers.

Today, we’re introducing a developer preview of Interactive Canvas which lets you create full-screen experiences that combine the power of voice, visuals and touch. Canvas works across Smart Displays and Android phones, and it uses open web technologies you’re likely already familiar with, like HTML, CSS and JavaScript.
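To give a feel for the model, here is a minimal sketch of the web side of a Canvas experience. The page is assumed to load the Interactive Canvas JS library, which exposes a global interactiveCanvas object; the hand-written declaration below is simplified, so treat the details as illustrative:

```typescript
// Sketch of the web side of an Interactive Canvas experience. The
// declaration stands in for the library-provided global.
declare const interactiveCanvas: {
  ready(callbacks: { onUpdate(data: unknown[]): void }): void;
  sendTextQuery(query: string): Promise<string>;
};

interactiveCanvas.ready({
  // Invoked when your Action's fulfillment pushes new state to the canvas.
  onUpdate(data) {
    console.log('State from fulfillment:', data);
    // ...redraw the game scene from the new state here...
  },
});

// Send user input back to the Action as if the user had said it aloud.
function submitGuess(guess: string): void {
  void interactiveCanvas.sendTextQuery(`My guess is ${guess}`);
}
```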

Here’s an example of what you can build when you can leverage the full screen of a Smart Display:

Full screen of a Smart Display

Interactive Canvas is available for building games starting today, and we’ll be adding more categories soon. Visit the Actions Console to be one of the first to try it out.

Enable smart home devices to communicate locally

There are now more than 30,000 connected devices that work with the Assistant across 3,500 brands, and today, we’re excited to announce a new suite of local technologies that are specifically designed to create an even better smart home.

Introducing a preview of the Local Home SDK, which enables you to run your smart home code locally on Google Home speakers and Nest displays and use their radios to communicate locally with your smart devices. This reduces cloud hops and brings a new level of speed and reliability to the smart home. We’ve been working with some amazing partners including Philips, Wemo, TP-Link, and LIFX on testing this SDK, and we’re excited to open it up for all developers next month.

Flowchart of Local Home SDK

Make setup more seamless

And, through the Local Home SDK, we’re making device setup more seamless, building on the streamlined setup experience we launched in partnership with GE smart lights this past October. So far, people have loved the ability to set up their lights in less than a minute in the Google Home app. We’re now scaling this to more partners, so go here if you’re interested.

Make your devices smart with Assistant Connect

Also, at CES earlier this year we previewed Google Assistant Connect which leverages the Local Home SDK. Assistant Connect enables smart home and appliance developers to easily add Assistant functionality into their devices at low cost. It does this by offloading a lot of work onto the Assistant to complete Actions, display content and respond to commands. We’ve been hard at work developing the platform along with the first products built on it by Anker, Leviton and Tile. We can’t wait to show you more about Assistant Connect later this year.

New device types and traits

For those of you creating Actions for the smart home, we’re also releasing 16 new device types and three new device traits including LockUnlock, ArmDisarm, and Timer. Head over to our developer documentation for the full list of 38 device types and 18 device traits, and check out our sample project on GitHub to start building.
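As a hedged illustration, a smart lock exposing the new LockUnlock trait would be described in your SYNC response along these lines; the IDs and names are placeholders:

```typescript
// Hypothetical SYNC response entry for a smart lock using the new
// LockUnlock trait.
const lockDevice = {
  id: 'front-door-lock',
  type: 'action.devices.types.LOCK',
  traits: ['action.devices.traits.LockUnlock'],
  name: { name: 'Front door lock' },
  willReportState: true,
};

// The matching EXECUTE command the Assistant sends to lock the door:
const lockCommand = {
  command: 'action.devices.commands.LockUnlock',
  params: { lock: true },
};
```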

Get started with our new tools for all types of developers

Whether you’re looking to extend the reach of your content, drive more usage in your apps, or build custom Assistant-powered experiences, you now have more tools to do so.

If you want to learn more about how you can start building with these tools, check out our website to get started and our schedule so you can tune in to all of our developer talks that we’ll be hosting throughout the week.

We can’t wait to build together with you!

  • 7 May, 2019
  • actions on google, assistant, GCP, Google, google assistant, Google Home Hub, google io, IO19
