Posted by Mario Tapia, Product Marketing Manager, Google Workspace
In today’s fast-paced and ever-changing world, it is more important than ever for developers to be able to work quickly and efficiently. With so many different tools and applications available, it can be difficult to know which ones will help you be the most productive. In this blog post, we will discuss five different DevOps application integrations for Google Chat that can help you improve your workflows and be more productive as a developer.
PagerDuty for Google Chat
PagerDuty helps automate, orchestrate, and accelerate responses to unplanned work across an organization. PagerDuty for Google Chat empowers developers, DevOps, IT operations, and business leaders to prevent and resolve business-impacting incidents for an exceptional customer experience—all from Google Chat. With PagerDuty for Google Chat, get notifications, see and share details with link previews, and act by creating or updating incidents.
How to: Use PagerDuty for Google Chat
Asana for Google Chat
Asana helps you manage projects, focus on what’s important, and organize work in one place for seamless collaboration. With Asana for Google Chat, you can easily create tasks, get notifications, update tasks, assign them to the right people, and track your progress.
How to: Use Asana for Google Chat
Jira
Jira makes it easy to manage your issues and bugs. With Jira for Google Chat, you can receive notifications, easily create issues, assign them to the right people, and track your progress while keeping everyone in the loop.
How to: Use Jira for Google Chat
Jenkins
Jenkins allows you to automate your builds and deployments. With Jenkins for Google Chat, development and operations teams can connect into their Jenkins pipeline and stay up to date by receiving software build notifications or trigger a build directly in Google Chat.
How to: Use Jenkins for Google Chat
GitHub
GitHub lets you manage your code and collaborate with your team. Integrations like GitHub for Google Chat make the entire development process fit easily into a developer’s workflow. With GitHub, teams can quickly push new commits, make pull requests, do code reviews, and provide real-time feedback that improves the quality of their code—all from Google Chat.
How to: Use GitHub for Google Chat
Next steps
These are just a few of the many application integrations that can help you be more productive as a developer. Check out the Google Workspace Marketplace for more integrations you or your team might already be using. By using the right tools and applications, you can easily stay connected with your team, manage your tasks and projects, and automate your builds and deployments.
To keep track of all the latest announcements and developer updates for Google Workspace please subscribe to our monthly newsletter or follow us @workspacedevs.
Posted by Jeannie Zhang and Kevin Po; Product Managers, Nest
As the smart home industry prepares for a major shift in usability and interoperability with Matter launching later this year, we are working to help you build more devices and connections with Google products and beyond.
At Google I/O this year, we shared updates on how Google is continuing to support smart home developers, including the launch of our new and improved Google Home Developer Center. Today, we are excited to share that the Google Home Developer Console is now in Developer Preview at console.home.google.com.
What is the Google Home Developer Console?
The Google Home Developer Console is a guided flow for developers looking to integrate with Google. It provides everything needed to build intelligent and innovative smart home products with Matter. Because it simplifies the process of building Matter-enabled smart home products, you can spend more time innovating with your devices and less time on the basics.
The console is part of the Google Home Developer Center we announced earlier this year, the go-to starting place for anyone interested in developing smart home devices and apps with Google.
Google Home Device SDK
Along with this new console, we have also released two new software development kits to make building Matter devices with Google easier. We’ve created the Google Home Device SDK, which extends the open-source Matter SDK with development, testing, and go-to-market tools, making it the fastest and easiest way to build Matter devices.
Created with both new and experienced smart home developers in mind, the Google Home Device SDK includes tools such as code samples, codelabs, and a Matter virtual device to help you start building, integrating, and testing your Matter devices with Google easily.
At I/O this year, we announced Intelligence Clusters, which will allow you to access Google intelligence about the home locally and directly on your Matter devices, using a similar structure to clusters within Matter. To protect the privacy and security of our users, we have built guardrails into our Intelligence Clusters, beginning with Home & Away, to ensure that user information is always encrypted, processed locally, and only with user consent and visibility. You can learn more about these guardrails and fill out our interest form here.
Google Home Mobile SDK
Apps are invaluable to the user experience for your devices, so we have also released the Google Home Mobile SDK, a tool for building Android apps that connect directly with Matter devices. Our mobile SDK streamlines the setup process, creating a more consistent and reliable experience for Android users. These APIs make it easier to set up devices in your app, Google Home, and third-party ecosystems, and to share devices with other ecosystems and apps.
Why build with Google?
Even with Matter making interoperability the standard, determining the best platform for your smart devices is still an important consideration. Google’s end-to-end tools for Matter devices and apps complement your existing development platforms, accelerate time-to-market for your devices, improve reliability, and let you differentiate with Google Home while having interoperability with other Matter platforms.
Getting Started
Looking to get started building with Matter? Before hopping into the Google Home Developer Console, head over to our Get Started page to gather all the information you need to know before building.
We’re committed to supporting smart home developers that build and innovate with Google, by providing easy and high-quality resources. The latest tools are just an example of our ongoing commitment to be partners in this industry. We can’t wait to see what you build!
The previous Module 12 episode of the Serverless Migration Station video series demonstrated how to add App Engine Memcache usage to an existing app that has transitioned from the webapp2 framework to Flask. Today’s Module 13 episode continues its modernization by demonstrating how to migrate that app from Memcache to Cloud Memorystore. Moving from legacy APIs to standalone Cloud services makes apps more portable and provides an easier transition from Python 2 to 3. It also makes it possible to shift to other Cloud compute platforms should that be desired or advantageous. Developers benefit from upgrading to modern language releases and gain added flexibility in application-hosting options.
While App Engine Memcache provides a basic, low-overhead, serverless caching service, Cloud Memorystore “takes it to the next level” as a standalone product. Rather than a proprietary caching engine, Cloud Memorystore gives users the option to select from a pair of open source engines, Memcached or Redis, each of which provides additional features unavailable from App Engine Memcache. Cloud Memorystore is typically more cost-efficient at scale, offers high availability, provides automatic backups, and more. On top of this, one Memorystore instance can be used across many applications, and the service incorporates improvements to memory handling, configuration tuning, and so on, gained from experience managing a huge fleet of Redis and Memcached instances.
While Memcached is more similar to Memcache in usage and features, Redis has a much richer set of data structures that enable powerful application functionality when utilized. Redis has also been recognized as the most loved database in Stack Overflow’s annual developer survey, and it’s a great skill to pick up. For these reasons, we chose Redis as the caching engine for our sample app. However, if your app’s usage of App Engine Memcache is deeper or more complex, a migration to Cloud Memorystore for Memcached may be a better option as a closer analog to Memcache.
Migrating to Cloud Memorystore for Redis featured video
Performing the migration
The sample application registers individual web page “visits,” storing visitor information such as IP address and user agent. In the original app, the most recent visits are cached for an hour and used for display if the same user continuously refreshes their browser during this period; caching is one way to counter this abuse. A new visitor or an expired cache results in a new visit being registered as well as the cache being updated with the most recent visits. Such functionality must be preserved when migrating to Cloud Memorystore for Redis.
Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the “before” version, you can see how the most recent visits are cached into Memcache; after completing the migration, the underlying caching infrastructure has been swapped out in favor of Memorystore (via language-specific Redis client libraries). In this migration, we chose Redis version 5.0, and we recommend the latest versions, 5.0 and 6.x at the time of this writing, as the newest releases feature additional performance benefits, availability fixes, and so on. In the code snippets below, notice how the calls between both caching systems are nearly identical. The bolded lines represent the migration-affected code managing the cached data.
Switching from App Engine Memcache to Cloud Memorystore for Redis
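The snippets themselves appeared as screenshots in the original post. As a rough stand-in, here is a minimal sketch of the before and after calls; the `fetch_visits` stub and the Redis host/port environment variables are assumptions for illustration, not the sample’s actual code:

```python
def fetch_visits(limit):
    return []  # stand-in for the sample's Datastore query

# Before: App Engine Memcache (bundled service)
from google.appengine.api import memcache

visits = memcache.get('most_recent_visits')  # cache hit?
if visits is None:
    visits = fetch_visits(10)
    memcache.set('most_recent_visits', visits, time=3600)  # cache for an hour

# After: Cloud Memorystore (via the open source redis client library)
import os
import pickle
import redis

redis_client = redis.Redis(host=os.environ['REDIS_HOST'],
                           port=int(os.environ['REDIS_PORT']))
raw = redis_client.get('most_recent_visits')  # cache hit?
visits = pickle.loads(raw) if raw else None
if visits is None:
    visits = fetch_visits(10)
    redis_client.set('most_recent_visits', pickle.dumps(visits), ex=3600)
```

As the post notes, the calls map almost one-to-one; the main differences are connecting to a Memorystore instance explicitly and serializing values yourself.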
Wrap-up
The migration covered here begins with the Module 12 sample app (“START”). Migrating the caching system to Cloud Memorystore, plus other requisite updates, results in the Module 13 sample app (“FINISH”), along with an optional port to Python 3. To help prepare for your own migrations, follow the codelab to do it by hand while following along in the video.
While the code migration demonstrated seems straightforward, the most critical change is that Cloud Memorystore requires dedicated server instances. For this reason, a Serverless VPC Access connector is also needed to connect your App Engine app to those Memorystore instances. Furthermore, neither Cloud Memorystore nor Serverless VPC Access is a free service, and neither has an “Always Free” tier quota. Before moving forward with this migration, check the pricing documentation for Cloud Memorystore for Redis and Serverless VPC Access to determine the cost considerations before making a commitment.
One key development that may affect your decision: In Fall 2021, the App Engine team extended support of many of the legacy bundled services like Memcache to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache even when upgrading to 3.x so long as you retrofit your code to access bundled services from next-generation runtimes.
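As a rough illustration of that retrofit (a minimal sketch assuming a Flask app on the Python 3 runtime; see the bundled-services documentation for specifics), the key pieces are an app.yaml opt-in, the appengine-python-standard package, and wrapping your WSGI app:

```python
# app.yaml:
#   runtime: python39
#   app_engine_apis: true
# requirements.txt must include: appengine-python-standard

from flask import Flask
from google.appengine.api import memcache, wrap_wsgi_app

app = Flask(__name__)
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)  # enables the bundled-service APIs

@app.route('/')
def root():
    # Memcache calls now work on the Python 3 runtime, as they did on 2.x
    return memcache.get('greeting') or 'no cached greeting'
```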
A move to Cloud Memorystore and today’s migration techniques will be here if and when you decide this is the direction you want to take for your App Engine apps. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we plan to cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.
Posted by Rebecca Nathenson, Director, Product Management
As we recently announced at I/O, we’re investing in new ways to make Google Assistant your go-to conversational helper for everyday tasks. And we couldn’t do that without a rich community of developers. While Conversational Actions were an excellent way to experiment with voice, the ecosystem has evolved significantly over the last 5 years and we’ve heard some important feedback: users want to engage with their favorite apps using voice, and developers want to build upon their existing investments in Android.
In response to that feedback, we’ve decided to focus our efforts on making App Actions with Android the best way for developers to create deeper, more meaningful voice-forward experiences. As a result, we will turn down Conversational Actions one year from now, in June 2023.
Improving voice-forward experiences
Whether someone asks Assistant to start a workout, order food, or schedule a grocery pickup, we know users are looking for ways to get things done more naturally using voice. To allow developers to integrate those helpful voice experiences into existing Android content more easily – without having to build from scratch – we’re committed to working with them to build App Actions with Android. This will give users more ways to engage with an app’s content – like voice queries and proactive suggestions – and access the app features they already know and love.
We’re continuing to expand the reach of App Actions in the following ways:
Bringing more traffic without more development work (e.g., Assistant can now direct users to apps even when queries don’t mention an app name);
Driving users to the app’s Play Store page if they don’t have the app installed yet; and
Surfacing in ‘All Apps’ search for Pixel 6 users.
App Actions not only make your apps easier to discover; they also let you offer deeper voice experiences by allowing users to simply ask for what they need in their queries. Moreover, we’ll continue investing in all of the popular Assistant experiences users love, like Timers, Media, Home Automation, Communications, and more.
Supporting our developers
We know that these changes aren’t easy, which is why we’re giving developers a year to prepare for the turndown of Conversational Actions. We’re here to help you navigate this transition with these helpful resources:
Learn about the turndown: Visit the Conversational Actions sunset page for full details on what is being turned down (such as console analytics), news and updates (such as opening Media Actions more broadly to radio and live TV integrations), and FAQs.
Get started with App Actions: Take the App Actions Learning Pathway, a step-by-step training course that guides new and seasoned Android developers to implement voice-enabled app experiences. Get hands-on coding experience with our four App Actions Codelabs.
Looking ahead, we envision a platform that is intuitive, natural, and voice-forward – and one that allows developers to leverage the entire Android ecosystem of devices so they can easily reach more users. We’re always looking to improve the Assistant experience and we’re confident that App Actions is the best way to do that. We’re grateful for all you’ve done to build the Google Assistant ecosystem over the past 5 years and we’re here to help navigate the changes as we continue to make it even better. We’re excited about what lies ahead and we’re grateful to build it together.
In our ongoing Serverless Migration Station series aimed at helping developers modernize their serverless applications, one of the key objectives for Google App Engine developers is to upgrade to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17. Another objective is to help developers learn how to move away from App Engine legacy APIs (now called “bundled services”) to Cloud standalone equivalent services. Once this has been accomplished, apps are much more portable and flexible in where and how they can be hosted.
In today’s Module 12 video, we start that journey by implementing App Engine’s Memcache bundled service, setting us up for our next move to a more complete in-cloud caching service, Cloud Memorystore. Most apps rely on some database, and in many situations they can benefit from a caching layer to reduce the number of queries and improve response latency. In the video, we add use of Memcache to a Python 2 app that has already migrated web frameworks from webapp2 to Flask, providing greater portability and execution options. More importantly, it paves the way for an eventual 3.x upgrade because the Python 3 App Engine runtime does not support webapp2. We’ll cover both the 3.x and Cloud Memorystore ports next in Module 13.
Got an older app needing an update? We can help with that.
Adding use of Memcache
The sample application registers individual web page “visits,” storing visitor information such as the IP address and user agent. In the original app, these values are stored immediately, and then the most recent visits are queried to display in the browser. If the same user continuously refreshes their browser, each refresh constitutes a new visit. To discourage this type of abuse, we cache the same user’s visit for an hour, returning the same cached list of most recent visits unless a new visitor arrives or an hour has elapsed since their initial visit.
Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the “before” version, you can see how each visit is registered. After the update, the app attempts to fetch these visits from the cache. If cached results are available and “fresh” (within the hour), they’re used immediately; but if the cache is empty or a new visitor arrives, the current visit is stored as before, and this latest collection of visits is cached for an hour. The bolded lines represent the new code that manages the cached data.
Adding App Engine Memcache usage to sample app
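The snippet above appeared as an image in the original post. Here is a minimal sketch of the pattern it describes, assuming an NDB Visit model like the sample’s; the helper names are illustrative rather than the sample’s exact code:

```python
from google.appengine.api import memcache
from google.appengine.ext import ndb

class Visit(ndb.Model):
    'Visit entity: registers visitor info & timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def store_visit(remote_addr, user_agent):
    'store a new visit and invalidate the cached list'
    Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()
    memcache.delete('most_recent_visits')

def fetch_visits(limit):
    'return most recent visits, consulting Memcache first'
    visits = memcache.get('most_recent_visits')
    if visits is None:  # cache miss: query Datastore, then cache for an hour
        visits = list(Visit.query().order(-Visit.timestamp).fetch(limit))
        memcache.set('most_recent_visits', visits, time=3600)
    return visits
```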
Wrap-up
Today’s “migration” began with the Module 1 sample app. We added a Memcache-based caching layer and arrived at the finish line with the Module 12 sample app. To practice this on your own, follow the codelab, doing it by hand while following the video. The Module 12 app will then be ready to upgrade to Cloud Memorystore should you choose to do so.
If you do want to move to Cloud Memorystore, stay tuned for the Module 13 video or try its codelab to get a sneak peek. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we hope to one day cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.
We launched Coral in 2019 with a mission to make edge AI powerful, private, and efficient, and also accessible to a wide variety of customers with affordable tools that reliably go from prototype to production. In these first few years, we’ve seen a strong growth in demand for our products across industries and geographies, and with that, a growing need for worldwide availability and support.
That’s why we’re pleased to announce that we have signed an agreement with ASUS IoT to help scale our manufacturing, distribution, and support. With decades of experience in electronics manufacturing at a global scale, ASUS IoT will provide Coral with the resources to meet our growth demands while we continue to develop new products for edge computing.
ASUS IoT is a sub-brand of ASUS dedicated to the creation of solutions in the fields of AI and the internet of things (IoT). Their mission is to become a trusted provider of embedded systems and the wider AI and IoT ecosystem. ASUS IoT strives to deliver best-in-class products and services across diverse vertical markets, and to partner with customers in the development of fully-integrated and rapid-to-market applications that drive efficiency – providing convenient, efficient, and secure living and working environments for people everywhere.
ASUS IoT already has a long-standing history of collaboration with Coral, being the first partner to release a product using the Coral SoM when they launched the Tinker Edge T development board. ASUS IoT has also integrated Coral accelerators into their enterprise-class intelligent edge computers and was the first to release a multi-Edge TPU device with the award-winning AI Accelerator PCIe Card. Because we have this history of collaboration, we know they share our strong commitment to new innovation in edge computing.
ASUS IoT also has established manufacturing and distribution processes, and a strong reputation in enterprise-level sales and support. So we’re excited to work with them to enable scale and long-term availability for Coral products.
With this agreement, the Coral brand and user experience will not change, as Google will maintain ownership of the brand and product portfolio. The Coral team will continue to work with our customers on partnership initiatives and case studies through our Coral Partnership Program. Those interested in joining our partner ecosystem can visit our website to learn more and apply.
Coral.ai will remain the home for all product information and documentation, and in the coming months ASUS IoT will become the primary channel for sales, distribution and support. With this partnership, our customers will gain access to dedicated teams for sales and technical support managed by ASUS IoT.
ASUS IoT will be working to expand the distribution network to make Coral available in more countries. Distributors interested in carrying Coral products will be able to contact ASUS IoT for consideration.
We continue to be impressed by the innovative ways in which our customers use Coral to explore new AI-driven solutions. And now with ASUS IoT bringing expanded sales, support and resources for long-term availability, our Coral team will continue to focus on building the next generation of privacy-preserving features and tools for neural computing at the edge.
We look forward to the continued growth of the Coral platform as it flourishes and we are excited to have ASUS IoT join us on our journey.
Recently, we discussed containerizing App Engine apps for Cloud Run, with or without Docker. But what about Cloud Functions… can App Engine users take advantage of that platform somehow? Back in the day, App Engine was always the right decision, because it was the only option. With Cloud Functions and Cloud Run joining in the serverless product suite, that’s no longer the case.
Back when App Engine was the only choice, developers selected it to host small, single-function apps, and others created huge monolithic apps, for the same reason: it was the only option. Fast forward to today, where code follows more service-oriented or event-driven architectures: small apps can be moved to Cloud Functions to simplify the code and deployments, while large apps can be split into smaller components, each running on Cloud Functions.
Refactoring App Engine apps for Cloud Functions
Small, single-function apps can be seen as a microservice, an API endpoint “that does something,” or a utility likely called as a result of some event in a larger multi-tiered application, say to update a database row or send a customer email message. App Engine apps require some kind of web framework and routing mechanism, while Cloud Functions equivalents can be freed from much of those requirements. Refactoring these types of App Engine apps for Cloud Functions will likely require less overhead, ease maintenance, and allow common components to be shared across applications.
Large, monolithic applications are often made up of multiple pieces of functionality bundled together in one big package, such as requisitioning a new piece of equipment, opening a customer order, authenticating users, processing payments, performing administrative tasks, and so on. By breaking this monolith up into multiple microservices, each implemented as an individual function, each component can then be reused in other apps, maintenance is eased because software bugs can be identified closer to their root origins, and developers won’t step on each other’s toes.
Migration to Cloud Functions
In this latest episode of Serverless Migration Station, a Serverless Expeditions mini-series focused on modernizing serverless apps, we take a closer look at this product crossover, covering how to migrate App Engine code to Cloud Functions. There are several steps you need to take to prepare your code for Cloud Functions:
Divest from legacy App Engine “bundled services,” such as Datastore, Taskqueue, Memcache, and Blobstore
If your app is a monolith, break it up into multiple independent functions. (You can also keep a monolith together and containerize it for Cloud Run as an alternative.)
Make appropriate application updates to support Cloud Functions
The first two bullets are outside the scope of this video and its codelab, so we’ll focus on the last one. The changes needed for your app include the following:
Remove unneeded and/or unsupported configuration
Remove use of the web framework and supporting routing code
For each of your functions, assign an appropriate name and accept the request object it will receive when it is called.
Regarding the last point, note that you can have multiple “endpoints” coming into a single function that processes the request path, calling other functions to handle those routes. If you have many endpoints in your app, having a separate function for every one becomes unwieldy; if large enough, your app may be better suited for Cloud Run. The sample app in this video and corresponding code sample has only one function, so a single endpoint for that function works perfectly fine here.
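To make that concrete, here is a minimal sketch (with hypothetical handler names, not the sample’s code) of a single function dispatching on the request path:

```python
def handle_root(request):
    return 'hello from /'

def handle_visits(request):
    return 'hello from /visits'

ROUTES = {'/': handle_root, '/visits': handle_visits}

def app(request):
    'single Cloud Function fronting several "endpoints"'
    handler = ROUTES.get(request.path)  # request is the incoming Flask request
    if handler is None:
        return 'Not found', 404
    return handler(request)
```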
This migration series focuses on our earliest users, those on Python 2. Regarding the first point, the app.yaml file is deleted. Next, almost all Flask resources are removed except for the template renderer (the app still needs to output the same HTML as the original App Engine app). All app routes are removed, and there’s no instantiation of the Flask app object. Finally, for the last step, the main function is renamed more appropriately to visitme() and given a request object parameter.
This “migration module” starts with the (Python 3 version of the) Module 2 sample app, applies the steps above, and arrives at the migrated Module 11 app. Implementing those required changes is illustrated by this code “diff:”
Migration of sample app to Cloud Functions
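The diff itself appeared as an image in the original post. As a rough approximation, the migrated function ends up looking something like this sketch; the stubbed helpers stand in for the sample’s data-access code:

```python
from flask import render_template

def store_visit(remote_addr, user_agent):
    pass  # stand-in: the real sample writes a Visit entity to Datastore

def fetch_visits(limit):
    return []  # stand-in: the real sample queries the most recent visits

def visitme(request):
    'Cloud Functions passes the (Flask) request object in as a parameter'
    store_visit(request.remote_addr, request.user_agent)
    visits = fetch_visits(10)
    # the Python runtime for Cloud Functions runs atop Flask, so the
    # template renderer still works without a Flask app object of our own
    return render_template('index.html', visits=visits)
```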
Next steps
If you’re interested in trying this migration on your own, feel free to try the corresponding codelab which leads you step-by-step through this exercise and use the video for additional guidance.
All migration modules, their videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 as well as content for the next-generation Cloud Functions service, so stay tuned. If you’re curious whether it’s possible to write apps that can run on App Engine, Cloud Functions, or Cloud Run with no code changes at all, the answer is yes. Hope this content is useful for your consideration when modernizing your own serverless applications!
Posted by Kyle Zhao, Software Engineer, and Charles Maxson, Developer Advocate
Nothing breaks the flow of getting work done like having to stop what you’re doing in one application and switch over to another to look up information, log an event, create a ticket or check on the status of a project task. For Google Workspace users who also rely on Atlassian’s Jira Software for their issue tracking and project management needs, Jira for Chat helps bridge the gap between having conversations with your team in Google Chat and seamlessly staying on top of issues and tasks from Jira while keeping everyone in the loop.
Recently, there have been a number of enhancements to the Google Chat framework that allow developers to make connections between applications like Jira and Google Chat a whole lot better. In this post, we’ll take a look at how the latest version of Jira for Chat takes advantage of some of those newer enhancements for building apps and bots for Chat. Whether you are thinking about building or upgrading your own integration with Chat, or are simply interested in getting more out of using Jira with Google Workspace for you and your team, we’ll cover how Jira for Chat brings those newer features to life.
Connections made easy: Improved Connection Flow
One of the most important steps in getting users to adopt any integration is making it as easy as possible to set up. Setting up Jira to integrate with Chat requires two applications to be installed: 1) the Google Chat bot for Jira Cloud from the Atlassian Marketplace, and 2) Jira for Chat (unfortunately there are no direct links available, but you can navigate to it in the Chat catalog), located in the Google Chat application under the “+” icon to start a chat.
In the earlier version of Jira for Chat, the setup required a number of steps that were somewhat less intuitive. That’s changed with the redesigned connection flow, built around an improved connection wizard that provides detailed visual guidance for connecting Jira for Chat to your Jira instance.
The new wizard (made possible by enhancements to the Chat dialogs feature) takes the guesswork out of trudging through a number of tedious steps, shows actionable errors if something has been misconfigured or isn’t working, and makes setup easier by parsing out Jira URLs, guiding users along the way. See the connection wizard in action below. Now anyone can set it up like a pro!
Jira for Chat Connection Flow Wizard Dialog
Batched Notifications: Taking care of notification fatigue
A user-favorite feature of Jira for Chat is its ability to keep you informed via Google Chat of updates to your team’s projects, tickets, and tasks. But nobody likes a ‘chatty’ app either, and notification fatigue is real, and really annoying. Notifications are only useful when they provide valuable information in a timely fashion without becoming a burden; otherwise they run the risk of being ignored or even turned off.
To avoid notification fatigue, the Jira Chat bot batches notifications based on the time elapsed since the last activity in an issue. If a lot of activity is happening in Jira, Jira for Chat sends all updates to a ticket to Google Chat in a single card, waiting until at least 15 seconds have passed since the last update to the issue or 60 seconds have passed since the first update in the group. The latter keeps notifications fresh in case a lot of continuous activity is happening.
Updates to the same Jira issue are grouped in one notification card, until one of the following conditions is true:
15 seconds have passed without any additional updates to the issue.
Example: Alice reassigned issue X at 6:00:00, and then added a comment at 6:00:10. Both the “assignee change” and the “new comment” will be grouped into a single notification, sent at 6:00:25.
60 seconds have passed since the first update in the group (to ensure a timely delivery)
Example: Alice reassigned issue X at 6:00:00, and kept adding comments every 10 seconds. A notification card should be posted around 6:01:00, with all the changes in the past 60 seconds.
Example, Batching Notifications from 5 down to 1
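If you’re curious how such a rule might be implemented, here is a small illustrative sketch of the batching logic described above (not Atlassian’s actual code):

```python
import time

FLUSH_IDLE_SECS = 15  # send once an issue has been quiet for 15 seconds...
FLUSH_MAX_SECS = 60   # ...or 60 seconds after the first update in the group

class NotificationBatcher(object):
    'collects updates to one issue and decides when to post a single card'

    def __init__(self):
        self.updates, self.first_at, self.last_at = [], None, None

    def add(self, update):
        now = time.time()
        if not self.updates:
            self.first_at = now  # start of a new group
        self.updates.append(update)
        self.last_at = now

    def should_flush(self):
        if not self.updates:
            return False
        now = time.time()
        return (now - self.last_at >= FLUSH_IDLE_SECS or
                now - self.first_at >= FLUSH_MAX_SECS)

    def flush(self):
        'return the batch to be rendered as one Chat card'
        batch, self.updates = self.updates, []
        return batch
```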
Link Unfurling: Relevant context where you need it
One of the goals of integrating applications with Google Workspace is streamlining the flow of information with fewer clicks and fewer open tabs, making the new link unfurling feature a welcome addition to any Chat bot. Link unfurling (also known as link previews) preemptively includes contextual information associated with a link in a Chat message, keeping the information inline and in context with the conversation while eliminating the need to interrupt your focus by following the link out of the conversation to its original source.
Specifically with Jira for Chat, this means that when a teammate posts a Jira link in Chat or pings you asking for more information about one of your tickets they’ve just linked in a message, you can now see that information immediately in the conversation along with the link, saving you the steps of having to switch back to Jira every time. Link unfurling with the Jira Chat bot happens automatically once the app has been added and configured within a Chat conversation; there’s nothing additional that users need to do, and any links that Jira can preview will automatically get previewed within Chat.
Link Unfurling example in Jira for Chat
Create Issue Dialog: Take action from within Chat
Imagine you are in a lengthy conversation thread with colleagues in Google Chat, when you come to the conclusion that the topic you are discussing warrants a new ticket being created in your Jira instance. Instead of pivoting away from the conversation in Chat to create a new ticket in Jira, you can now quickly create a new Jira issue in Chat thanks to Jira for Chat.
To create an issue from Chat, simply invoke the slash command /jira_create to bring up the Create Issue dialog (enabled by the Chat dialogs feature). Then specify the Project that you would like to assign the ticket to, select the Ticket Type, and enter a brief Summary. The rest of the fields, such as labels and description, are optional; those, as well as advanced fields, can always be filled out in your Jira instance at a later time if you like. This way you can jump right back into the conversation, knowing you won’t forget to log the ticket, and without missing a beat in what your team is talking about.
Create a Jira Issue Dialog
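For developers building their own Chat apps, the same dialogs feature is available to you. As a rough sketch (not the Jira bot’s actual implementation), a Chat app can respond to a slash-command message event with a DIALOG action response:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/', methods=['POST'])
def on_chat_event():
    'open a dialog when a message event carries a slash command'
    event = request.get_json(silent=True) or {}
    if event.get('type') == 'MESSAGE' and 'slashCommand' in event.get('message', {}):
        return jsonify({'actionResponse': {
            'type': 'DIALOG',
            'dialogAction': {'dialog': {'body': {'sections': [{
                'widgets': [
                    # a single text input, standing in for a full issue form
                    {'textInput': {'label': 'Summary', 'name': 'summary'}},
                ],
            }]}}},
        }})
    return jsonify({'text': 'Try a slash command such as /jira_create.'})
```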
Takeaway and More Resources
The new enhancements to Jira for Chat make it a super useful companion for teams that rely on Google Workspace and Jira Software to manage their work. Whether it’s the new and improved connection flow, the less-is-more batched notification handling, or the instant gratification of creating issues directly from Chat, it’s not just a productivity booster but also a great showcase of how the types of apps you can build with Google Chat are evolving.
Get started with Jira for Chat today or learn how you can build your own apps for Google Chat with the developer docs. To keep up with all the news about the Google Workspace Platform, please subscribe to our newsletter.
Location targeting helps your advertising focus on finding the right customers for your business. Are you, as a digital marketer, spending a lot of time optimizing the location targeting settings for your digital marketing campaigns? Are your ads running only in locations where you can deliver your services to your users, or outside them as well?
Read on to find out how Radium can help automate your campaigns’ location targeting and make sure you only run ads where you deliver your services.
The location targeting settings challenge
Configuring accurate location targeting settings in marketing platforms like Google Ads allows your ads to appear in the geographic locations that you choose: countries, cities, and zip codes, or a radius around a location. As a result, precise geo targeting can help improve the KPIs of your campaigns, such as return on investment (ROI) and cost per acquisition (CPA) at high volumes.
Mapping your business area to the available targeting options (country, city, and zip code targeting or radius targeting) in marketing platforms is a challenge that every business running online marketing campaigns has faced. This challenge becomes critical if you offer a service that is only available in a certain geographic area, which is particularly relevant for food or grocery delivery apps and organizations that run similar business models.
Adjusting these location targeting settings is a time-consuming process. In addition, manually translating your business’s or your physical stores’ delivery areas into geo targeting settings is an error-prone process. Suboptimal targeting can lead to ads being shown to users you cannot actually deliver your services to, so you would likely lose both the money spent on those ads and the time spent setting location targeting manually.
How can Radium help you?
Radium is a simple web application, based on Apps Script, that can save you money and time. Its UI helps you automatically translate a business area into radius targeting settings, one of the three options for geo targeting in Google Ads. It also provides an overview of how well the radius targeting overlaps with your business delivery area.
It also has a few extra features, like merging several areas into one and generating the optimal radius targeting settings for the combined area.
How does it work?
You can get the app deployed and running in less than an hour by following these instructions. Once you’re done, you can quickly customize and improve your radius targeting settings to better meet your needs and optimize your marketing efforts.
For each delivery area that you provide, you will be able to visualize different circles in the UI and either select one of the default circles or opt for custom circle radius settings (a small sketch of the geometry follows the list):
Large Circle: pre-generated circle that encloses the rectangle surrounding the targeting area
Small Circle: pre-generated circle contained in the rectangle that surrounds the targeting area, touching its sides
Threshold Circle: pre-generated circle with the minimum radius needed to cover at least 90% of your delivery area, to maximize targeting and minimize waste
Custom Circle: circle whose center and radius can be customized manually by drag-and-drop and using the UI controls
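For intuition, the two pre-generated radii follow directly from the dimensions of the bounding rectangle; a minimal illustrative sketch:

```python
import math

def pregenerated_radii(width_km, height_km):
    'Large and Small circle radii from the bounding rectangle dimensions'
    large = math.hypot(width_km, height_km) / 2  # encloses the rectangle
    small = min(width_km, height_km) / 2         # fits inside, touching the sides
    return large, small

print(pregenerated_radii(10, 6))  # 10 km x 6 km rectangle -> (~5.83, 3.0)
```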
Take advantage of metrics to compare all the radius targeting options and select the best fit for your needs. In red you can see the visualization of the business targeting area and, overlapped in gray, the generated radius targeting.
Metrics:
Radius: radius of the circle, in km
% Intersection: area of the Business Targeting Area inside the circle / total Business Targeting Area size
% Waste: area of circle excluding the Business Targeting Area / total Business Targeting Area size
Circle Size: area of the circle, in km2
Intersection Size: area of the Business Targeting Area, in km2
Waste Size: area of the circle excluding the Business Targeting Area, in km2
Circle Score: % Intersection - % Waste. The highest score represents the sweet spot, maximizing the targeting area and minimizing the waste area
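In case it helps to see those definitions written out, here is a small sketch computing the metrics for one candidate circle (the intersection and business-area figures are assumed to be precomputed inputs):

```python
import math

def circle_metrics(radius_km, intersection_km2, business_area_km2):
    'metrics for one candidate circle, per the definitions above'
    circle_km2 = math.pi * radius_km ** 2
    pct_intersection = intersection_km2 / business_area_km2
    pct_waste = (circle_km2 - intersection_km2) / business_area_km2
    return {
        'radius (km)': radius_km,
        'circle size (km2)': circle_km2,
        'waste size (km2)': circle_km2 - intersection_km2,
        '% intersection': pct_intersection,
        '% waste': pct_waste,
        'circle score': pct_intersection - pct_waste,  # higher is better
    }
```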
Once you are done optimizing the radius settings, it’s time to activate them in your marketing campaigns. Radium offers different ways of storing and activating this output, so you can use the one that best fits your needs:
Export your data to a spreadsheet. This allows you to keep a mapping of readable names for each delivery area and its targeting settings, to generate the campaign settings in the CSV format expected by Google Ads, and to bulk upload them using Google Ads Editor
Directly download the CSV file that can be uploaded to Google Ads via Google Ads Editor to bulk upload the settings of your campaigns
Upload them manually using the Google Ads UI
You can find all the details about how to activate your location targeting settings in this step-by-step guide.
Getting started with Radium
The only input you need in order to benefit from Radium is a map of your business’s delivery areas, in either of these formats:
KML file representing the polygon-shaped targeting areas (see sample file)
CSV file with the latitude/longitude, radius, and name of each area, more oriented to physical stores and restaurants (see sample file)
To get started, please visit the Radium repository on GitHub.
Summary
Radium helps you automate location targeting configuration and optimization for your Google Ads campaigns, saving you time and minimizing the errors of manual adjustments.
Posted by Álvaro Lamas, Héctor Parra, Jaime Martínez, Julia Hernández, Miguel Fernandes, Pablo Gil
Acquiring high-value customers using predicted lifetime value, taking specific actions on users with a high propensity to churn, generating and activating audiences based on machine learning processed signals… all of those marketing scenarios require analyzing first-party data, performing predictions on the data, and activating the results in the different marketing platforms, like Google Ads, as frequently as possible to keep the data fresh.
Feeding marketing platforms like Google Ads on a regular and frequent basis requires a robust, report-oriented, and cost-efficient ETL & prediction pipeline. These pipelines are very similar regardless of the use case, and it’s very easy to fall into reinventing the wheel every time, or manually copying and pasting structural code and increasing the risk of introducing errors.
Wouldn’t it be great to have a common reusable structure and just add the specific code for each of the stages?
Here is where Prediction Framework plays a key role in helping you implement and accelerate your first-party data prediction projects by providing the backbone elements of the predictive process.
Prediction Framework is a fully customizable pipeline that allows you to simplify the implementation of prediction projects. You only need to have the input data source, the logic to extract and process the data and a Vertex AutoML model ready to use along with the right feature list, and the framework will be in charge of creating and deploying the required artifacts. With a simple configuration, all the common artifacts of the different stages of this type of projects will be created and deployed for you: data extraction, data preparation (aka feature engineering), filtering, prediction and post-processing, in addition to some other operational functionality including backfilling, throttling (for API limits), synchronization, storage and reporting.
The Prediction Framework was built to be hosted on Google Cloud Platform. It makes use of Cloud Functions to do all the data processing (extraction, preparation, filtering, and post-prediction processing); Firestore, Pub/Sub, and Cloud Scheduler for the throttling system and to coordinate the different phases of the predictive process; Vertex AutoML to host your machine learning model; and BigQuery as the final storage of your predictions.
Prediction Framework Architecture
To get started with the Prediction Framework, a configuration file needs to be prepared with some environment variables: the Google Cloud project to be used, the data sources, the ML model to make the predictions, and the scheduler for the throttling system. In addition, custom queries for the data extraction, preparation, filtering, and post-processing need to be added in the deploy files. Then, the deployment is done automatically using a deployment script provided by the tool.
Once deployed, all the stages will be executed one after the other, storing the intermediate and final data in the BigQuery tables:
Extract: this step will, on a timely basis, query the transactions from the data source corresponding to the run date (scheduler or backfill run date) and store them in a new table in the local project BigQuery.
Prepare: immediately after the extract of the transactions for one specific date is available, the data will be picked up from the local BigQuery and processed according to the specs of the model. Once the data is processed, it will be stored in a new table in the local project BigQuery.
Filter: this step will query the data stored by the prepare process, filter the required data, and store it in the local project BigQuery (e.g., only taking into consideration new customers’ transactions; what counts as a new customer is up to the instantiation of the framework for the specific use case).
Predict: once the new customers are stored, this step will read them from BigQuery and request the prediction using the Vertex API. Once the results are ready, they will be stored in BigQuery within the target project.
Post_process: a formula could be applied to the AutoML batch results to tune the values or to apply thresholds. Once the data is ready, it will be stored in BigQuery within the target project.
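To give a feel for what one of those Cloud Functions stages looks like, here is an illustrative sketch of an extract-style step writing one day of transactions to a staging table. The project, dataset, table, and column names are hypothetical, not the framework’s actual code:

```python
from google.cloud import bigquery

def extract(event, context):
    'Pub/Sub-triggered stage: copy one day of transactions to a staging table'
    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        destination='my-project.staging.transactions_20220601',  # hypothetical
        write_disposition='WRITE_TRUNCATE',
    )
    job_config.query_parameters = [
        bigquery.ScalarQueryParameter('run_date', 'DATE', '2022-06-01'),
    ]
    sql = """
        SELECT * FROM `my-project.source.transactions`  -- hypothetical
        WHERE DATE(created_at) = @run_date
    """
    client.query(sql, job_config=job_config).result()  # block until done
```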
One of the powerful features of the Prediction Framework is that it allows backfilling directly from the BigQuery user interface, so in case you need to reprocess a whole period of time, it can be done in literally four clicks.
In summary: Prediction Framework simplifies the implementation of first-party data prediction projects, saving time and minimizing errors of manual deployments of recurrent architectures.
For additional information and to start experimenting, you can visit the Prediction Framework repository on Github.
Posted by Google Cloud training & certifications team
Validated cloud skills are in demand. With Google Cloud certifications, employers know that certified individuals have proven knowledge of various professional roles within the cloud industry. Google Cloud certifications have also been recognized as some of the highest-paying IT certifications for the past several years. This year, the Google Cloud Certified Professional Data Engineer topped the list with an average salary of $171,749, while the Google Cloud Certified Professional Cloud Architect came in second place, with an average salary of $169,029.
You may be wondering what sort of background you need to take advantage of these opportunities: What sort of classes should you take? How exactly do you get started in the cloud without experience? Here are some tips to start learning about Google Cloud and build your cloud computing skills.
Get hands-on experience with cloud computing
Google Cloud training offers a wide range of learning paths featuring comprehensive courses and hands-on labs, so you get to practice with the real Google Cloud console. For instance, if you wanted to prepare for the Professional Data Engineer certification mentioned above, there is a complete learning path featuring four courses and 31 hands-on labs to familiarize you with relevant topics like BigQuery, machine learning, IoT, TensorFlow, and more.
There are nine learning paths providing you with a launch pad to all major pillars of cloud computing, from networking and cloud security to database management and hybrid cloud infrastructure. Each broader learning path contains specific learning paths to help you train for job roles like Machine Learning Engineer. Visit the Google Cloud training page to find the right path for you.
Learn live from cloud experts
Google Cloud regularly hosts a half-day live training event called Cloud OnBoard which features hands-on learning led by experts. All sessions are also available to watch on-demand after the event.
If you’re a developer new to cloud computing, we recommend you start with Google Cloud Fundamentals, an entry-level course to learn about the basics of Google Cloud. Experts guide you through hands-on labs where you can practice using the Google Console, Google Cloud Shell, and more.
You’ll be introduced to core components of Google Cloud and given an overview of how its tools impact the entire cloud computing landscape. The curriculum covers Compute Engine and how to create VM instances from scratch and from existing templates, how to connect them together, and ends with projects that can talk to each other safely and securely. You will also learn about the different storage and database options available on Google Cloud.
Other Cloud OnBoard event topics include cloud architecture, Kubernetes, data analytics, and cloud application development.
Explore Google Cloud infrastructure
Cloud infrastructure is the backbone of the internet. Understanding it is a good place to begin digging deeper into cloud concepts because it will give you a taste of the various aspects of cloud computing and help you figure out what you like best, whether it’s networking, security, or application development.
Build your foundational Google Cloud knowledge with our on-demand infrastructure training in the cloud infrastructure learning path. This learning path will provide you with practical experience through expert-guided labs which dive into Cloud Storage and other key application services like Google Cloud’s operations suite and Cloud Functions.
Show off your skills
Once you have a strong grasp on Google Cloud basics, you can start earning skill badges to demonstrate your experience.
Skill badges are digital credentials that recognize your ability to solve real-world problems with your cloud knowledge. You can share them on your resume or social profile so your professional network sees your technical skills. This can be useful for recruiters or employers as you transition to cloud computing work. Skill badges also enable you to get in-depth, hands-on experience with different Google Cloud offerings on the way to earning the credential.
You can also use them to start preparing for Google Cloud certifications which are more intensive and show employers that you are a cloud expert. Most Google Cloud certifications recommend having at least 6 months or up to several years of industry experience depending on the material.
Ready to get started in the cloud? Visit the Google Cloud training page to see all your options from in-person classes, online courses, special events, and more.
Recently, Google Workspace Marketplace launched new features for developers to improve the application search and evaluation experience in the Marketplace for our Google Workspace admins and end users.
The new badges allow developers to promote their published Google Workspace Marketplace applications on their own websites. Users who click a badge are taken directly to the Marketplace application listing, where they can review application details, the privacy policy, terms of service, and more. These users can then securely install applications directly from the Google Workspace Marketplace.
This promotional tool can help you reach more potential users and grow your business. By being part of the Google Workspace Marketplace ecosystem, you receive direct access to more than 3 billion Google Workspace users, who can browse, search, and install the solutions they need. To date, we’ve seen a stunning 4.8 billion apps installed in Google Workspace, transforming the way users connect, create, and collaborate.
The new Google Workspace Marketplace badge for developers.
Use the badge to show your existing and potential customers where they can find your application, backed by Google Workspace’s industry-leading standards of quality, security, and compliance. For developers with products available globally, the badge is available in 29 languages including Arabic, Spanish, Vietnamese, and more.
The new Google Workspace Marketplace badge creation page for developers.
Refresh your Marketplace application listing and marketing assets
When you are ready to incorporate the new badge into your website to feature your Marketplace app, we recommend that you also review the creatives and information used in your app’s listing in the Google Workspace Marketplace. Consider including well-designed feature graphics in your listing, and localize your feature graphics, screenshots, and description to improve conversions globally.