Category Archives: Application Development

  • 5 Dec 2019
  • By editor
  • Categories: Application Development, G Suite, G Suite Developers, GCP, Sheets
From Sheets to Apps: how to curate and send content automatically with a simple script

No matter the size of your business or the industry, sharing information is natural. It’s what makes a company run. And if you work in marketing, you understand that content can be valuable to people long after it’s been shared. 

From whitepapers to ebooks to videos, businesses put a lot of effort into making content, but this information can often get buried. As time passes, it may no longer be easy to find on websites, in old email threads, or through online search. Instead of requiring individuals to “dig” to find your information, consider making it available at their request. In this post, we’ll go over how to curate and send content using an online form and a simple script in G Suite.

Curate and send marketing materials automatically
Let’s say that you work for a gardening business and your customers are interested in receiving marketing materials on specific subjects, like sustainability, community gardening, nutrition and more. You can use Google Forms as the primary interface to gather requests and automatically curate and email relevant information on these topics. When a user checks off boxes in the form, they get an email with links to the assets they selected thanks to a bit of Apps Script, Google’s JavaScript platform in the cloud.

You can even use a Google Docs template to make the email appear professional with a special header, custom font or inserted imagery. 

How to set up the script
If you’d like to try this for yourself, follow these step-by-step instructions in our G Suite Solution Gallery, which houses many free scripts that you can use. You’ll first need to make a copy of a spreadsheet, and then access Apps Script within the spreadsheet interface to have the code set up for you. In just seven steps, you can set up a workflow to automatically email users content that they desire. You can also customize your code as you see fit.

What’s great is that the email workflow is kicked off every time a user submits a form response, thanks to the onFormSubmit function. By activating the trigger from the Sheet’s script editor (under Tools > Script Editor), you turn the spreadsheet into a basic app that sends content follow-ups without any need to fine-tune the code.
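For a flavor of the mechanism, here’s a minimal, hypothetical sketch of such a handler (the column names, topics, and URLs are placeholders; the Solution Gallery script is more complete):

var ASSETS = {  // hypothetical topic-to-link mapping
  'Sustainability': 'https://example.com/sustainability.pdf',
  'Community gardening': 'https://example.com/community-gardening.pdf'
};

// Runs on every form submission once the onFormSubmit trigger is installed.
function onFormSubmit(e) {
  var email = e.namedValues['Email Address'][0];
  // Checkbox answers arrive as a single comma-separated string.
  var topics = e.namedValues['Topics'][0].split(', ');
  var links = topics.map(function(topic) { return ASSETS[topic]; }).join('\n');
  MailApp.sendEmail(email, 'Your requested gardening content', 'Here you go:\n' + links);
}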

Use Google Sheets to analyze performance
To take it a step further, you can measure the performance of the content that’s downloaded from your “spreadsheet application” using Google Sheets’ built-in data analysis and visualization tools. Use pivot tables or cell functions within Sheets to tally and analyze the total requests for each content asset. Or, more conveniently, try the Explore feature at the bottom of your spreadsheet and have Sheets do the analysis for you (with the help of machine learning). Ask questions like “which content topics had the highest count?” to get insights.

Once you’re ready to present findings, Sheets integrates closely with Google Slides and Docs, so you can insert charts and tables within documents to share with others. Click the “update” button on each visual to refresh the data in real time, so you always present the latest information.

Next steps? Copy the code to build your own “spreadsheet application” and automatically share content with others. Or check out this article to learn how the code works in more detail. If you want more inspiration, check out our G Suite Solutions Gallery to see what else you can build.

  • 26 Nov 2019
  • By editor
  • Categories: Application Development, GCP, Google Cloud Platform, serverless
You can cook turkey in a toaster oven, but you don’t have to

When I was in college and couldn’t make it home for the Thanksgiving holiday, I would get together with other students in the same situation and do the next best thing: cook a traditional Thanksgiving feast of roast turkey, mashed potatoes and gravy, stuffing, and green beans by ourselves. In a dorm room. Using the kitchen equipment we had available: a toaster oven and a popcorn popper. 

The resulting dinner wasn’t terrible, but it didn’t hold a candle to the meal my family was enjoying back home, made with the benefit of an oven, high-BTU range, food processor, standing mixer—you get the idea.

Software development teams are sometimes in a similar situation. They need to build something new and have a few tools, so they build their application using what they have. Like our dorm-room Thanksgiving dinner, this can work, but it is probably not a good experience and may not get the best result.

Today, with cloud computing, software development teams have a lot more resources available to them. Sometimes, though, teams move to the cloud but keep using the same old tools, just on a larger scale. That’s like moving from a toaster oven to a wall of large ovens, without looking into how things like convection or microwave ovens, broilers, sous-vide cooking, instant pots, griddles, breadmakers, or woks can help you make a meal.

In short, if you’re an application developer and you’ve moved to the cloud, you should really explore all the new kinds of tools you can use to run your code, beyond configuring and managing virtual machines.

Like the number of side dishes on my parents’ holiday table, the number of Google Cloud Platform products you might use can be overwhelming. Here are a few you might want to look at first:

  • App Engine Standard Environment is a serverless platform for web applications. You bring your own application code and let the platform handle the web server itself, along with scaling and monitoring. It can even scale to zero, so if there are idle periods without traffic, you won’t be paying for compute time you aren’t using.

  • Some of the code you need might not be an application, but just a handler to deal with events as they happen, such as new data arriving or some operation being ready to start. Cloud Functions is another serverless platform that runs code written in supported languages in response to many kinds of events. Cloud Run can do similar tasks for you, with fewer restrictions on what languages and binaries you can run, but requiring a bit more management on your part.

  • Do you need regular housekeeping tasks performed, such as generating daily reports or deleting stale data? Instead of running a virtual machine just so you can trigger a cron job, you can have Cloud Scheduler do the triggering for you. If you want to get really fancy (like your aunt’s bourbon pecan pie), you can have it trigger another serverless offering, such as Cloud Functions, at specified intervals.

  • Instead of installing and managing a relational database server, use Cloud SQL instead. It’s reliable and secure, and handles backups and replication for you.

  • Maybe you don’t need (or just don’t want to use) a relational database. Cloud Firestore is a serverless NoSQL database that’s easy to use and that will scale up or down as needed. It also replicates your data across multiple regions for extremely high availability.

  • After Thanksgiving dinner, you may feel like a blob. Or you may just need to store blobs of data, such as files. But you don’t want to use a local filesystem; you want replicated, backed-up storage. Some teams put these blobs into general-purpose databases, but that’s not a good fit and can be expensive. Cloud Storage is designed to store and retrieve blob-format data on demand, affordably and reliably.

These products are great starting points in rethinking what kind of infrastructure your application could be built on, once you have adopted cloud computing. You might find they give you a better development experience and great outcomes relative to launching and managing more virtual machines. Now if you’ll excuse me, dinner’s ready!

  • 26 Nov 2019
  • By editor
  • Categories: Application Development, GCP, Google Cloud Platform, Management Tools
Stackdriver Logging comes to Cloud Code in Visual Studio Code

A big part of troubleshooting your code is inspecting the logs. At Google Cloud, we offer Cloud Code, a plugin to popular integrated development environments (IDEs) to help you write, deploy, and debug cloud-native applications quickly and easily. Stackdriver Logging, meanwhile, is the go-to tool for all Google Cloud Platform (GCP) logs, providing advanced searching and filtering as well as detailed information about them. 

But deciphering logs can be tedious. Even worse, you need to leave your IDE to access Stackdriver Logging. Now, with the Cloud Code plugin, you can access your Stackdriver logs in the Visual Studio Code IDE directly! The new Cloud Code logs viewer helps you simplify and streamline the diagnostics process with three new features:

  • Integration with Stackdriver Logging 
  • A customizable logs viewer
  • Kubernetes-specific filtering  

View Stackdriver logs in VS Code

With the new Cloud Code logs viewer you can access your Stackdriver logs in VS Code directly. Simply open the logs viewer and Cloud Code displays all your Stackdriver logs. You can edit the filters just like you do in Stackdriver, and if you would like to see more detailed information you can easily return to Stackdriver Logging from the IDE with your filters in place.

[Animation: opening the Stackdriver logs viewer from the VS Code command palette]

In contrast to kubectl logs, Stackdriver logs are natively integrated with Google Cloud. Learn more about Stackdriver Logging here. 

Improved log exploration 

The new logs viewer provides a structured log-viewing experience with several new features, including severity filters, colorized output, streaming capabilities, and timezone conversions. It presents an organized view of your logs and lets you filter and search them from within VS Code. Think of the logs viewer as the first stop for all of your logs, without having to leave your IDE. The logs viewer also supports kubectl logs.

[Animation: using filters in the logs viewer]

Kubernetes-specific filtering 

Kubernetes logs are complex. The new logs viewer lets you filter on Kubernetes-specific elements, including namespace, deployment, pod, container, and keyword. This allows you to easily see the logs for a specific pod, or all the logs from a given deployment, so you can navigate complex logs more effectively.

In addition to manual filtering, you can access the logs viewer from the Cloud Code resource browser and use the tree view to filter your logs. This way, you can locate a resource with the context around it. The tree view shows status and context information that can help you find important logs such as unhealthy or orphaned pods.

[Animation: viewing logs from the Cloud Code tree view]

Get started 

Accessing Stackdriver Logs in VS Code with Cloud Code brings your logs closer to your code, with advanced filtering options that help you stay focused and in your IDE. To learn more, check out this guide to getting started with the Log Viewer. If you are new to Cloud Code or Stackdriver Logging, start by learning how to install Cloud Code and set up Stackdriver. If you are already using Cloud Code and Stackdriver Logging, there are no prerequisites to get started—just open the new logs viewer with Cloud Code and you’re ready to go!

  • 20 Nov 2019
  • By editor
  • Categories: Anthos, Apigee, Application Development, GCP, Hybrid Cloud, Next
Don’t just move to the cloud, modernize with Google Cloud

Our customers tell us they don’t just want to migrate their applications from point A to point B, they want to modernize their applications with cloud-native technologies and techniques, wherever those applications may be. 

Today, we’re excited to tell you about a variety of new customers that are using Anthos to transform their application portfolio, as well as new cloud migration, API management, and application development offerings:

  • New customers leveraging Anthos for a variety of on-prem, cloud and edge use cases
  • The general availability of Migrate for Anthos
  • Apigee hybrid in general availability
  • The general availability of Cloud Code

Accelerating app modernization with Anthos

Anthos was the first open app modernization platform to offer a unified control plane and service delivery across diverse cloud environments—managed cloud, on-premises and edge. Since it became generally available in the spring, organizations across a variety of industries and geographies have turned to Anthos to bring the benefits of cloud, containers and microservices to their applications. 

According to the findings of Forrester’s Total Economic Impact study, customers adopting Anthos have seen up to a 5x return on investment, based on savings in ongoing license and support costs and incremental gains in operations and developer productivity. For one customer in the financial services industry, rolling out new features and updates to their core banking application used to take at least a quarter. With Anthos, they were able to eliminate months-long development and release cycles and roll out on a weekly basis. That’s a 13x improvement in time to market.

This week, several new European Anthos customers will take the stage at Next UK to talk about how they’re using Anthos to transform their IT operations. 

Kaeser Kompressoren SE of Coburg, Germany, is a provider of compressed air products and services. The company needed a consistent platform to deploy and manage existing on-prem SAP workloads, like SAP Data Hub, and also wanted to be able to tap into other services running in Google Cloud to get more value from those environments. 

“Application modernization is enabling business innovation for Kaeser,” said Falko Lameter, CIO. “To gain better insights from data, we knew we needed to incorporate advanced machine learning and data analytics in all our applications. We chose Google Cloud’s Anthos because it offered the flexibility to incrementally modernize our legacy application on-premises without business disruption, while allowing us to run other applications on Anthos in Google Cloud and take advantage of its managed data analytics and ML/AI services.”

Then there’s Denizbank. Based in Turkey, Denizbank provides a variety of commercial banking services, and established the first Digital Banking Department in Turkey in 2012. Denizbank turned to Anthos for an open application modernization platform to help it develop its next-generation mobile banking applications.

“We operate in 11 different countries and have to comply with various regulatory requirements like data locality and sovereignty, which mandates some or all applications to reside on premises in certain countries, while the rest of the apps can move to the cloud in other countries,” said Dilek Duman, COO of DenizBank. “We chose Google Cloud’s Anthos for its flexibility to modernize our existing application investments with ease, and to deliver AI/ML powered software faster while improving operational security and governance. Anthos gives us the ability to have a unified management view of our hybrid deployments, giving us a consistent platform to run our banking workloads across environments.” 

Anthos is even starting to be deployed to edge locations, where, thanks to its 100% software-based design, it can run on any number of hardware form factors. We’re in advanced discussions with customers in telecommunications, retail, manufacturing and entertainment about using Anthos for edge use cases, as well as with global hardware OEMs.

Move and modernize with Migrate for Anthos

In addition to leveraging cloud technology for their on-premises environments with Anthos, customers also want to simultaneously migrate to the cloud and modernize with containers. That’s why we’re happy to announce the general availability of Migrate for Anthos, which provides a fast, low-friction path to convert physical servers or virtual machines from a variety of sources (on-prem, Amazon AWS, Microsoft Azure, or Google Compute Engine) directly into containers in Anthos GKE.

Migrate for Anthos makes it easy to modernize your applications without a lot of manual effort or specialized training. After upgrading your on-prem systems to containers with Migrate for Anthos, you’ll benefit from a reduction in OS-level management and maintenance, more efficient resource utilization, and easy integration with Google Cloud services for data analytics, AI and ML, and more. 

DevFactory aims to offload repetitive tasks in software development so that dev teams can focus on coding and productivity. As advocates for optimization through containers, they found Migrate for Anthos a key way to help deliver on their goals:  

“We usually see less than 1% resource utilization in data centers. Migrate for Anthos is a remarkable tool that allows us to migrate data center workloads to the cloud in a few simple steps,” said Rahul Subramaniam, CEO, Devfactory. “By automatically converting servers and virtual machines into containers with Migrate for Anthos, we get better resource utilization and dramatically reduced costs along with managed infrastructure in the end state, which makes this a very exciting and much-needed solution.” 

Migrate for Anthos is available at no additional cost, and can be used with or without an Anthos subscription.

API-first, everywhere, with Apigee hybrid 

To drive modernization and innovation, enterprises are increasingly adopting API-first approaches to connecting services across hybrid and multi-cloud environments. To address the need for hybrid API management, we’re announcing the general availability of Apigee hybrid, giving you the flexibility to deploy your API runtimes in a hybrid environment, while using cloud-based Apigee capabilities such as developer portals, API monitoring, and analytics. Apigee hybrid can be deployed as a workload on Anthos, giving you the benefits of an integrated Google Cloud stack, with Anthos’ automation and security benefits. 

Gap Inc. uses Apigee to publish, secure, and analyze APIs and easily onboard the development teams working with those APIs. Apigee hybrid will help Gap Inc. overcome the traditional tradeoffs between on-premises and cloud, providing the best of both worlds.   

“With Apigee hybrid, we can have an easy to manage, localized runtime for scenarios where latency or data sensitivity require it. At the same time, we can continue to enjoy all the benefits of Apigee such as Apigee’s developer portal and its rich API-lifecycle management capabilities,” said Patrick McMichael, Enterprise Architect at Gap Inc. 

Simplifying the developer experience

Google Cloud application development tools are designed to help you simplify creating apps for containers and Kubernetes, incorporate security and compliance into your pipelines, and scale up or down depending on demand, so you only pay for what you use. 

With these goals in mind, last week we announced the general availability of Cloud Run and Cloud Run for Anthos. Cloud Run is a managed compute platform on Google Cloud that lets you run serverless containers, either in a fully managed environment or on Anthos. With Cloud Run fully managed, you can easily deploy and run stateless containers written in any language, and enjoy serverless benefits such as automatic scale-up and scale-down and pay-for-use—without having to manage the underlying infrastructure.

Cloud Run for Anthos, meanwhile, brings that same serverless developer experience to Anthos-managed clusters, giving developers access to a modern serverless compute platform while their organization modernizes its on-prem environment with Kubernetes.

Easier Kubernetes development with Cloud Code

Today, we’re excited to announce the general availability of another important member of the Google Cloud application development stack: Cloud Code, which lets developers write, debug and deploy code to Google Cloud or any Kubernetes cluster through extensions to popular Integrated Developer Environments (IDEs) such as Visual Studio Code and IntelliJ. 

Developers are most productive while working in their favorite IDE. By embracing developers’ existing workflow and tools, Cloud Code makes working with Kubernetes feel like you are working with a local application, while preserving the investment you’ve made to configure your tools to your own specific needs. Cloud Code dramatically simplifies the creation and maintenance of Kubernetes applications.

In addition, Cloud Code speeds up development against Kubernetes by extending the edit-debug-review “inner loop” to the cloud. You get rapid feedback on your changes, ensuring that they’re of high quality. And when it comes to moving code to the production environment, Cloud Code supports popular continuous integration and delivery (CI/CD) tools like Cloud Build. 

Finally, with Cloud Code, diagnosing issues does not require a deep understanding of Kubernetes, thanks to connected debuggers and cluster-wide logging that help you address issues all from the context of your favorite tool. 

Toward modern, efficient applications

Application modernization means a lot of things to a lot of people. Depending on your environment, it can mean updating VMs to containers and Kubernetes, it can mean moving them to the cloud, or it can mean distributing them to edge locations and unifying workloads with consistent API and service management. For others, application modernization means using cloud-native tools and concepts like serverless and CI/CD. Whatever your definition, we can help you realize your business and modernization goals, achieving greater agility while improving overall governance.

  • 7 Nov 2019
  • By editor
  • Categories: Application Development, Containers & Kubernetes, DevOps & SRE, GCP, Open Source
Kubernetes development, simplified—Skaffold is now GA

Back in 2017, we noticed that developers creating Kubernetes-native applications spent a long time building and managing container images across registries, manually updating their Kubernetes manifests, and redeploying their applications every time they made even the smallest code changes. We set out to create a tool to automate these tasks, helping them focus on writing and maintaining code rather than managing the repetitive steps required during the edit-debug-deploy ‘inner loop’. From this observation, Skaffold was born.

Today, we’re announcing our first generally available release of Skaffold. Skaffold simplifies common operational tasks that you perform when doing Kubernetes development, letting you focus on your code changes and see them rapidly reflected on your cluster. It’s the underlying engine that drives Cloud Code, and a powerful tool in and of itself for improving developer productivity.

Skaffold’s central command, skaffold dev, watches local source code for changes, and rebuilds and redeploys applications to your cluster in real time. But Skaffold has grown to be much more than just a build and deployment tool—instead, it’s become a tool to increase developer velocity and productivity.

Feedback from Skaffold users bears this out. “Our customers love [Kubernetes], but consistently gave us feedback that developing on Kubernetes was cumbersome. Skaffold hit the mark in addressing this problem,” says Warren Strange, Engineering Director at ForgeRock. “Changes to a Docker image or a configuration that previously took several minutes to deploy now take seconds. Skaffold’s plugin architecture gives us the ability to deploy to Helm or Kustomize and use various Docker build plugins such as Kaniko. Skaffold replaced our bespoke collection of utilities and scripts with a streamlined tool that is easy to use.”

A Kubernetes developer’s best friend

Skaffold is a command line tool that saves developers time by automating most of the development workflow from source to deployment in an extensible way. It natively supports the most common image-building and application deployment strategies, making it compatible with a wide variety of both new and pre-existing projects. Skaffold also operates completely on the client-side, with no required components on your cluster, making it super lightweight and high-performance.

[Diagram: Skaffold’s inner development loop]

By taking care of the operational tasks of iterative development, Skaffold removes a large burden from application developers and substantially improves productivity.

Over the last two years, there have been more than 5,000 commits from nearly 150 contributors to the Skaffold project, resulting in 40 releases, and we’re confident that Skaffold’s core functionality is mature. To commemorate this, let’s take a closer look at some of Skaffold’s core features.

Fast iterative development
When it comes to development, skaffold dev is your personal ops assistant: it knows about the source files that comprise your application, watches them while you work, and rebuilds and redeploys only what’s necessary. Skaffold comes with highly optimized workflows for local and remote deployment, giving you the flexibility to develop against local Kubernetes clusters like Minikube or Kind, as well as any remote Kubernetes cluster.

“Skaffold is an amazing tool that simplified development and delivery for us,” says Martin Höfling, Principal Consultant at TNG Technology Consulting GmbH. “Skaffold hit our sweet spot by covering two dimensions: First, the entire development cycle from local development, integration testing to delivery. Second, Skaffold enabled us to develop independently of the platform on Linux, OSX, and Windows, with no platform-specific logic required.”

Skaffold’s dev loop also automates typical developer tasks. It automatically tails logs from your deployed workloads, and port-forwards the remote application to your machine, so you can iterate directly against your service endpoints. Using Skaffold’s built-in utilities, you can do true cloud-native development, all while using a lightweight, client-side tool.

Production-ready CI/CD pipelines
Skaffold can be used as a building block for your production-level CI/CD pipelines. Taylor Barrella, Software Engineer at Quora, says that “Skaffold stood out as a tool we’d want for both development and deployment. It gives us a common entry point across applications that we can also reuse for CI/CD. Right now, all of our CI/CD pipelines for Kubernetes applications use Skaffold when building and deploying.”

Skaffold can be used to build images and deploy applications safely to production, reusing most of the same tooling that you use to run your applications locally. skaffold run runs an entire pipeline from build to deploy in one simple command, and can be decomposed into skaffold build and skaffold deploy for more fine-tuned control over the process. skaffold render can be used to build your application images, and output templated Kubernetes manifests instead of actually deploying to your cluster, making it easy to integrate with GitOps workflows.

Profiles let you use the same Skaffold configuration across multiple environments, express the differences via a Skaffold profile for each environment, and activate a specific profile using the current Kubernetes context. This means you can push images and deploy applications to completely different environments without ever having to modify the Skaffold configuration. This makes it easy for all members of a team to share the same Skaffold project configuration, while still being able to develop against their own personal development environments, and even use that same configuration to do deployments to staging and production environments.
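As a sketch of how profiles work (the image name, Kubernetes context, and manifest paths below are placeholders, not taken from any particular project):

apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
  - image: gcr.io/my-project/my-app   # placeholder image name
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
profiles:
- name: prod
  activation:
  - kubeContext: prod-cluster         # activates when kubectl targets this context
  deploy:
    kubectl:
      manifests:
      - k8s-prod/*.yaml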

On-cluster application debugging
Skaffold can help with a whole lot more than application deployment, not least of which is debugging. Skaffold natively supports direct debugging of Golang, NodeJS, Java, and Python code running on your cluster!

The skaffold debug command runs your application with a continuous build and deploy loop, and forwards any required debugging ports to your local machine. This allows Skaffold to automatically attach a debugger to your running application. Skaffold also takes care of any configuration changes dynamically, giving you a simple yet powerful tool for developing Kubernetes-native applications. skaffold debug powers the debugging features in Cloud Code for IntelliJ and Cloud Code for Visual Studio Code.


Cloud Code: Kubernetes development in the IDE

Cloud Code comes with tools to help you write, deploy, and debug cloud-native applications quickly and easily. It provides extensions to IDEs such as Visual Studio Code and IntelliJ to let you rapidly iterate, debug, and deploy code to Kubernetes. If that sounds similar to Skaffold, that’s because it is—Skaffold powers many of the core features that make Cloud Code so great! Things like local debugging of applications deployed to Kubernetes and continuous deployment are baked right into the Cloud Code extensions with the help of Skaffold.

To get the best IDE experience with Skaffold, try Cloud Code for Visual Studio Code or IntelliJ IDEA!

What’s next?

Our goal with Skaffold and Cloud Code is to offer industry-leading tools for Kubernetes development, and since Skaffold’s inception, we’ve engaged the broader community to ensure that Skaffold evolves in line with what users want. There are some amazing ideas from external contributors that we’d love to see come to fruition, and with the Kubernetes development ecosystem still in a state of flux, we’ll prioritize features that will have the most impact on Skaffold’s usefulness and usability. We’re also working closely with the Cloud Code team to surface Skaffold’s capabilities inside your IDE.

With the move to general availability, there’s never been a better time to start using (or continue using) Skaffold, trusting that it will provide an excellent, production-ready development experience you can rely on.

For more detailed information and docs, check out the Skaffold webpage, and as always, you can reach out to us on GitHub and Slack.


Special thanks to all of our contributors (you know who you are) who helped make Skaffold the awesome tool it is today!

  • 29 Oct 2019
  • By editor
  • Categories: Application Development, Compute, GCP, Google Cloud Platform, serverless
App Engine Java 11 is GA—deploy a JAR, scale it, all fully managed

Attention, Java developers. If you want to build modern Java backends, use modern frameworks, or use the latest language features of Java 11, know that you can now deploy and scale your Java 11 apps in App Engine with ease. 

We’re happy to announce that the App Engine standard environment Java 11 runtime is now generally available, giving you the flexibility to run any Java 11 application, web framework, or service in a fully managed serverless environment. 

Modern, unrestricted, managed
With the App Engine standard environment Java 11 runtime, you are in control of what you want to use to develop your application. You can use your favorite framework, such as Spring Boot, Micronaut, Quarkus, Ktor, or Vert.x. In fact, you can use pretty much any Java application that serves web requests specified by the $PORT environment variable (typically 8080). You can also use any JVM language, be it Apache Groovy, Kotlin, Scala, etc.

With no additional work, you also get the benefits of the fully managed App Engine serverless platform. App Engine can transparently scale your application up to handle traffic spikes, and scale it back down to zero when there’s no traffic. App Engine automatically updates your runtime environment with the latest security patches to the operating system and the JDK, so you don’t have to spend time provisioning or managing servers, load balancers, or any other infrastructure!

You also get traffic splitting, request tracing, monitoring, centralized logging, and production debugger capabilities out of the box.

In addition, if you can start your Java 11 application locally with java -jar app.jar, then you can run it on App Engine standard environment Java 11 runtime, with all the benefits of a managed serverless environment.

Finally, the App Engine standard environment Java 11 runtime comes with twice the memory of the earlier Java 8 runtime, at no additional cost. Below is a table outlining the memory limit for each instance class.

Instance class | Java 8 memory | Java 11 memory
F1             | 128 MB        | 256 MB
F2             | 256 MB        | 512 MB
F4             | 512 MB        | 1024 MB
F4_1G          | 1024 MB       | 2048 MB

Getting started with a Spring Boot application
At beta, we showed you how to get started with a simple hello world example. Now, let’s take a look at how to start up a new Spring Boot application.

With the Java 11 runtime, you can deploy a Spring Boot application to App Engine standard as a plain JAR file, using the gcloud command line or the Maven and Gradle plugins.

To start up a new Spring Boot application, all you need is a GCP project and the latest gcloud CLI installed locally. Then, follow these steps:

1. Create a new Spring Boot application from the Spring Boot Initializr with the Web dependency and unzip the generated archive. Or, simply use this command line:
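(One plausible way, using the Spring Initializr REST API; the project directory name is up to you:)

curl https://start.spring.io/starter.zip \
  -d dependencies=web -d javaVersion=11 -o demo.zip
unzip demo.zip -d demo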

2. Add a new REST Controller that returns “Hello App Engine!”:

src/main/java/com/example/demo/HelloController.java
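A minimal controller along these lines does the trick (the package name assumes the default Initializr layout):

package com.example.demo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

  // Serve the greeting at the application root.
  @GetMapping("/")
  public String hello() {
    return "Hello App Engine!";
  }
}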

3. Build the application JAR:
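(Using the Maven wrapper that Initializr generates:)

./mvnw clean package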

4. Deploy it using gcloud CLI:
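(The JAR name below assumes the default Initializr artifact and version:)

gcloud app deploy target/demo-0.0.1-SNAPSHOT.jar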

Once the deployment is complete, browse over to https://[PROJECT-ID].appspot.com to test it out (or simply run gcloud app browse). Your application will use the default app.yaml configuration, on an F1 instance class.

To customize your runtime options, such as running with more memory and CPU power, setting an environment variable, or changing a Java command-line flag, add an app.yaml file:

src/main/appengine/app.yaml
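(An illustrative app.yaml; the instance class, environment variable, and entrypoint flags are examples, not requirements:)

runtime: java11
instance_class: F4
env_variables:
  SPRING_PROFILES_ACTIVE: "prod"
# Optionally override the start command to pass JVM flags:
# entrypoint: java -XX:+UseContainerSupport -jar demo-0.0.1-SNAPSHOT.jar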

Then, you can deploy an application using either a Maven or Gradle plugin:
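(Assuming the App Engine Maven or Gradle plugin is configured; see the note below:)

./mvnw package appengine:deploy
# or
./gradlew appengineDeploy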

Note: You can also configure the plugin directly in Maven’s pom.xml or in Gradle’s build script.

Finally, you can also deploy a pre-built JAR with an app.yaml configuration using the gcloud CLI tool. First create an empty directory and place both the JAR file and app.yaml in that directory so the directory content looks like this:
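(The JAR name again assumes the Initializr defaults:)

.
├── app.yaml
└── demo-0.0.1-SNAPSHOT.jar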

Then, from that directory, simply run:
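(gcloud picks up the app.yaml and JAR from the current directory:)

gcloud app deploy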

Try it out!
Read the App Engine Standard Java 11 runtime documentation to learn more. Try it with your favorite frameworks with samples in the GCP Java Samples GitHub repository. If you have an existing App Engine Java 8 application, read the migration guide to move it to App Engine Java 11. Finally, don’t forget you can take advantage of the App Engine free tier while you experiment with our platform.

From the App Engine Java 11 team: Ludovic Champenois, Eamonn McManus, Ray Tsang, Guillaume Laforge, Averi Kitsch, Lawrence Latif, and Angela Funk.

  • 10 Oct 2019
  • By editor
  • Categories: Application Development, GCP, Google Cloud Platform, Identity & Security
Best practices for password management, 2019 edition

It is hard to imagine life today without passwords. They come in many forms, from your email credentials to your debit card PIN, and they’re all secrets you use to help prove your identity. But traditional password best practices are no match for today’s sophisticated, and often automated, cybersecurity threats. With the all-too-frequent news of massive data breaches, leaked passwords, and phishing attacks, internet users must adapt to protect their valuable information.

While passwords are far from perfect, they aren’t going away in the foreseeable future. Google’s automatic protections prevent the vast majority of account takeover attacks—even when an attacker knows the username and password—but there are also measures that users and IT professionals can take to further enhance account security. In the spirit of October being National Cybersecurity Awareness Month, we’ve released two new whitepapers to help you navigate password security.

Modern password security for users provides pragmatic, human-centric advice to help end users improve their authentication security habits. We go in-depth with tips on improving the security of the passwords you use today, advice on how to answer security questions, and explanations of why certain practices should be avoided.

Modern password security for system designers is the first paper’s technical counterpart, outlining the latest advice on password interfaces and data handling. It provides technical guidance on how to handle UTF-8 characters, advice on sessions, and best practices for building a secure authentication system that can stand up to modern threats.

Our aim is to promote an open and secure internet where users are equipped to protect their personal information and online systems are designed to prevent credential loss, even if those systems are compromised. We hope these whitepapers—available in PDF form at the links above—help you in your quest to better protect your environment.

  • 7 Oct 2019
  • By editor
  • Categories: Application Development, DevOps & SRE, GCP, Google Cloud Platform
Push configuration with zero downtime using Cloud Pub/Sub and Spring Framework

As application configuration grows more complex, treating it with the same care we treat code—applying best practices for code review, and rolling it out gradually—makes for more stable, predictable application behavior. But deploying application configuration together with the application code takes away a lot of the flexibility that having separate configuration offers in the first place. Compared with application code, configuration data has different:

  • Granularity – per server or per region, rather than (unsurprisingly) per application.

  • Lifecycle – configuration may change more frequently than your code if you don’t deploy your application very often; less frequently, perhaps, if you embrace continuous deployment for code.

This leads us to an important best practice for software development teams: separating code from configuration when deploying applications. More recently, DevOps teams have started to practice “configuration as code”—storing configuration in version-tracked repositories. 

But if you update your configuration data separately, how will your code learn about it and use it? It’s possible, of course, to push new settings and restart all application instances to pick up the updates, but that could result in unnecessary downtime.

If you’re a Java developer and use the Spring Framework, there’s good news. Spring Cloud Config lets applications monitor a variety of sources (source control, database etc.) for configuration changes. It then notifies all subscriber applications that changes are available using Spring Cloud Bus and the messaging technology of your choice. 

If you’re running on Google Cloud, one great messaging option is Cloud Pub/Sub. In the remainder of this blog post, you’ll learn how to configure Spring Cloud Config and Spring Cloud Bus with Cloud Pub/Sub, so you can enjoy the benefits of configuration maintained as code and propagated to environments automatically.

Setting up the server and the client

Imagine you want to store your application configuration data in a GitHub repository. You’ll need to set up a dedicated configuration server (to monitor and fetch configuration data from its true source), as well as a configuration client embedded in the application that contains your business logic. In a real world scenario, you’d have many business applications or microservices, each of which has an embedded configuration client talking to the server and retrieving the latest configuration from it. You can find the full source code for all the examples in this post in this Spring Cloud GCP sample app.

[Diagram: a Spring Cloud Config server monitoring a GitHub repository and notifying client applications via Cloud Pub/Sub]

Configuration server setup

To take advantage of the power of distributed configuration, it’s common to set up a dedicated configuration server. You configure a GitHub webhook to notify it whenever there are changes, and the configuration server, in turn, notifies all the interested applications that run the business logic that new configuration is available to be picked up.

The configuration server has the following three dependencies (we recommend using the Spring Cloud GCP Bill Of Materials for setting up dependency versions):

pom.xml
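A sketch of the dependency section, based on the three dependencies described just below (versions are managed by the BOM):

<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-bus-pubsub</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-monitor</artifactId>
  </dependency>
</dependencies>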

The first dependency, spring-cloud-gcp-starter-bus-pubsub, ensures that Cloud Pub/Sub is the Spring Cloud Bus implementation that powers all the messaging functionality.

The other two dependencies make this application act as a Spring Cloud Config server capable of being notified of changes by the configuration source (GitHub) on the /monitor HTTP endpoint it sets up.

The config server application also needs to be told where to find the updated configuration; we use a standard Spring application properties file to point it to the GitHub repository containing the configuration:

application.properties
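(A plausible minimal configuration; the repository URL is a placeholder:)

# Port 8888 is the conventional config server port; 8080 is left to the client app.
server.port=8888
# Point the config server at the GitHub repository holding your configuration.
spring.cloud.config.server.git.uri=https://github.com/YOUR_ORG/YOUR_CONFIG_REPO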

You’ll need to customize the port if you are running the example locally. Like all Spring Boot applications, the configuration server normally runs on port 8080 by default, but that port is used by the business application we are about to configure, so an override is needed.

The last piece you need to run a configuration server is the Java code!

PubSubConfigGitHubServerApplication.java
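A sketch of the server’s main class, given the @EnableConfigServer annotation described below:

package com.example;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer  // turns this Boot app into a Spring Cloud Config server
public class PubSubConfigGitHubServerApplication {
  public static void main(String[] args) {
    SpringApplication.run(PubSubConfigGitHubServerApplication.class, args);
  }
}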

As is typical for Spring Boot applications, the boilerplate code is minimal—all the functionality is driven by a single annotation, @EnableConfigServer. This annotation, combined with the dependencies and configuration, gives you a fully functional configuration server capable of being notified when a new configuration arrives by way of the /monitor endpoint. Then, in turn, the configuration server notifies all the client applications through a Cloud Pub/Sub topic.

Speaking of the Cloud Pub/Sub topic, if you run just the server application, you’ll notice in the Google Cloud Console that a topic named springCloudBus was created for you automatically, along with a single anonymous subscription (a bit of trivia: every configuration server is capable of receiving the configuration it broadcasts, but configuration updates are suppressed on the server by default).

Configuration client setup

Now that you have a configuration server, you’re ready to create an application that subscribes to that server’s vast (well… not that vast) knowledge of configuration.

The client application dependencies are as follows:

pom.xml
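A sketch of the client’s dependencies, per the description below:

<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-bus-pubsub</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-client</artifactId>
  </dependency>
</dependencies>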

The client needs a dependency on spring-cloud-gcp-starter-bus-pubsub, just as the server did. This dependency enables the client application to subscribe to configuration change notifications arriving over Cloud Pub/Sub. The notifications do not contain the configuration changes; the client applications will pull those over HTTP.

Notice that the client application only has one Spring Cloud Config dependency: spring-cloud-config-client. This application doesn’t need to know how the server finds out about configuration changes, hence the simple dependency.

For this demo, we made a web application, but client applications can be any type of application that you need. They don’t even need to be Java applications, as long as they know how to subscribe to a Cloud Pub/Sub topic and retrieve content from an HTTP endpoint!

Nor do you need any special application configuration for a client application. By default, all configuration clients look for a configuration server on local port 8888 and subscribe to a topic named springCloudBus. To customize the configuration server location for a real-world deployment, simply configure the spring.cloud.config.uri property in the bootstrap.properties file, which is read before the regular application initialization. To customize the topic name, add the spring.cloud.bus.destination property to the regular application.properties file, making sure that the config server and all client applications have the same value.

And now, it’s time to add the client application’s code:

PubSubConfigApplication.java
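(A minimal Boot entry point; nothing config-specific is needed here:)

package com.example;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class PubSubConfigApplication {
  public static void main(String[] args) {
    SpringApplication.run(PubSubConfigApplication.class, args);
  }
}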

ExampleController.java
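A sketch matching the behavior described below; the property name example.message is illustrative:

package com.example;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope  // re-reads the property when a refresh event arrives
public class ExampleController {

  // Defaults to "none" when no config server is available.
  @Value("${example.message:none}")
  private String message;

  @GetMapping("/message")
  public String getMessage() {
    return message;
  }
}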

Again, the boilerplate here is minimal—PubSubConfigApplication starts up a Spring Boot application, and ExampleController sets up a single HTTP endpoint /message. If no configuration server is available, the endpoint serves the default message of “none”. If a configuration server is found on the default localhost:8888 URL, the configuration found there at client startup time will be served. The @RefreshScope annotation ensures that the message property gets a new value whenever a configuration refresh event is received.

The code is now complete! You can use the mvn spring-boot:run command to start up the config server and client in different terminals and try it out. 

To test that configuration changes propagate from GitHub to the client application, update configuration in your GitHub repository, and then manually invoke the /monitor endpoint of your config server (you would configure this to be done automatically through a GitHub webhook for a deployed config server):
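One way to invoke it locally is with a request that mimics a GitHub push webhook (a sketch; adjust the modified file to match your repository):

curl -X POST http://localhost:8888/monitor \
  -H "Content-Type: application/json" \
  -H "X-Github-Event: push" \
  -d '{"commits": [{"modified": ["application.properties"]}]}'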

After running the above command, the /message endpoint serves the most recent value retrieved from GitHub.

And that’s all that’s required for a basic Spring Cloud Config with Cloud Pub/Sub-enabled bus server/client combination. In the real world, you’ll most likely serve different configurations to different environments (dev, QA etc.). Because Spring Cloud Config supports hierarchical representation of configuration, it can grow to adapt to any environment setup.

For more information, visit the Spring Cloud GCP documentation and sample.

  • 27 Sep 2019
  • By editor
  • Categories: Application Development, GCP, serverless
6 strategies for scaling your serverless applications

A core promise of a serverless compute platform like Cloud Functions is that you don’t need to worry about infrastructure: write your code, deploy it and watch your service scale automatically. It’s a beautiful thing. 

That works great when your whole stack auto-scales. But what if your service depends on APIs or databases with rate or connection limits? A spike of traffic might cause your service to scale (yay!) and quickly overrun those limits (ouch!). In this post, we’ll show you features of Cloud Functions, Google Cloud’s event-driven serverless compute service, and products like Cloud Tasks that can help serverless services play nice with the rest of your stack.

Serverless scaling basics

[Diagram: Serverless scaling patterns]

Let’s review the basic way in which serverless functions scale as you take a function from your laptop to the cloud.

  1. At a basic level, a function takes input, and provides an output response. 
  2. That function can be repeated with many inputs, providing many outputs.  
  3. A serverless platform like Cloud Functions manages elastic, horizontal scaling of function instances. 
  4. Because Google Cloud can provide near-infinite scale, that can have consequences for other systems with which your serverless function interacts.

Most scale-related problems are the result of limits on infrastructure resources and time. Not all things scale the same way, and not all serverless workloads have the same expected behaviors in terms of how they get work done. For example, whether the result of a function is returned to the caller or only directed elsewhere can change how you handle increasing scale in your function. Different situations may call for one or more strategies to manage the challenges that scale can introduce.

Luckily, you have lots of different tools and techniques at your disposal to help ensure that your serverless applications scale effectively. Let’s take a look. 

1. Use Max Instances to manage connection limits
Because serverless compute products like Cloud Functions and Cloud Run are stateless, many functions use a database like Cloud SQL for stateful data. But this database might only be able to handle 100 concurrent connections. Under modest load (e.g., fewer than 100 queries per second), this works fine. But a sudden spike can result in hundreds of concurrent connections from your functions, leading to degraded performance or outages. 

One way to mitigate this is to configure instance scaling limits on your functions. Cloud Functions offers the max instances setting. This feature limits how many concurrent instances of your function are running and attempting to establish database connections. So if your database can only handle 100 concurrent connections, you might set max instances to a lower value, say 75. Since each instance of a function can only handle a single request at a time, this effectively means that you can only handle 75 concurrent requests at any given time.  
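For example, a deployment along these lines caps the function at 75 instances (the function name and runtime are placeholders):

gcloud functions deploy my-db-function \
  --runtime nodejs10 \
  --trigger-http \
  --max-instances 75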

2. Use Cloud Tasks to limit the rate of work done
Sometimes the limit you are worried about isn’t the number of concurrent connections, but the rate at which work is performed. For example, imagine you need to call an external API for which you have a limited per-minute quota. Cloud Tasks gives you options in managing the way in which work gets done. It allows you to perform the work outside of the serverless handler in one or more work queues. Cloud Tasks supports rate and concurrency limits, making sure that regardless of the rate work arrives, it is performed with rates applied. 
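As a sketch, a queue with both kinds of limits might be created like this (the queue name and limit values are illustrative):

gcloud tasks queues create my-api-queue \
  --max-dispatches-per-second=5 \
  --max-concurrent-dispatches=10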

3. Use stateful storage to defer results from long-running operations
Sometimes you want your function to be capable of deferring the requested work until after you provide  an initial response. But you still want to make the result of the work available to the caller eventually. For example, it may not make sense to try to encode a large video file inside a serverless instance. You could use Cloud Tasks if the caller of your workload only needs to know that the request was submitted. But if you want the caller to be able to retrieve some status or eventual result, you need an additional stateful system to track the job. In Google APIs this pattern is referred to as a long-running operation. There are several ways you can achieve this with serverless infrastructure on Google Cloud, such as using a combination of Cloud Functions, Cloud Pub/Sub, and Firestore.

4. Use Redis to rate limit usage
Sometimes you need to perform rate-limiting in the context of the HTTP request. This may be because you are performing per-user rate limits, or need to provide a back-pressure signal to the caller of your serverless workload. Because each serverless instance is stateless and has no knowledge of how many other instances may also be serving requests, you need a high-performance shared counter mechanism. Redis is a common choice for rate-limiting implementations. Read more about rate limiting and GCP, and see this tutorial for how to use serverless VPC access to reach a private Redis instance and perform rate limiting for serverless instances.

5. Use Cloud Pub/Sub to process work in batches
When dealing with a large number of messages, you may not want to process every message individually. A common pattern is to wait until a sufficient number of messages have accumulated before handling all of them in one batch. Cloud Functions integrates seamlessly with Cloud Pub/Sub as a trigger source, but serverless workloads can also use Cloud Pub/Sub as a place to accumulate batches of work, as the service will store messages for up to seven days.

Then, you can use Cloud Scheduler to handle these accumulated items on a regular schedule, triggering a function that processes all the accumulated messages in one batch run. 

You can also trigger the batch process more dynamically based on the number and age of accumulated messages. Check out this tutorial, which uses Cloud Pub/Sub, Stackdriver Alerting and Cloud Functions to process a batch of messages. 

6. Use Cloud Run for heavily I/O-bound work
One of the more expensive components of many infrastructure products is compute cycles, which is reflected in the pricing of managed services that bill for the CPU time you use. When your serverless workload is just waiting for a remote API call to return, or for a file to be read, you aren’t using the CPU but are still “occupying” it, so you will still be billed. Cloud Run, which lets you run fully managed serverless containers, allows your workload to specify how many concurrent requests a single instance can handle. This can lead to significant increases in efficiency for I/O-bound workloads.

For example, if the work being done spends most of its time waiting for replies from slow remote API calls, Cloud Run supports up to 80 requests concurrently on the same serverless instance which shares the use of the same CPU allocation. Learn more about tuning this capability for your service.
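For instance, a deployment along these lines explicitly sets the concurrency described above (the service and image names are placeholders):

gcloud run deploy my-service \
  --image gcr.io/my-project/my-service \
  --platform managed \
  --region us-central1 \
  --concurrency 80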

When to use which strategy

After reading the above, it may be clear which strategy will help your current project. But if you’re looking for a little more guidance, here’s a handy flowchart.

[Flowchart: choosing a serverless scaling strategy]

Of course, you might choose to combine strategies if you’re facing multiple challenges.

Just let it scale

Even if you don’t have any scaling problems with your serverless workload, you may still be uneasy, especially if this is your first time building software in a serverless environment—what if you’re about to hit some limit, for example? Rest easy: the default limits for Google Cloud serverless infrastructure are high enough to accommodate most workloads without any extra work. And if you do find yourself approaching those limits, we are happy to work with you to keep things running at any scale. When your serverless workload is doing something useful, more instances are a good thing!

Serverless compute solutions like Cloud Functions and Cloud Run are a great way to build highly scalable applications—even ones that depend on external services. To get started, visit cloud.google.com/serverless to learn more.

  • 20 Sep 2019
  • By editor
  • Categories: Application Development, DevOps & SRE, GCP, Inside Google Cloud
Cloud Build named a Leader for Continuous Integration in the Forrester Wave

Today, we are honored to share that Cloud Build, Google Cloud’s continuous integration (CI) and continuous delivery (CD) platform, was named a Leader in The Forrester Wave™: Cloud-Native Continuous Integration Tools, Q3 2019. The report identifies the 10 CI providers that matter most and how they stack up on 27 criteria. Cloud Build received the highest score in both the Current Offering and Strategy categories.

“Google Cloud Build comes out swinging, matching up well with other cloud giants. Google Cloud Build is relatively new when compared to the other public cloud CI offerings; this vendor had a lot to prove, and it did…. Customer references are happy to trade the cost of paying for the operation and management of build servers for moving their operations to Google’s pay-per-compute system. One reference explained that it’s in the middle of a cloud shift and is currently executing up to 650 builds per day. With that proved out, this customer plans to move an additional 250 repositories to the cloud” – Forrester Wave™ report

Top score in the Current Offering category: Among all 10 CI providers evaluated in the Wave, Cloud Build got the highest score in the Current Offering category. The Current Offering score is based on Cloud Build’s strength in the developer experience, build speed and scale, enterprise security and compliance, and enterprise support, amongst other criteria.

Top score in the Strategy category: Along with the top score in the Current Offering category, Cloud Build also received the highest score in the Strategy category. In particular, Cloud Build’s scores in the partner ecosystem, commercial model, enterprise strategy and vision, and product roadmap criteria contributed to the result.

CI plays an increasingly important role in DevOps, allowing enterprises to drive quality from the start of their development cycle. The report mentions “that cloud-native CI is the secret development sauce that enterprises need to be fast, responsive, and ready to take on incumbents and would-be digital disruptors.”

At Google, we’ve seen first-hand how a cloud-based serverless CI tool can help drive quality and security at scale, and the lessons we’ve learned in that process manifest directly in Cloud Build.

Today, organizations of all sizes use Cloud Build to drive productivity improvements via automated, repeatable CI processes. Customers include Zendesk, Shopify, Snap, Lyft, and Vendasta, who chose Cloud Build for its:

  • Fully serverless platform: Cloud Build scales up and down in response to load, with no need to pre-provision servers or pay in advance for additional capacity.

  • Flexibility: With custom build steps and pre-created extensions to third-party apps, enterprises can easily tie their legacy or home-grown tools into their build process (see the sketch after this list).

  • Security and compliance features: Developers can perform deep security scans within the CI/CD pipeline and ensure that only trusted container images are deployed to production.
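
To illustrate that flexibility, here is a minimal cloudbuild.yaml sketch that mixes a standard Docker build step with a hypothetical custom step running a home-grown tool packaged as a container image (the custom image name and its arguments are placeholders, not from the report or any customer setup):

    # cloudbuild.yaml: each step runs in its own container image.
    steps:
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
    - name: 'gcr.io/$PROJECT_ID/my-legacy-checks'  # hypothetical home-grown tool
      args: ['--run-all']
    images:
    - 'gcr.io/$PROJECT_ID/my-app'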

We’re thrilled by Forrester’s recognition for Cloud Build. You can download a copy of the report here.

  • 3 Sep 2019
  • By editor
  • Categories: Application Development, Chrome Enterprise, GCP, Google Cloud Platform
Build a dev workflow with Cloud Code on a Pixelbook

Can you use a Pixelbook for serious software development? Do you want a workflow that is simple, doesn’t slow you down, and is portable to other platforms? And do you need support for Google Cloud Platform SDK, Kubernetes and Docker? I switched to a Pixelbook for development, and I love it!


Pixelbooks are slim, light, ergonomic, and provide great performance. Chrome OS is simple to use. It brings many advantages over traditional operating systems: 

  • frictionless updates
  • enhanced security
  • extended battery life

And the most compelling feature for me: near-instant wake from sleep. This is great when hopping between meetings or working on the road.

A little about me: I’m a Developer Programs Engineer. I work on Google Cloud and contribute to many open source projects. I need to accomplish repeatable development tasks: working with GitHub, building, debugging, deploying, and observing. Running and testing code on multiple platforms is also important to me. I can assure you that the Pixelbook-based workflow below satisfies all of the following:

  • Simple, repeatable development workflow with emphasis on developer productivity
  • Portable to other platforms (Linux, macOS, Windows)—“create once, use everywhere”
  • Support for the Google Cloud Platform SDK, GitHub, Kubernetes and Docker.

Let’s dive into how you can set up a development environment on Pixelbook that meets all those requirements using Cloud Code for Visual Studio Code, remote extensions, and several other handy tools. If you are new to the world of Chromebooks and switching from a PC, check out this post to get started.

Step 1: Enable Linux apps on Pixelbook

Linux for Chromebooks (aka Crostini) is a project that lets developers do everything they need locally on a Chromebook, with an emphasis on web and Android app development.

On your Pixelbook:

1. Go to Settings (chrome://settings) in the built-in Chrome browser.
2. Scroll down to the “Linux (Beta)” section.

3. Click “Turn on” and follow the prompts. It may take up to 10 minutes depending on your Wi-Fi connection.
4. At the end, a new Terminal window should automatically open to a shell within the container. We’re all set to continue to the next step – installing developer tools!

Pin the Terminal to your shelf for convenience.

Configure Pixelbook keyboard to respect Function keys
Folks coming from Windows or macOS backgrounds are used to relying on Function keys for development productivity. On Chrome OS, the top row is mapped to a set of shortcut keys by default.

However, we can bring them back:

Navigate to chrome://settings, pick “Device” in the left menu, then pick “Keyboard” and toggle “Treat top-row keys as function keys”.


Step 2: Install development tools

For Kubernetes development on GCP, we need to install tools like Docker, the Google Cloud SDK, and kubectl. Pixelbook Linux is Debian Stretch, so we will install the prerequisites for Docker and gcloud using the instructions for the Debian Stretch distribution.

Install and configure Google Cloud SDK (gcloud):
Run the commands from the gcloud Debian quickstart to install the Cloud SDK.
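
At the time of writing, that quickstart boiled down to the following (a sketch; verify against the current quickstart, since package sources change over time):

    # Prerequisites for adding an external apt repository
    sudo apt-get update && sudo apt-get install -y curl gnupg

    # Add the Cloud SDK package source for Debian Stretch
    export CLOUD_SDK_REPO="cloud-sdk-stretch"
    echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | \
        sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

    # Import the Google Cloud public key, then install the SDK
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    sudo apt-get update && sudo apt-get install -y google-cloud-sdk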

Troubleshooting
You might run into this error:

Your keyrings are out of date. Run the following commands and try the Cloud SDK commands again:
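
One common remedy (an assumption on my part, since the exact fix depends on the message) is to refresh the Debian archive keyring and re-import Google’s apt key:

    sudo apt-get update && sudo apt-get install -y debian-archive-keyring
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -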

Add gcloud to PATH
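
If you installed via apt as above, gcloud should already be on your PATH. For a tarball install, something like this works (assuming the SDK unpacked to ~/google-cloud-sdk):

    echo 'source $HOME/google-cloud-sdk/path.bash.inc' >> ~/.bashrc
    source ~/.bashrc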

Installing Docker CE for Linux:
Follow these instructions.

And then add your user to the docker group:
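
On Debian that is typically:

    sudo usermod -aG docker $USER
    # Log out and back in (or restart the Linux container) for this to take effect.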

NOTE: This allows running docker commands without sudo.

Install kubectl
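
Since the Cloud SDK apt repository added earlier also hosts kubectl, one way to install it is:

    sudo apt-get install -y kubectl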

Installing Visual Studio Code

Go to VSCode linux install instructions page.

  1. Download the .deb package (64-bit) from the link on the page.

  2. After the download is complete, install the .deb file using “Install app with Linux (beta)”.


Troubleshooting
If you don’t see “Install with Linux” as an option for the .deb file, double-check that you switched to the beta channel.

Now let’s install a few extensions that I find helpful when working on a remote container using VS Code:

  • Docker – managing docker images, autocompletion for docker files, and more.

  • Remote Containers – use a docker container as a full-featured development environment. 

These two, along with Cloud Code, are key extensions in our solution.

Step 3: Configuring Github access

Configure GitHub with an SSH key.
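
GitHub’s standard SSH key flow looks like this (substitute your own email address for the placeholder):

    ssh-keygen -t rsa -b 4096 -C "you@example.com"    # accept the default file location
    eval "$(ssh-agent -s)"                            # start the ssh agent
    ssh-add ~/.ssh/id_rsa                             # load the new key
    cat ~/.ssh/id_rsa.pub                             # print the public key to copy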

Now copy and paste the key into GitHub (under Settings > SSH and GPG keys).

NOTE: If you hit a permissions error running ssh-add, run sudo chown $USER .ssh from your home directory and re-run the GitHub setup steps.

Set your GitHub username and email:
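
With placeholder values:

    git config --global user.name "Your Name"
    git config --global user.email "you@example.com"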

Step 4: Remote development

Now that we have the tools installed and GitHub access configured, let’s set up our development workflow. To create a solution that is portable to other platforms, we will use the Remote Containers extension. We will create a container that we’ll use to build, deploy and debug the applications we write. This is how it will work:

We open our codebase in a remote container. VS Code then treats it as an isolated Linux environment, so everything we do (build, deploy, debug, file operations) behaves as if we were working on a dedicated Linux VM with its own file system: every command we execute in VS Code is sent to the remote container for execution. This is how we achieve the goal of portability—the remote Linux container runs on macOS and Windows just as it does on a Pixelbook running Chrome OS with Linux support.

Dev Container settings for each repo

Here’s how to set up a dev container for an existing project. You can find the full source code in the Cloud Code templates repo. This GitHub repo includes templates for getting started with repeatable Kubernetes development in five programming languages—Node.js, Go, Java, Python and .NET. Each template includes configuration for debugging and deploying it to a Kubernetes cluster using Cloud Code for VS Code and IntelliJ. For simplicity, we’ll work with the HelloWorld template, which serves a “Hello World” message from a simple web server in a single container.

To enable remote container development, we need to add a .devcontainer folder with two files:

  • Dockerfile — defines the container image holding all the developer tools we need installed in the remote development container

  • devcontainer.json — instructs the VS Code Remote Containers extension how to run the remote development container

Creating a container image for remote development
Our remote container needs the SDK for the programming language we develop in. In addition, it needs the tools that enable Cloud Code and Kubernetes workflows on Google Cloud. Therefore, in the Dockerfile we install:

  • Google Cloud SDK

  • Skaffold — the tool Cloud Code uses to handle the workflow for building, pushing and deploying apps in containers

  • Docker CLI

In addition, container images are immutable. Every time we open the code in a remote container, we get a clean state—by default, no settings (Kubernetes clusters to work with, gcloud project configuration, GitHub SSH keys) persist between remote container reloads. To address that, we mount host folders as drives in the container (see the devcontainer.json section below) and copy their contents to the locations in the container file system where the dev tools expect to find them.

Example from Dockerfile of kubeconfig, gcloud and ssh keys sync between host and remote container:
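
A sketch of one way to wire that up; the /mnt/host-* mount points (defined in devcontainer.json, shown in the next section) and the script name are illustrative assumptions, not the exact contents of the templates repo:

    # Dockerfile: run a small sync script when the container starts.
    COPY sync-config.sh /usr/local/bin/sync-config.sh
    RUN chmod +x /usr/local/bin/sync-config.sh
    ENTRYPOINT ["/usr/local/bin/sync-config.sh"]

    #!/bin/bash
    # sync-config.sh: copy mounted host config to where the tools expect it.
    mkdir -p ~/.kube ~/.config/gcloud ~/.ssh
    cp -r /mnt/host-kube/.   ~/.kube/
    cp -r /mnt/host-gcloud/. ~/.config/gcloud/
    cp -r /mnt/host-ssh/.    ~/.ssh/
    chmod 600 ~/.ssh/id_rsa   # ssh refuses keys with loose permissions
    exec sleep infinity       # keep the container alive for VS Code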

devcontainer.json
This file tells the Remote Containers extension which ports to expose in the container, how to mount drives, which extensions to install in the remote container, and more.

A few notable configurations:

runArgs contains the command-line arguments the remote extension passes to docker when the remote container is launched. This is where we set environment variables and mount host folders into the container; that avoids repeated authorization prompts and specifies the Kubernetes clusters we want to work with in Cloud Code.
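
A minimal devcontainer.json sketch tying this together (the mount paths, project name and extension list are illustrative; the file format tolerates // comments):

    {
      "name": "nodejs-dev",
      "dockerFile": "Dockerfile",
      "runArgs": [
        // Mount host config read-only; replace /home/you with your home directory.
        "-v", "/home/you/.kube:/mnt/host-kube:ro",
        "-v", "/home/you/.config/gcloud:/mnt/host-gcloud:ro",
        "-v", "/home/you/.ssh:/mnt/host-ssh:ro"
      ],
      "extensions": [
        "googlecloudtools.cloudcode",
        "ms-azuretools.vscode-docker"
      ]
    }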

In the extensions section, we add a few VS Code extensions for enhanced productivity in the development container. These are installed in the dev container but not on the host, so you can tailor the choice to the codebase you plan to work on. In this case I am setting up for Node.js development.

  • Cloud Code for VS Code — Google’s extension that helps you write, deploy and debug cloud-native applications quickly and easily. It deploys code to Kubernetes and supports five programming languages.

  • Npm support for VS Code

  • Code Spell Checker

  • Markdownlint — Improves the quality of markdown files. 

  • Gitlens — Shows the history of code commits along with other relevant useful information.

  • Output colorizer — Colors the output of various commands. Helpful when observing application logs and other info in the IDE.

  • Vscode-icons — Assigns icons to known file types for better visibility and discoverability of files.

  • Docker — Manages Docker images, provides autocompletion for Dockerfiles, and more

  • TSLint — Linting for typescript (optional)

  • Bracket pair colorizer (optional)

  • Npm intellisense (optional)

  • ESLint Javascript (optional)

Hello World in Dev Container on Pixelbook

Let’s build, debug and deploy the sample Hello World Node.js app on a Pixelbook using the remote dev container setup we just created:

  • Initialize gcloud by running gcloud init in a terminal on your Pixelbook and following the steps. Thanks to our earlier setup, gcloud settings are synced into the dev container whenever we open the code there, so you won’t need to re-initialize every time.

  • Connect to the GKE cluster we will deploy our app to, using the command below. This can also be done outside of the dev container; the credentials are synced in via our earlier .devcontainer setup.
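
    gcloud container clusters get-credentials my-cluster --zone us-central1-a
    # Placeholder cluster name and zone: substitute your own.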

  • Open the code in the dev container: in the VS Code Command Palette, type Remote-Containers: Open Folder in Container… and select your code location. The code opens in the dev container, pre-configured with the full toolset and ready to go!

  • Build and deploy the code to GKE using Cloud Code: in the VS Code Command Palette, type Cloud Code: Deploy and follow the instructions. Cloud Code builds the code, packages it into a container image, pushes it to the container registry, then deploys it to the GKE cluster we connected to earlier—all from the dev container on a Pixelbook!

Though slick and small, the Pixelbook might just fit your developer needs. With VS Code, the Remote Development extension, Docker, Kubernetes and Cloud Code, you can lift your development setup to the next level, where there’s no need to worry about machine-specific or platform-specific differences affecting your productivity. By sharing the dev container setup on GitHub, developers who clone your code will be able to reopen it in a container (assuming they have the Remote – Containers extension installed).


Once done, developers will get an isolated environment with all dependencies baked in — just start coding!

If you have a Pixelbook — or if you don’t, and just want to try out Cloud Code — the Hello World app and all config files are available on GitHub. Let me know how it went and what your favorite setup for developer productivity is.

Further reading

  • Set up Linux (Beta) on your Chromebook

  • Chromebook Developer Toolbox

  • Getting Started with Cloud Code for VS Code

  • Cloud Code Templates Repo

  • Developing inside a Container

  • 27 Aug 2019
  • By editor
  • Categories: Application Development, GCP
Ruby support comes to App Engine standard environment

We have some exciting news for App Engine customers. Ruby is now Beta on App Engine standard environment, in addition to being available on the App Engine flexible environment. Let’s dive into what that means if you’re a technical practitioner running your apps on Google Cloud. 

There are lots of technical reasons to choose App Engine standard vs. flexible environment (this link explains it if you are curious), but at a high level, App Engine standard environment brings a number of benefits to developers. For many users the most noticeable change is a decrease in deployment time from 4-7 minutes on App Engine flexible environment down to 1-3 minutes on App Engine standard. App Engine standard environment also supports scale-to-zero so you don’t have to pay for your website when no one is using it. Finally, start-up time for new instances is measured in seconds rather than minutes—App Engine standard environment is simply more responsive to changes in load. 

Scale-to-zero has its advantages in terms of cost, but it also means that you’ll want a truly serverless background processing architecture. For that, Cloud Pub/Sub and Cloud Tasks are great solutions for handling background tasks, and they also operate on a pay-per-use model. 

We expect most Ruby developers to choose App Engine standard environment over App Engine flexible environment. The faster deployment time and scale-to-zero features are a huge benefit to most development processes. And deploying an existing Rails app to App Engine standard environment is pretty straightforward. But as they say, your mileage may vary. Look at the pros and cons in our documentation to choose the right App Engine for your Ruby applications.
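
As a sketch of what that looks like, a minimal app.yaml for a Rails app on the Beta Ruby runtime might be (the entrypoint is an assumption; point it at whatever starts your app’s server):

    # app.yaml (deploy with: gcloud app deploy)
    runtime: ruby25
    entrypoint: bundle exec rails server -p $PORT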

