KenkoGeek

Category Archives: cloud-computing
13 Most Common Google Cloud Reference Architectures

Posted by Priyanka Vergadia, Developer Advocate

Google Cloud is a cloud computing platform for building and deploying applications. It gives you the flexibility to develop quickly while scaling the underlying infrastructure as needed.

I’m often asked by developers for a list of Google Cloud architectures that help them get started on their cloud journey. Last month, I started a mini-series on Twitter called “#13DaysOfGCP”, where I shared the most common use cases on Google Cloud. This post compiles all 13 architectures in one place. Topics covered include hybrid cloud, mobile app backends, microservices, serverless, CI/CD and more. If you weren’t able to follow along, or missed a few days, here’s the summary!

Series kickoff #13DaysOfGCP

#1: How to set up hybrid architecture in Google Cloud and on-premises

Day 1

#2: How to mask sensitive data in chatbots using the Data Loss Prevention (DLP) API?

Day 2

#3: How to build mobile app backends on Google Cloud?

Day 3

#4: How to migrate Oracle Database to Spanner?

Day 4

#5: How to set up hybrid architecture for cloud bursting?

Day 5

#6: How to build a data lake in Google Cloud?

Day 6

#7: How to host websites on Google Cloud?

Day 7

#8: How to set up Continuous Integration and Continuous Delivery (CICD) pipeline on Google Cloud?

Day 8

#9: How to build serverless microservices in Google Cloud?

Day 9

#10: Machine Learning on Google Cloud

Day 10

#11: Serverless image, video or text processing in Google Cloud

Day 11

#12: Internet of Things (IoT) on Google Cloud

Day 12

#13: How to set up BeyondCorp zero trust security model?

Day 13

Wrap up with a puzzle

Wrap up!

We hope you enjoy this list of the most common reference architectures. Please let us know your thoughts in the comments below!

  • 23 Jun, 2020
  • (0) Comments
  • By editor
  • AI, Artificial Intelligence, cloud-computing, Data Science, GCP, Google, machine learning, microservices, ml, serverless

AWS re:Invent 2019 Swag Review

The complete guide to swag from the biggest cloud conference in the world — it was the year of the reusable straw

Well, it’s December 2019 and you know what that means — it’s AWS re:Invent time again! While the announcements of new services are great, let’s get to the real fun — a swag review from the biggest cloud conference in the world.

This year I tried as hard as possible to give a COMPLETE review of re:Invent swag. I visited almost every single booth, save for a few that either had no swag or were only offering a sticker or some chocolate.

I also didn’t collect things I got a LOT of in previous years — so less socks, no pins and no t-shirts. I did however take photos of as many of them as possible, as there were still some amazing designs out there!

So, without further ado, here we go!

Amazon

We begin each year with the hoodie and the bottle. This year AWS have gone blue and it looks fantastic! It comes with a reusable bottle, which is solid and was available in a bunch of colors. AWS also partnered with Cupanion, who donate water every time your bottle’s bar code is scanned.

AWS Certification Lounge
Next we head over to the certification lounge to get our certified swag! This year, a pullover, socks, pin and sticker. The pullover is very nice, with a thin sports-like fabric. Thanks cert team!

4k/8k Charity Run
For the 4th year running I will be taking part in the charity run; this year it promises to be a lovely 0°C/32°F.. brrrrrr ❄️ To make up for that, the t-shirt is really nice!

Throughout the week there was also a bunch of swag available depending on the different AWS booth departments you visited. Last year I scored a “Serverless for breakfast” teaspoon. This year the Serverless booth gave out a “Serverless for lunch” fork, and I look forward to my “Serverless for butter” knife in 2020 and “Serverless for dessert” spoon in 2021!

re:Play

The biggest tech party of the year always has some cool shirts and this year did NOT disappoint! They were really awesome neon t-shirts. One was a 3d text rotation thing and the other was space invaders.

A Cloud Guru
And then we move on to our own swag from A Cloud Guru — this year’s t-shirt. We didn’t choose the life, it chose us!

Ultra exclusive ACG swag: work for us and I’ll get you one of these light up shirts for re:Play. It’s extremely rare, but a good enough incentive to come work for us, don’t you think? 😉

Swag of the year

I have two winners this year for my favorite pieces of swag.

Anitian

The first goes to Anitian for their packets of Stumptown Coffee. I’d never seen bags of coffee being given away before, a truly unique and well received offering from the tech crowd!

Lucidchart

There’s really no explanation required here, Lucidchart were giving out hammocks amongst all their other cool swag. HAMMOCKS!

Me, ridiculously attempting to use a hammock between trees that obviously don’t support my weight

Honorable mention — Solace

I *loved* Solace’s keychain Sriracha sauce! Such a cool idea, and the first time I’ve seen it at a conference (the pop top was cool too).

Fresh Swag

Last year pins were all the rage, in 2017 the socks were the new thing (and are still quite popular in 2019), but this year the new and environmentally conscious swag was re-usable metal straws.

I think about 8 different companies gave them out this year, and they are a fantastic idea. All came with a pipe cleaner too, which is useful for keeping them clean.

Some were also collapsible as well, which is super convenient! Straws came courtesy of Rackspace, LG, Barracuda, GitLab, DXC, Acquia, Tech Mahindra and the AWS Partner Network (and probably a few more I missed).

The Booth Run

This is where I attempted to visit every booth to see what they were giving away. There’s no bad here, everyone put in a lot of effort and were really happy to show me what they had.

Thank you to ALL the booth staff and marketing staff and everyone involved in letting me take these photos and welcoming me with open arms, even our own competitors who were wondering why on earth I was at their booth. I just wanted to show the world the swag!

So, let’s get started in no particular order except the one I used to attempt to navigate the expo floor.

BMC gave out an Oculus Quest and a claw machine with lots of goodies

FireEye with the playing cards

Forcepoint had Rubik’s cubes and lip balm

SuSe with the amazing plush geckos, socks, stickers and webcam covers!

Postman had shirts, fidget spinners and an awesome collection of stickers

CloudLink with the sweet sunglasses, tickets, pins and pens

Velocity had pens, straws, cups, travel bags and so many things:

Percona had candy, sunglasses, a really nice cup and.. I think they’re more stickers or tape? (please let me know if you know what that orange roll is :D)

Hitachi had some really nice clear bottles and koozies (I think)

Goldman Sachs Engineering had some sweet bluetooth speakers, pens, mints and travel mug.

Citrix had pens, mints, a nice tote bag and car charger

ThreatModeler had hats and phone card holders

Infinidat had a really nice shirt and pin

Split Software were giving out Nintendo Switch Lites, which I seriously wanted and didn’t win 😢 (the wall of them was very cool though).

At Sysdig, you didn’t pick your swag, Zoltar did it for you.

And they had awesome bottles, stickers, international power adapters and pop sockets.

Datica Health had some sleek notebooks, pens and webcam covers

Giant Swarm had some SWEET t-shirts and even a baby onesie!

RoundTower had a koozie, a shirt, pin and socks!

Timescale had sunglasses, lip balm, a tote bag and coasters

DXC had a shirt, straw, socks, stickers, pen and notebook, as well as a cable holder/taco thing.

Fastly had a really nice wooden wireless phone charger, stickers and a shirt.

neqto by JIG-SAW had clips, stickers, phone holders, pens and silly putty (I think?)

Signal Sciences with the live WAF love shirt, and the booth staff were excited to model it, so thank you for that!

VictorOps has been a favourite of mine since their 2016 custom t-shirt printing station, this year they were giving out the Millennium falcon, pins and their famous cat shirt!

Coalfire had a fire tv stick and amazon alexa you could win

VividCortex always deliver with their hats! Unicorns, wolves, bears.. and I’m sure I had a seal or snow leopard in my 2016 review.

LaunchDarkly had an awesome looking bandanna and stickers

Quest had light up bouncy balls, cups, stickers, pens and stick-a-ribbons!

Rubrik never disappoint with their socks and stickers.

Cloudability not only had their shirts, they also gave away Nintendo Switches and Oculus Quests!

D2IQ had an AMAZING custom t-shirt stand. I always have full respect for the people running these things, it’s extremely hard work to pump out these shirts all day long and they did such a great job.

DataDog are a staple of re:Invent. Their shirts are a hot item, and even rarer are their socks; these were from their business booth.

Pluralsight had a game at their booth to see what you won, they had wall adapter power banks, 3-in-1 chargers, some funky socks and even an Oculus Go.

Rapid7 had a nice t-shirt and stickers

Lightstep had a drone, pens, lanyard and awesome shirt and stickers!

Delphix had a SWEET Star Wars theme, with lightsabers and the cutest luggage tags I’ve ever seen.

Cohesity with the greenest socks you’ll ever see! and a squishy truck! my son loves these things!

intermix.io had a pretty sweet shirt and sticker

and SenecaGlobal were giving out some mugs, pens, stickers and various echos

Fugue always have some lovely shirts, stickers and webcam covers and this year was no different!

opsani’s shirt and stickers were really colorful as well!

Sun Technologies had a bag for everyone walking past, which from what I saw was roughly 50,000 of them.

CenturyLink had a skill tester with lots of goodies inside

and the AWS JAM Lounge had some powerbanks, shirts, coins and stickers (as well as a memory game I was unable to get a photo of)

CapitalOne had one of the best designed shirts for the event in my opinion, and ran out fast. Also, some awesome decks of cards. Whoever was your designer this year did an outstanding job!

This guy I ran into in the food hall, only guy in the Venetian with more swag than me. Look at that suit. If anyone knows this gentleman’s name please let me know as I’d love to link him here 😉

Splunk always have their catch phrase shirts, pick what you want from the list! also, socks!

TrendMicro had some decks of cards and a chance to win Bluetooth Sunglasses!

Xerox had clips and a dancing robot

PrinterLogic had a fantastic shirt

8×8 had the CUTEST little cheetahs

LG had a push the button game with lots of prizes, including metal straws, echo shows and dots, switches and fitbits.

AVI Networks had a koozie, usb charger cables and a sweet ninja shirt.

Evolven had a great collection of coasters, I really should have taken one of each but my luggage space was basically non-existent at this point. Also pictured: me!

tugboat logic with the cutest stickers and tugboat bath toy

and extrahop with their light up bouncy balls and play-doh

ivanti had sunglasses, a yo-yo, dancing robot and koozie.

Blameless had drones, Star Wars toys, copies of The Unicorn Project, Nintendo Switch Lites, and stickers to give away.

The awesome guys I chatted to at Presidio couldn’t stop talking about their luggage tags and the chance to win a 3 piece luggage set (actually awesome, I own the smaller one).

ManageEngine and Device42 with the sweet socks!

komprise with ACTUAL DONUTS and a sticker and pen. But DONUTS. They looked so good… mmm…

Hammerspace had.. a hammer. and a hammer pen. and a t-shirt with a hammer on it. and a USB key with a hammer on it. They’re experts at hammering things, like picking awesome swag.

Igneous had the cable taco things too, and the Imperial Star Destroyer lego to be won

readme had stickers and usb-c converters and gumballs

Qumulo had a Millennium Falcon, webcam covers and an angry carrot dude

Flowmill with BINOCULARS! what an awesome piece of swag! and stickers, too.

Matillion, who a few years ago won my “most useful swag” prize for a single stick of chapstick, have stepped it up: not only could you build your own Lego person, they also donated to Girls Who Code for every badge scan. Simply awesome, guys and girls.

I made our founder Ryan Kroonenburg, can you see the resemblance?

Deloitte had a nice bottle

GitHub let you build your own OctoCat!

This PagerDuty mascot Pagey, made of squishy stuff so you can throw it at the wall when your phone keeps buzzing with alerts. We’ve all been there guys, still an awesome piece of swag. Stickers too!

Cloudtamer had pins, stickers, pens and a bottle opener keychain compass carabiner.

NS1 know that it’s always DNS (I completely agree). They also had some mugs and Switches.

Hypergrid had straws which for some reason didn’t make it into my other original post about straws (also pens).

SoftNAS were giving away light up cubes and had a chance to win some cool drones.

Harness had a slot machine with a few prizes, namely their duck mascot!

Threat Stack had the coolest light up AXE and pins, stickers and shirt.

sas had a fidget pen, usb extension cable, stickers, t-shirt and mouse! they also had some giveaways if your key opened the box.

redis had a very nice shirt and stickers and a daily scooter giveaway.

TEKsystems also had straws, stickers and a pin. They didn’t make my original straw post either because a friend wanted some straws, so they got these ones! #sharetheswag

Cloudinary with the CUTEST unicorns and stickers

(x) matters with the fanny pack / bum bag (depending where in the world you’re from) which they advertised as the biggest bag you can bring in to re:Play, which was great because I actually brought it to re:Play to carry my stuff. Thanks guys and girls! oh also, a freakin Electric GoKart up for grabs.

Sentry had BATH BOMBS. This was a really beautiful piece of swag, in both uniqueness and presentation. Really nice work whoever came up with this one, I know quite a few of these went back to our offices to give out to the people who couldn’t attend re:Invent and they were very well received!

Symbee were the booth next to ours last year.. and this year they happened to be next to us again. I’m not sure what the odds of that happening were, but it’s pretty amazing. They’re a great bunch of guys and always have this really nice mug to give out!

GitLab.. how can I put this? They had a whole DINER going on. The coasters were records, they had pins and straws, cold drinks.. and at one point I even got an actual vinyl record from them. I’m going to have to go to my father’s house to listen to what’s actually on it (hard to see in the pic, but it is grooved, not just blank).

Sungard had a nice bottle

Unisys had some flashing shoelaces!

and New Relic had a beanie, many colours of their “Deploy glambda” nail polish, and stickers! They also had an awesome Switch/Zelda pack and Rubik’s cubes.

App Associates had.. so much stuff! pens, hats, bags, stickers, tattoos(!?)

Spotist with the sock chute

Qubole with shot glasses and hand sanitizer and.. I’m not sure what those square things are!

Scylla Cloud had these cute octopus dudes, shirts and egg drones!

ServiceNow had the socks and pins (and a pen I didn’t seem to get a photo of)

JFrog had such a cool shirt and frog squishy

Qualys had a huge tote, pins, coin and cards

Nutanix had a great portable anti slip mouse mat, charger cable, luggage tag, sticker and 5000mAh power bank. I really love the design of these!

Our pals at LinuxAcademy had their Pinehead squishies and stickers!

and DOMO had some cool stickers

I mentioned them earlier, but Rackspace also had some stickers in addition to their golden straw!

Liberty Mutual had a nice bluetooth headphone set and sticker!

and memSQL had some really pretty socks and lip balm

Software AG had a huge offering of shirts, socks, stickers and lip balm

Turbot had a skill testing machine where you could win.. actually I’m not sure. Please tag me if you know what these were!

and mongoDB had about a billion of these socks they were giving out all week, they look awesome!

Valtix with the sweet sport socks and t-shirt.

and Snowflake with the I ❤ data shirt and cute polar bear!

Acqueon had these pens with spinny heads and mini footballs.

VMWare had a huge slot machine with a few prizes, t-shirts, bottles, travel organizers, wireless chargers and lens covers.

logz.io had a huuuuge offering of mugs, bottle openers, notepads with a pen, tshirts, foldup backpacks and koozies.

and moogsoft had their squishy cow, nice stickers and pen

Cognizant had a lovely bottle and tote bag

druva had a great shirt, socks and giveaways

RedHat let you CUSTOMIZE YOUR RED HAT. They had a bunch of patches available and you got to pick two of them to be pressed onto your hat. Seriously awesome!

Densify had the BEST light saber at the show, not only does it light up in 3 different colours it makes lightsaber noises as you swing it around. They also had stress balls, a blinking wrist band which could earn you more swag if you were found wearing one, a dancing robot and lip balm.

Jet Brains had an awesome collection of stickers

Logicworks had stickers and the torpedo throw toy

tibco had a hat, usb hub, charger cable, pen, pin, hat, bose headphones and signed mini helmet prizes!! they looked so awesome!

zendesk had the BEST mugs I’ve ever seen at a conference, with cork bottoms to save your table from coffee rings or even heat damage, as well as the wooden wireless chargers.

telos had some pens and charger cables

Hewlett Packard Enterprise had a phone holder, webcam cover, wireless charger (i think?) and an instant photo printer!

arm were very security conscious, providing a webcam cover, mic blocker and USB data blocker.

EDB had bottles, socks, phone card holders, webcam covers and pens!

fortinet were in the sock game as well as a pen.

shi had two awesome sock designs, some stickers and m&m’s

Clubhouse had a FANTASTIC children’s-sized t-shirt which my son is now proudly wearing, as well as some awesome stickers, a pin and hand sanitizer.

Atlassian had this years version of their awesome socks. I think the first swag socks I ever got were from Atlassian, and I wear them to this day.

McAfee had an awesome tote bag, shirt, bouncy ball and pen

Capsul8 had an awesome trucker cap and Tux the penguin!

The king of socks, Sophos, had their collection on display for me. These aren’t all they give out, they usually have about 10–15 different pairs for any given re:Invent!

dremio had their cute narwhal shirt and plushy

SentinelOne would give you a Rubik’s cube (and sticker) if you could solve it. My colleague Brock Tubre accomplished that in under a minute! (I need a lot more practice.)

wish.com had tote bags and a nice t-shirt

and chef had stickers, a pen and a Bagito, which is a reuseable shopping bag.

Sisense spiced things up with their hot sauce and stickers

HANCOM group had some really cute keychain plushies, sticky note pads and some awesomely shiny stickers on offer

Kong had amazing socks, pins, pens and stickers

circleci had stickers and shirts

SailPoint had a pin and an extendable water bottle, which is very cool! I’d never seen that before.

taos had some pens, webcam covers and quicksnap koozies which were really cool.

zadara had a lot of things but by the time I got there they had just pens and organiser bags left over (sorry zadara! lots of booths to get through!)

Logic Monitor had their funky socks

Aporeto had the CUTEST octopus plush toy and shirt which this gentleman was only too happy to show off.

Cloudera had all important bags to carry all the other swag.

SearchBlox had a coaster/bottle opener combo with some stickers

Cloudzero had pins, stickers and a koozie

Informatica had some of the nicest socks at the show, Bombas. Each pair given out also donated a pair to someone in need. They also had some pins.

CloudAcademy had some hats, shirts, cable taco, webcam cover and stickers

CloudFlare had some sweet socks too (anyone counting the total amount of socks yet?)

Transposit had octopus stickers and COSMIK ICE CREAM. I’d never seen this before, seriously cool!

and Rockset had a cool t-shirt

Sumologic with their sumo dudes, always a favourite

StackRox certainly rocked the place with their purple light up bouncy balls, stickers and pens!

Nylas had some fantastic stickers, t-shirt and socks!

Cloudbees had a nice shirt and carry case

and Qualcomm had socks, cord organizers, straws and a phone holder.

Teradata had the only piece of swag I wanted but was unable to get, this awesome tiny bluetooth speaker (it was so cute!), as well as a cable organizer.

Now a word from Check Point, the only booth trying to do away with swag and instead was allowing you to choose two charities they donate to on your behalf instead, Girls who Code and Boys and Girls Clubs of America. Despite this being a swag promoting blog, I think it was a fantastic idea and fully support their mission!

Clumio had a whole range of things on their spin wheel, socks, phone battery packs and charging cables and webcam covers.

Instana had these cool squishy dudes

and Collibra had pens, mints and koozies

Fivetran had socks, shirt, pencil case and phone charging cables

boomi had really nice contigo bottles, stickers and pins

and talend had the socks, pen and webcam cover, and were also giving away a PS4 with PS VR. Sweet!

SignalFx had some bright pink socks, lip balm and some stickers and a pin

and dynatrace had some bright shirts too

tableau had loads of pins, stickers, a REALLY nice backpack and fortune cookies!

Stackery then had a tshirt, pins, stickers and glasses cleaning cloths

GitOps had a difficult Rubik’s cube (the center squares are actually directional, making it harder than normal), and some stickers

coinbase had some pins and stickers and free bitcoin (one of those may or may not be true).

Thundra had the sweet sunglasses, shirt and stickers

Wandisco had a shirt, a really nice beanie, stress ball, webcam cover and lip balm

O’Reilly had a huge collection of stickers, sunglasses and pens

Refinitiv had some stickers and a cool cube man! If you could fold him back up into a cube you got to keep him!

Prisma (by paloalto networks) had a shirt, webcam cover, pin and socks.

Databricks had heaps of stuff but by the time I got there it was a phone charger, pen and really nice notepad.

Attunity had a nice mug, pen and hand sanitizer

Gremlin had a gremlin!

CTO.ai had a bottle opener, square stress cube, tote bag and tshirt

sigma had a whole bunch of cool things, a hangover kit, t-shirts, stickers and bottle openers

Cloudwiry had hydration packs and a luggage scale, which is really useful for determining if you’ve picked up too much swag before heading home. They also had an amenity kit, pencils and drawing pads, Tide markers, whatever those black and white things are.. they had a lot of useful things that I didn’t have time to ask what they all were!

ChaosSearch had some pens, stickers and some relation to Corey Quinn!

Acquia had the straws, pens and stickers

and imperva had heat packs, sunglasses, cards, vegas survival kit, lint roller, pad and pen. I sadly missed the bacon station they had in the mornings!

synopsys had usb fans and pins

and Slalom had some women who build stickers and shirts

radware also had a wooden cloud dude!

and Wavefront by vmware had these cute little Volkswagon toys that my son absolutely ADORES.

nginx had some stickers and pins

veeam also had some international power adapters, really useful for those of us visiting other countries!

and the AWS Partner Network had hand sanitizer, notepads, straws and another cable taco!

and FINALLY, some AWS serverless heroes were wandering around with #wherespeter shirts. Did you get one? and did you find Peter? I did!

now.. believe it or not… I think that’s it. Every booth I could get to, I did. You’ve seen it all. Well, you think you’ve seen it all. This swag donation station was also new this year, and I’m really impressed it was implemented AND used:

Super awesome on AWS’s part. Not everyone can take everything home, so being able to donate it instead of throwing things out is a great initiative from AWS.

OK, that’s about it for this year. Please let me know what I missed (I really tried hard to get everything, so if I did miss something I’ll be happy to add it!). I know there will be something awesome (like the DeepComposer) I didn’t have time to line up for. What did you get? What were your favorites? Let me know in the comments below!

Thanks Cloud Gurus! and see you all next year.


AWS re:Invent 2019 Swag Review was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • 10 Dec, 2019
  • (0) Comments
  • By editor
  • amazon, AWS, cloud-computing, HowTo, reinvent, swag

A Cloud Guru at AWS re:Invent 2019

Where to find the gurus in Las Vegas!

Here’s where you can find A Cloud Guru at AWS re:Invent 2019!

We’re looking forward to meeting you, hearing your feedback, handing out some awesome swag, and sharing our latest content and features.

Monday | Tuesday | Wednesday | Thursday | Info Sessions | Social

Monday, Dec 2

10:00 AM — 8:00 PM: Hackathon with Ryan Kroonenburg
To get re:Invent started, join the hackathon, with Ryan judging and winning teams getting a full-year membership to A Cloud Guru!

The Non-Profit Hackathon for Good provides a hands-on and team-oriented experience while supporting non-profit organizations. It is open to all skill levels. Be sure to attend the mixer on Sunday from 6–9pm at the Level Up inside the MGM Grand to build your team! More info here.

Non-Profit Hackathon for Good
10:00 AM — 8:00 PM
Venue: MGM Grand

Join the Non-Profit Hackathon for Good!

10:00 AM — Machine Learning with Kesha Williams
In this session, learn how to level-up your skills and career through the journey of Kesha Williams, an AWS Machine Learning Hero.

CMY201 — Java developer to machine-learning practitioner
10:00 AM — 11:00 AM
Venetian, Level 4, Delfino 4005

1:45 PM — Getting Started with Machine Learning
In this chalk talk with Kesha Williams, learn how to get started building, training, and deploying your first machine learning model.

AIM226 — How to successfully become a machine learning developer
1:45 PM — 2:45 PM
Venetian, Level 3, Murano 3201A

Tuesday, Dec 3

All Day — A Cloud Guru at Booth 727!
When the exhibition hall opens on Tuesday, head over to booth #727 to say hello to Ryan and the crew from A Cloud Guru — see you there!

Wednesday, Dec 4

All Day — A Cloud Guru Booth 727!
After the keynote, A Cloud Guru will be heading back to Expo Hall in the Venetian. Stop by and say hello!

6:00 PM — AWS Certification Reception
Are you AWS Certified? Register for the AWS Certification Reception and celebrate alongside our A Cloud Guru instructors! Space is limited, so be sure to register early for this event. Hope to see you there!

AWS Certification Reception
6:00 PM — 8:00 PM
Brooklyn Bowl | The LINQ

“Hello Cloud Gurus!” — Ryan and Sam Kroonenburg, co-Founders of A Cloud Guru

Thursday, Dec 5

10:30 AM — AWS DeepRacer with Scott Pletcher
Scott Pletcher will share how to host your own AWS DeepRacer event with everything from building a track, logistics, getting support from AWS, planning, leaderboards and more.

How to Roll Your Own DeepRacer Event
10:30 AM — 11:00 AM
Venetian, Level 2, Hall C, Expo, Developer Lounge

Check out the Fast and Curious — our FREE DeepRacer Series!

1:00 PM — AWS Security with Faye Ellis
AWS has launched a security certification for specialists to demonstrate their skills, which are in high demand. Learn about the major areas of security and AWS services you’ll need to know to become a security specialist and obtain the certification.

DVC07 — Preparing for the AWS Certified Security Specialty exam
1:00 PM — 1:30 PM
Venetian, Level 2, Hall C, Expo, Developer Lounge

All Week — Info Sessions

A Cloud Guru will be available every day for info sessions to share our latest content and features for business memberships. Be sure to schedule an appointment today — sessions are limited!

A Cloud Guru on Social Media
Follow us on Twitter, Facebook, and LinkedIn for updates! Be sure to subscribe to A Cloud Guru’s AWS This Week — and stay tuned for Ryan’s video summary of all the major re:Invent announcements!

Keep being awesome cloud gurus!


A Cloud Guru at AWS re:Invent 2019 was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • 11 Nov, 2019
  • (0) Comments
  • By editor
  • amazon, amazon-web-services, AWS, cloud-computing, HowTo, technology

The State of Serverless, Circa 10/2019

The State of Serverless, circa 2019

My observations from Serverlessconf NYC and the current state of serverless, the ecosystem, the friction, and innovation

Back in the spring of 2016, A Cloud Guru hosted the first ever Serverless conference in a Brooklyn warehouse. In many ways, that inaugural conference was the birth of the Serverless ecosystem.

Serverlessconf was the first time that this growing community came together to talk about the architectures, benefits, and approaches powering the Serverless movement.

Last week A Cloud Guru once again brought top cloud vendors, startups, and thought leaders in the Serverless space to New York City to exchange ideas, trends, and practices in this rapidly growing space.

In addition to the “hallway track”, which was a great way to meet and (re)connect with talented and passionate technology experts — there were multiple tracks of content.

Collectively, these conferences are a great way to take the pulse of the community — what’s getting better, what’s still hard, and where the bleeding edge of innovation sits.

With apologies to the vendors and talks I didn’t manage to get to, here’s my take on the State of Serverless after spending nearly a week with many of its best and brightest.

Enterprise users have shown up — with their stories
Back in 2016, much of the content (and nearly every talk’s opening slides) at Serverlessconf was some flavor of “Here’s how we define Serverless.”

Content focused on how to get started and lots of how-to talks. Notably absent back in 2016? Enterprise users talking about their experiences applying Serverless in real life with the sole exception of Capital One.

While the latest Serverlessconf retains its technology and practice focus, it was fantastic to see companies like the Gemological Institute of America, Expedia, T-Mobile, Mutual of Enumclaw Insurance, and LEGO on stage in 2019 talking about adopting and benefiting from Serverless architectures.

Growing ecosystem
The highly scientific metric of “square feet of floor space devoted to vendors” continues to grow year over year. But more importantly, those vendors have moved from early stage awareness and information gathering to offering products and services in the here and now.

System integrators and consulting firms specializing in Serverless practices are also showing up — more evidence of enterprise traction in the space.

Configuration & AWS CloudFormation are still creating friction
The buzz was around whether declarative or imperative “Infrastructure as Code” is the better approach, alternatives to CloudFormation, and easier ways to construct and deploy Serverless architectures. Topics like these featured strongly in both the actual content and hallway conversations in 2019 — just as they did in 2016.

Whatever your position on recent approaches like AWS’s CDK and the utility of declarative approaches like AWS SAM, it’s clear that CloudFormation and other vendor-provided options still aren’t nailing it.

Vendors like Stackery.io got a lot of foot traffic from attendees looking for easier ways to build and deploy Serverless apps, while talks from Brian LeRoux and Ben Kehoe explored both the problem, and potential solutions, to the difficulties of using CloudFormation today.

Google and Cloudflare are playing the role of category challengers
Google Cloud Run is taking an intriguing approach — offering customers a container-based specification with the scales-on-usage and pay-per-request model of AWS Lambda. It’s still too early to call GCR’s product market fit, but it’s exciting to see Google step back and reimagine what a Serverless product can be.

Meanwhile, Cloudflare Workers exploit that company’s massive edge infrastructure investment to run chunks of computation that make Lambda functions look huge by comparison. It’s not necessarily a solution to general compute, but given expectations that the bulk of silicon will live on the edge, rather than in data centers, in the future, I’d keep my eye on this one.

Serverless innovation isn’t over
Johann Schleier-Smith talked about UC Berkeley’s position paper on Serverless and the growing attention that Serverless is getting from the research community.

Yours truly laid out a recipe for building the Serverless Supercomputer, starting with Serverless Networking that opens the door to building distributed algorithms serverlessly.

Chris Munns reviewed the pace of innovation for AWS Lambda since its launch in 2014 and hinted at more to come at next month’s AWS re:Invent in Las Vegas.

With their amusing name helping to grab attention, The Agile Monkeys presented a Serverless answer to Ruby on Rails with a high-level object model that compiles down to Lambda functions and other serverless componentry.

It’s still not easy enough
Serverless might sound like a technology stack, but it’s really a vision for software development. In contrast to the ever-growing complexity of servers and Kubernetes, attendees at a Serverless conference are looking for ways to do more with less — less infrastructure, less complexity, less overhead, and less waste.

But while a desire for simplicity and “getting the business of business” done unites the attendees at a conference like this, it’s still the case that too much non-essential complexity gets in the way.

Tools, IDEs, debuggers, security, config & deployment, CI/CD pipelines…a lot of energy from vendors to startups to consultants to enterprise adopters is flowing into getting Serverless projects across the finish line. It may be way easier than servers (and containers), but it’s clearly still not easy enough.

Conferences like this help, but more and better documentation, more sharing of best practices, and tools that can truly streamline the job of delivering business value on top of Serverless remain a work in progress…leaving a lot of untapped potential opportunity in the space still to explore!

Author disclosures: I presented at Serverless NYC ’19 for which I received a registration fee waiver. I’m a former employee of both AWS and Microsoft and currently an independent board member of Stackery.io. I received no compensation from any of the companies or organizations cited above for writing or distributing this article and the opinions provided are my own.


The State of Serverless, Circa 10/2019 was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • 18 Oct, 2019
  • By editor
  • AWS, Azure, cloud-computing, Google Cloud Platform, HowTo, serverless

Serverless Scheduler

This project allows developers to quickly schedule events with precision, allows a large number of open tasks, and scales

Continue reading on A Cloud Guru »

  • 14 Oct, 2019
  • By editor
  • AWS, cloud-computing, HowTo, programming, serverless, technology

Crypto can’t scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS

Crypto can’t scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS

The metrics point to crypto still being a toy until it can achieve real world business scale demonstrated by Amazon DynamoDB

14 transactions per second. No matter how passionate you may be about the aspirations and future of crypto, it’s the metric that points out that when it comes to actual utility, crypto is still mostly a toy.

After all, pretty much any real world problem, including payments, e-commerce, remote telemetry, business process workflows, supply chain and transport logistics, and others require many, many times this bandwidth to handle their current business data needs — let alone future ones.

Unfortunately the crypto world’s current solutions to this problem tend to either blunt the advantages of decentralization (hello, sidechains!) or look like clumsy bolt-ons that don’t close the necessary gaps.

Real World Business Scale

Just how big is this gap, and what would success look like for crypto scalability? We can see an actual example of both real-world transaction scale and what it would take to enable migrating actual business processes to a new database technology by taking a look at Amazon’s 2019 Prime Day stats.

The AWS web site breaks down Amazon retail’s adoption and usage of NoSQL (in the form of DynamoDB) nicely:

Amazon DynamoDB supports multiple high-traffic sites and systems including Alexa, the Amazon.com sites, and all 442 Amazon fulfillment centers. Across the 48 hours of Prime Day, these sources made 7.11 trillion calls to the DynamoDB API, peaking at 45.4 million requests per second.

45 million requests per second. That’s six zeros more than Bitcoin or Eth. Yikes. And this is just one company’s traffic, and only a subset at that. (After all, Amazon is a heavy user of SQL databases as well as DynamoDB.) So the actual TPS DynamoDB is doing at peak is even higher than the number above.

Talk about having a gap to goal…and it doesn’t stop there. If you imagine using a blockchain (with or without crypto) for a real-world e-commerce application and expect it to support multiple companies in a multi-tenanted fashion, want it to replace legacy database systems, and need a little headroom to grow — a sane target might look like 140 million transactions per second.

That’s seven orders of magnitude from where we are today.
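The back-of-the-envelope gap is easy to check. Here is the arithmetic, using only the figures quoted in this article (14 TPS for Bitcoin, the 45.4 million requests/second Prime Day peak, and the 140 million TPS target suggested above):

```python
import math

btc_tps = 14                   # commonly cited Bitcoin throughput
dynamo_peak_tps = 45_400_000   # DynamoDB Prime Day peak, per the AWS quote above
target_tps = 140_000_000       # the multi-tenant "sane target" suggested above

# DynamoDB's peak versus Bitcoin's throughput
ratio = dynamo_peak_tps / btc_tps
print(f"{ratio:,.0f}x, ~{math.log10(ratio):.1f} orders of magnitude")
# → 3,242,857x, ~6.5 orders of magnitude

# Against the 140M TPS target, the gap is a full seven orders of magnitude
gap = math.log10(target_tps / btc_tps)
print(f"target gap: {gap:.0f} orders of magnitude")
# → target gap: 7 orders of magnitude
```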

The Myth of Centralization

Why are these results so different? Let’s examine this dichotomy a little closer. First, note that DynamoDB creates a fully ordered ledger, known as a stream, for each table. Each stream is totally ordered and immutable; once emitted, an entry never changes.

DynamoDB is doing its job by using a whole lot of individual servers communicating over a network to form a distributed algorithm that has a consensus algorithm at its heart.

Cross-table updates are given ACID properties through a transactional API. DynamoDB’s servers don’t “just trust” the network (or other parts of itself), either — data in transit and at rest is encrypted with modern cryptographic protocols and other machines (or the services running on them) are required to sign and authenticate themselves when they converse.

Any of this sound familiar?

The classic, albeit defensive, retort to this observation is, “Well, sure, but that’s a centralized database, and decentralized data is so much harder that it just has to be slower.” This defense sounds sort of plausible on the surface, but it doesn’t survive closer inspection.

First, let’s talk about centralization. A database running in single tenant mode with no security or isolation can be very fast indeed — think Redis or a hashtable in RAM, either of which can achieve bandwidth numbers like the DynamoDB rates quoted above. But that’s not even remotely a valid model for how a retail giant like Amazon uses DynamoDB.

Different teams within Amazon (credit card processing, catalog management, search, website, etc.) do not get to read and write each others’ data directly — these teams essentially assume they are mutually untrustworthy as a defensive measure. In other words, they make a similar assumption that a cryptocurrency blockchain node makes about other nodes in its network!

On the other side, DynamoDB supports millions of customer accounts. It has to assume that any one of them can be an evildoer and that it has to protect itself from customers and customers from each other. Amazon retail usage gets exactly the same treatment any other customer would…no more or less privileged than any other DynamoDB user.

Again, this sounds pretty familiar if you’re trying to handle money movement on a blockchain: You can’t trust other clients or other nodes.

These business-level assumptions are too similar to explain a 7-order-of-magnitude difference in performance. We’ll need to look elsewhere for an explanation.

Is it under the hood?

Now let’s look at the technology…maybe the answer is there. “Consensus” often gets thrown up as the reason blockchain bandwidth is so low. While DynamoDB tables are independent outside of transaction boundaries, it’s pretty clear that there’s a lot of consensus, in the form of totally ordered updates, many of which represent financial transactions of some flavor in those Prime Day stats.

Both blockchains and highly distributed databases like DynamoDB need to worry about fault tolerance and data durability, so they both need a voting mechanism.

Here’s one place where blockchains do have it a little harder: Overcoming Byzantine attacks requires a larger majority (2/3 +1) than simply establishing a quorum (1/2 +1) on a data read or write operation. But the math doesn’t hold up: At best, that accounts for 1/6th of the difference in bandwidth between the two systems, not 7 orders of magnitude.
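To see the vote-size comparison concretely, here is the threshold arithmetic as a sketch (just the two majorities named above, not any particular protocol’s implementation):

```python
def crash_quorum(n: int) -> int:
    # Simple majority for durability/ordering votes: floor(n/2) + 1
    return n // 2 + 1

def byzantine_quorum(n: int) -> int:
    # Byzantine-resistant supermajority: floor(2n/3) + 1
    return (2 * n) // 3 + 1

for n in (3, 9, 99):
    print(n, crash_quorum(n), byzantine_quorum(n))
# At n = 99 that's 50 vs. 67 votes -- a constant-factor difference,
# nowhere near 7 orders of magnitude.
```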

What about Proof of Work? Ethereum, Bitcoin and other PoW-based blockchains intentionally slow down transactions in order to be Sybil resistant. But if that were the only issue, PoS blockchains would be demonstrating results similar to DynamoDB’s performance…and so far, they’re still not in the ballpark. Chalk PoW-versus-PoS up to a couple orders of magnitude, though — it’s at least germane as a difference.

How about the network? One difference between two nodes that run on the open Internet and a constellation of servers in (e.g.) AWS EC2 is that the latter run on a proprietary network. Intra-region, and especially intra-Availability Zone (“AZ”) traffic can easily be an order of magnitude higher bandwidth and an order of magnitude lower latency than open Internet-routed traffic, even within a city-sized locale.

But given that most production blockchain nodes at companies like Coinbase are running in AWS data centers, this also can’t explain the differences in performance. At best, it’s an indication that routing in blockchains needs more work…and still leaves 3 more orders of magnitude unaccounted for.

What about the application itself? Since the Amazon retail results are for multiple teams using different tables, there’s essentially a bunch of implicit sharding going on at the application level: Two teams with unrelated applications can use two separate tables, and neither DynamoDB nor these two users will need to order their respective data writes. Is this a possible semantic difference?

For a company like Amazon retail, the teams using DynamoDB “know” when to couple their tables (through use of the transaction API) and when to keep them separate. If a cryptocurrency API requires the blockchain to determine on the fly whether (and how) to shard by looking at every single incoming transaction, then there’s obviously more central coordination required. (Oh, the irony.)

But given that we have a published proof point here that a large company obviously will perform application level sharding through its schema design and API usage, it seems clear that this is a spurious difference — at best, it indicates an impoverished API or data model on the part of crypto, not an a priori requirement that a blockchain has to be slow in practice.

In fact, we have an indication that this dichotomy is something crypto clients are happy to code to: smart contracts. They’re both 1) distinguished in the API from “normal” (simple transfer) transactions and 2) tend to denote their participants in some fashion.

It’s easy to see the similarity between smart contract calls in a decentralized blockchain and use of the DynamoDB transaction API between teams in a large “centralized” company like Amazon retail. Let’s assume this accounts for an order of magnitude; 2 more to go.

Managed Services and Cloud Optimization

One significant difference in the coding practices of a service like DynamoDB versus pretty much any cryptocurrency is that the former is highly optimized for running in the cloud.

In fact, you’d be hard pressed to locate a line of code in DynamoDB’s implementation that hasn’t been repeatedly scrutinized to see if there’s a way to wring more performance out of it by thinking hard about how and where it runs. Contrast this to crypto implementations, which practically make it a precept to assume the cloud doesn’t exist.

Instance selection, zonal placement, traffic routing, scaling and workload distribution…most of the practical knowledge, operational hygiene, and design methodology learned and practiced over the last decade goes unused in crypto. It’s not hard to imagine that accounts for the remaining gap.

Getting Schooled on Scalability

Are there design patterns we can glean from a successfully scaled distributed system like DynamoDB as we contemplate next-generation cryptocurrency blockchain architectures?

We can certainly “reverse engineer” some requirements by looking at how a commercially viable solution like Amazon’s Prime Day works today:

  • Application layer (client-provided) sharding is a hard requirement. This might take a more contract-centric form in a blockchain than in a NoSQL database’s API, but it’s still critical to involve the application in deciding which transactions require total ordering versus partial ordering versus no ordering. Partial ordering via client-provided grouping of transactions in particular is virtually certain to be part of any feasible solution.
  • Quorum voting may indeed be a bottleneck on performance, but Byzantine resistance per se is a red herring. Establishing a majority vote on data durability across mutually authenticated storage servers with full encoding on the wire isn’t much different from a Proof-of-Stake supermajority vote in a blockchain. So while it matters to “sweat the details” on getting this inner loop efficient, it can’t be the case that consensus per se fundamentally forces blockchains to be slow.
  • Routing matters. Routing alone won’t speed up a blockchain by 7 orders of magnitude, but smarter routing might shave off a factor of 10.
  • Infrastructure ignorance comes at a cost. Cryptocurrency developers largely ignore the fact that the cloud exists (certainly that managed services, the most modern incarnation of the cloud, exist). This is surprising, given that the vast majority of cryptocurrency nodes run in the cloud anyway, and it almost certainly accounts for at least some of the large differential in performance. In a system like DynamoDB you can count on the fact that every line of code has been optimized to run well in the cloud. Amazon retail is also a large user of serverless approaches in general, including DynamoDB, AWS Lambda, and other modern cloud services that wring performance and cost savings out of every transaction.
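To make the first bullet concrete, here’s a sketch of application-layer sharding plus explicit transactional grouping, in DynamoDB request terms. The table and attribute names are hypothetical; the dict shapes are the ones boto3’s `put_item` and `transact_write_items` accept:

```python
def independent_write(sku: str, title: str) -> dict:
    """An unrelated application's write: its own table, no cross-table ordering."""
    return {
        "TableName": "catalog",  # hypothetical table
        "Item": {"sku": {"S": sku}, "title": {"S": title}},
    }

def coordinated_write(order_id: str, sku: str) -> dict:
    """Only when two tables genuinely must agree does the client opt in
    to total ordering, via the transactional API."""
    return {
        "TransactItems": [
            {"Put": {
                "TableName": "orders",
                "Item": {"order_id": {"S": order_id}, "sku": {"S": sku}},
            }},
            {"Update": {
                "TableName": "inventory",
                "Key": {"sku": {"S": sku}},
                "UpdateExpression": "SET stock = stock - :one",
                "ConditionExpression": "stock >= :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }},
        ]
    }

# These request bodies would be passed to boto3's
# dynamodb.put_item(**independent_write(...)) and
# dynamodb.transact_write_items(**coordinated_write(...)) respectively.
```

The point of the split is that the *client* declares which writes need coordination; everything else proceeds with no ordering overhead at all.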

We’re not going to solve blockchain scaling in a single article 😀, but there’s a lot we can learn by taking a non-defensive look at the problem and comparing it to the best known distributed algorithms in use by commercial companies today.

Only by being willing to learn and adapt ideas from related areas and applications can blockchains and cryptocurrencies grow into the lofty expectations that have been set for them…and claim a meaningful place in scaling up to handle real-world business transactions.


Crypto can’t scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • 3 Oct, 2019
  • By editor
  • AWS, Blockchain, cloud-computing, crypto, cryptocurrency, HowTo

15 Hours with CloudFormation

Coding a serverless NAT puncher in AWS CloudFormation — a narrated description of a real-world serverless configuration.

Continue reading on A Cloud Guru »

  • 3 Oct, 2019
  • By editor
  • AWS, cloud-computing, coding, HowTo, programming, serverless

Serverless Mullet Architectures

Business in the front, party in the back. Bring on the mullets!

A 1930s bungalow in Sydney that preserved its historical front facade while radically updating the yard-facing rear of the house. Credit: Dwell.

In residential construction, a mullet architecture is a house with a traditional front but with a radically different — often much more modern — backside where it faces the private yard.

Like the mullet haircut after which the architecture is named, it’s conventional business in the front — but a creative party in the back.

I find the mullet architecture metaphor useful in describing software designs that have a similar dichotomy. Amazon API Gateway launched support for serverless web sockets at the end of 2018, and using them with AWS Lambda functions is a great example of a software mullet architecture.

In this case, the “front yard” is a classic websocket — a long-lived, duplex TCP/IP socket between two systems established via HTTP.

Classic uses for websockets include enabling mobile devices and web browsers to communicate with backend systems and services in real time, and to enable those services to notify clients proactively — without requiring the CPU and network overhead of repeated polling by the client.

In the classic approach, the “server side” of the websocket is indeed a conventional server, such as an EC2 instance in the AWS cloud.

The serverless version of this websocket looks and works the same on the front — to the mobile device or web browser, nothing changes. But the “party in the back” of the mullet is no longer a server — now it’s a Lambda function.

To make this work, API Gateway both hosts the websocket protocol (just as it hosts the HTTP protocol for a REST API) and performs the data framing and dispatch. In a REST API call, the relationship between the call to the API and API Gateway’s call to Lambda (or other backend services) is synchronous and one-to-one.

Both of these assumptions get relaxed in a web socket, which offers independent, asynchronous communication in both directions. API Gateway handles this “impedance mismatch” — providing the long-lived endpoint to the websocket for its client, while handling Lambda invocations (and response callbacks — more on those later) on the backend.

Here’s a conceptual diagram of the relationships with its communication patterns:

A Serverless Websocket Architecture on AWS

When is a serverless mullet a good idea?

When (and why) is a serverless mullet architecture helpful? One simple answer: Anywhere you use a websocket today, you can now consider replacing it with a serverless backend.

Amazon’s documentation uses a chat relay server between mobile and/or web clients to illustrate one possible case where a serverless approach can replace a scenario that historically could only be accomplished with servers.

However, there are also interesting “server-to-server” (if you’ll forgive the expression) applications of this architectural pattern beyond long-lived client connections. I recently found myself needing to build a NAT puncher rendezvous service — essentially a simplified version of a STUN server.

You can read more about NAT punching here, but for the purposes of our discussion here, what matters is that I had the following requirements:

  1. I needed a small amount of configuration information from each of two different Lambda functions. Let’s call this info a “pairing key” — it can be represented by a short string. For discussion purposes, we’ll refer to the two callers as “left” and “right”. Note that the service is multi-tenanted, so there are potentially a lot of left/right pairs constantly coming and going, each using different pairing keys.
  2. I also needed a small amount of metadata that I can get from API Gateway about the connection itself (basically the source IP as it appears to API Gateway, after any NATting has taken place).
  3. I have to exchange the data from (2) between clients who provide the same pairing key in (1); that is, left gets right’s metadata and right gets left’s metadata. There’s a lightweight barrier synchronization here: (3) can’t happen until both left and right have shown up…but once they have shown up, the service has to perform (3) as quickly as possible.

The final requirement above is the reason a simple REST API backed by Lambda isn’t a great solution: It would require the first arriver to sit in a busy loop, continuously polling the database (Amazon DynamoDB in my case) waiting for the other side to show up.

Repeatedly querying DynamoDB would drive up costs, and we’d be subject to the 30-second maximum integration duration of an API call. Using DynamoDB change streams doesn’t work here, either, as the Lambda they would invoke can’t “talk” to the Lambda instance created by invoking the API. It’s also tricky to use Step Functions — “left” and “right” are symmetric peers here, so neither one knows who should kick off a workflow.

Enter…The Mullet

So what can we do that’s better? Well, left and right aren’t mobile or web clients, they’re Lambdas — but they have a very “websockety” problem. They need to coordinate some data and event timing through an intermediary that can “see” both conversations and they benefit from a communication channel that can implicitly convey the state of the barrier synchronization required.

The protocol is simple and looks like this (shown with left as the first arrival):

Here we take full advantage of the mullet architecture:

  • Clients arrive (and communicate) asynchronously with respect to one another, but we can also track the progression of the workflow and coordinate them from the “server” — here, a Lambda/Dynamo combo — that tracks the state of each pairing.
  • API Gateway does most of the heavy lifting, including detecting the data frames in the websocket communication and turning them into Lambda invocations.
  • API Gateway model validation verifies the syntax of incoming messages, so the Lambda code can assume they’re well formed, making the code even simpler.

The architecture is essentially the equivalent of a classic serverless “CRUD over API Gateway / Lambda / Dynamo” but with the added benefits of asynchronous, bidirectional communication and lightweight cross-call coordination.
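For a sense of what the model-validation piece looks like, API Gateway request models are plain JSON Schema (draft-04). A minimal sketch for the pairing message, with hypothetical field names, might look like this (shown as a Python dict for readability):

```python
# Hypothetical API Gateway request model for the "pair" route.
# API Gateway rejects non-conforming frames before Lambda ever runs.
PAIRING_REQUEST_MODEL = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "PairingRequest",
    "type": "object",
    "required": ["pairingKey"],
    "properties": {
        "pairingKey": {"type": "string", "minLength": 1, "maxLength": 64}
    },
    "additionalProperties": False,
}
```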

One important piece of the puzzle is the async callback pathway. There’s an inherent communication asymmetry when we hook up a websocket to a Lambda.

Messages that flow from client to Lambda are easy to model — API Gateway turns them into the arguments to a Lambda invocation. If that Lambda wants to synchronously respond, that’s also easy — API Gateway turns its result into a websocket message and sends it back to the client after the Lambda completes.

But what about our barrier synchronization? In the sequence chart above, it has to happen asynchronously with respect to left’s conversation. To handle this, API Gateway creates a special HTTPS endpoint for each websocket. Calls to this URL get turned into websocket messages that are sent (asynchronously) back to the client.

In our example, the Lambda handling the conversation with right uses this special endpoint to unblock left when the pairing is complete. This represents more “expressive power” than normally exists when a client invokes a Lambda function.
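The barrier step described above can be sketched in a few lines. Everything AWS-specific is injected here — the real service would use a DynamoDB update and the API Gateway management API’s `post_to_connection` on that special endpoint, as described above — and the in-memory stand-ins at the bottom are for illustration only:

```python
import json

def rendezvous(pairing_key, conn_id, source_ip, record_arrival, notify):
    """Barrier logic for the pairing service, with the AWS pieces injected.

    record_arrival(key, peer) -> list of peers seen so far for that key
        (in the real service: a DynamoDB UpdateItem with list_append)
    notify(conn_id, payload) -> pushes a message down a websocket
        (in the real service: post_to_connection on the per-websocket
         callback endpoint API Gateway creates)
    """
    peers = record_arrival(pairing_key, {"conn": conn_id, "ip": source_ip})
    if len(peers) < 2:
        return "waiting"  # first arriver just holds its websocket open

    # Both sides are here: each one gets the *other* side's metadata.
    left, right = peers[0], peers[1]
    notify(left["conn"], json.dumps({"peerIp": right["ip"]}))
    notify(right["conn"], json.dumps({"peerIp": left["ip"]}))
    return "paired"

# In-memory stand-ins for DynamoDB and API Gateway, for illustration:
table, sent = {}, []

def record(key, peer):
    table.setdefault(key, []).append(peer)
    return table[key]

def notify(conn, payload):
    sent.append((conn, payload))

print(rendezvous("k1", "conn-left", "203.0.113.1", record, notify))   # waiting
print(rendezvous("k1", "conn-right", "198.51.100.2", record, notify)) # paired
```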

Serverless Benefits

The serverless mullet architecture offers all the usual serverless advantages. In contrast to a serverful approach, such as running a (fleet of) STUN server(s), there are no EC2 instances to deploy, scale, log, manage, or monitor, and fault tolerance and scalability come built in.

Also unlike a server-based approach that would need a front end fleet to handle websocket communication, the code required to implement this approach is tiny — only a few hundred lines, most of which is boilerplate exception handling and error checking. Even the JSON syntax checking of the messages is handled automatically.

One caveat to this “all in on managed services” approach is that the configuration has a complexity of its own — unsurprisingly, as we’re asking services like API Gateway, Lambda, and Dynamo to do a lot of the heavy lifting for us.

For this project, my AWS CloudFormation template is over 500 lines (including comments), while the code, including all its error checking, is only 383 lines. A single data point, but illustrative of the fact that configuring the managed services to handle things like data frame syntax checking via an embedded JSON Schema makes for some non-trivial CloudFormation.

However, a little extra complexity in the config is well worth it to gain the operational benefits of letting AWS maintain and scale all that functionality!

Mullets all Around

Serverless continues to expand its “addressable market” as new capabilities and services join the party. Fully managed websockets backed by Lambda is a great step forward, but it’s far from the only example of mullet architectures.

Amazon AppSync, a managed GraphQL service, is another example. It offers a blend of synchronous and asynchronous JSON-based communication channels — and when backed by a Lambda instead of a SQL or NoSQL database, it offers another fantastic mullet architecture that makes it easy to build powerful capabilities with built-in query capabilities, all without the need for servers.

AWS and other cloud vendors continue to look for ways to make development easier, and hooking up serverless capabilities to conventional developer experiences continues to be a rich area for new innovation.

Business in the front, party in the back …

bring on the mullets!


Serverless Mullet Architectures was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • 30 Sep, 2019
  • By editor
  • AWS, cloud-computing, HowTo, microservices, programming, serverless

What’s the friction with Serverless?

Three pain points of serverless adoption from The Agile Monkeys

The team at The Agile Monkeys has worked on non-trivial applications with a wide range of technologies for more than a decade — mainly in the retail sector on solutions from e-commerce management to warehouse automation and everything in between. Our engineers are very aware of the enormous challenges of scalability, reliability, and codebase management that many companies face when developing business solutions.

Based on our experience, we’re convinced that serverless is the execution paradigm of the future that solves many challenges of modern application development. But we still see friction in the currently available abstractions — and available tooling still makes it hard to take advantage of the true potential of Serverless.

In the past decade, most successful retail companies opted for using pre-built monolithic e-commerce platforms that they customized for their needs. But with the growth of their user base, those platforms can no longer manage peak load (like Black Friday).

As a result, we’ve repeatedly been involved in “brain-surgery projects” to split their big monolithic codebases into microservices over the past few years. But the architecture change came with new challenges: handling synchronization and communication between services efficiently, and a huge increase in operational complexity.

We started researching Serverless as a potential solution for those challenges two years ago and we saw its tremendous possibilities.

Serverless not only removes most of the operational complexity but, thanks to the many hosted service offerings, it allows us to deprecate part of the glue code required to deal with coordinating services. And for companies that are already selling, it can be smoothly introduced when implementing new features or when a redesign of a service is needed, without affecting the original codebase.

It’s very easy to deploy a lambda function and implement basic use cases; going beyond that requires a lot of knowledge about the cloud service offerings.

But while Serverless is evolving very quickly, it’s still a relatively new idea. Many VPs and Senior Engineers are still hesitant to introduce it into their production systems because of the mindset change and the training it would require for teams that are already working as well-oiled machines.

While it’s very easy to deploy a lambda function and implement basic use cases with the existing tools, going beyond the basics requires a lot of knowledge about the cloud, managed service offerings, and their guarantees and limitations.

When introducing new engineers to #serverless, it still feels like teaching them how to draw an owl: Step 1: Write and deploy a lambda function. Step 2: Build the rest of your damn production-ready event-driven application!

 — @javier_toledo

Faster time to market?

It’s a common mantra in Serverless forums to say that Serverless means “faster time to market,” and we’ve found this can be true for well-trained teams. However, teams that are just starting the journey may not find it to be the case, and can easily become frustrated and end up dropping Serverless in favor of better-known tools.

Both from our experiences with clients adopting (or rejecting) Serverless and from our own experience releasing applications like Made for Serverless, we’ve found the following three pain points along the way:

Pain Point #1: Engineers starting with Serverless might need more time than you’d expect to be productive.

It requires a paradigm shift. You have to switch to “the Serverless way,” start thinking in a more event-driven way, and resist the temptation to build the same kind of backend code you always have but deploy it on lambda.

Serverless is still in a relatively early stage, so we don’t have well-known and established application-level design patterns in the same way that we have MVC for classic database-driven applications.

For instance, if you google “CQRS in AWS” you’ll find half a dozen articles with half a dozen different designs, all of them valid under certain circumstances and offering different guarantees.

As tools are under heavy development, new utilities that look amazing in demos and getting-started guides may have more bugs and hidden limitations than we’d like to admit, requiring some trial, error, and troubleshooting (oh! the price of being on the cutting edge of technology).

Pain Point #2: You definitely need cloud knowledge to succeed.

The existing frameworks provide handy abstractions to significantly reduce the configuration burden, but you still have to know what you’re doing and understand the basics of roles, managed cloud services, and lambdas in order to build anything non-trivial. You need to pick the right services and configure them properly, which requires a lot of knowledge beyond lambda functions.

We see a trend in current serverless frameworks toward providing higher-level abstractions and building blocks. But when it’s time to build an application, we miss an experience like Ruby on Rails or Spring Boot, which help developers write business logic and provide some basic structure to their programs.

In a sense, we’re still at a point where the tools are optimizing the process of configuring the cloud (and they’re doing great work there!), but we haven’t yet reached the point where a team can safely forget about that and fully focus on modeling their domain.

Pain Point #3: Functions are actually a pretty low-level abstraction.

I know this might be a hot take, but for us, functions are a very low-level abstraction that might make it challenging to properly architect your project as your services grow.

When you’re starting with Serverless, the idea of splitting your code into small, manageable functions is compelling. But since there are no clear guides on how to properly architect the code inside a lambda function, we rely on individual engineers to figure this out every time.

And while more experienced engineers will figure out solutions, less experienced ones might find this difficult. In any case, moving from one project to another will require reinventing the wheel, because there are no well-established conventions.

Identifying the challenges is just the first step to improvement. We strongly believe in a Serverless future where everyone is using this technology, because it’s what makes sense from a business perspective (companies need to focus on what makes them special and externalize everything else).

So what do we think is needed to get to that point?

Our innovation team is working on some ideas that we will share in Serverlessconf NYC. Stay tuned for our next article in the series that we will publish during the event!

This is a guest article written by The Agile Monkeys’ innovation team: Javier Toledo, Álvaro López Espinosa and Nick Tchayka, with reviews, ideas and contributions from many other people in our company. Thank you, folks!


What’s the friction with Serverless? was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • 17 Sep, 2019
  • (0) Comments
  • By editor
  • AWS, cloud-computing, HowTo, programming, serverless, the-agile-monkeys

The Rise of the Serverless Architect

The focus has expanded to the entire application lifecycle

Over the last 4 years of developing the Serverless Framework, and working closely with many of the thousands of organizations that use it to build their applications, I’ve had the good fortune of watching the movement evolve considerably.

In the early days I met countless pioneering developers who were utilizing this new architecture to build incredible things, despite the considerable challenges and relatively crude tooling that existed.

I also worked with many of these early pioneers to convince their organizations to go all-in on Serverless, despite the lack of successful case studies and tried and true best practices — often based simply on an internal POC that promised a shorter development cycle and lower total cost of ownership.

As the tooling has evolved, and the case studies have piled up, I’ve noticed that these early Serverless pioneers have forged a new title that is gaining prominence within organizations — that of Serverless Architect.

What is a Serverless Architect?

Early on in the Serverless journey, when we were initially developing the Serverless Framework (in those days known as JAWS), all of the focus was on development and deployment.

It was clear that this new piece of infrastructure called Lambda had some amazing qualities, but how could we as developers actually build something meaningful with it? And seeing as how Lambda is a cloud native service, the question that followed shortly after was: how can we actually deploy these apps in a sane way?

As various solutions to these problems were developed and improved upon, the focus of developers building Serverless applications expanded to the entire application lifecycle, including testing, monitoring and securing their Serverless apps.

The focus of Serverless has expanded to the entire application lifecycle

A Serverless Architect is a developer who takes this lifecycle focused view and often personally owns at least part of every stage of the Serverless Application Lifecycle. They don’t simply write functions — they implement business results while thinking through how the code that delivers those results will be developed, deployed, tested, monitored, and secured.

Why is the Serverless Architect essential?

Serverless architectures are essentially collections of managed services connected by functions. Because of this unique and novel model it’s important that the architect has a deep understanding of the event-driven, cloud native paradigm of the architecture.

The demand for the Serverless Architect is a direct result of the unique nature of this architecture and the Serverless Application Lifecycle that accompanies it. Unlike legacy architectures, these various lifecycle stages are no longer separate concerns handled by separate teams at separate times — but rather a single integrated lifecycle that needs to be addressed in a unified way.

There are a couple specific reasons this is the case with Serverless:

  1. Due to the reduced complexity and self-serve nature of the Serverless architecture, developers are more likely to be responsible for the monitoring and security of their applications.
  2. Due to the cloud native nature of the services that make up a Serverless Architecture, the develop, deploy, and test stages are naturally more integrated.
  3. Due to the focus on simplicity with Serverless architecture, there’s a stronger desire for fewer tools and more streamlined experiences.

As organizations mature in their Serverless adoption, the demand for these Serverless Architects grows quickly. While one person thinking this way in the early days is often all that is needed to get adoption off the ground, it often takes teams of Serverless Architects to scale to a ‘serverless first’ mindset.

What types of tooling does the Serverless Architect need?

As Serverless continues to grow in adoption and the number of Serverless Architects continues to increase, it’s becoming clear that unified tooling that addresses the entire Serverless Application Lifecycle is going to be increasingly valuable.

Cobbling together multiple complex solutions is antithetical to the whole Serverless mindset — and if that’s what’s required to be successful with Serverless, then something’s gone wrong.

At Serverless Inc. we’re evolving the Serverless Framework to address the complete application lifecycle while maintaining the streamlined developer workflow that our community has grown to love. We’re working hard to ensure that Serverless Architects have the tools they need to flourish and we’re always excited to hear feedback.

Sign up free and let us know what you think.


The Rise of the Serverless Architect was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • 6 Sep, 2019
  • (0) Comments
  • By editor
  • AWS, cloud-computing, HowTo, programming, serverless, technology

How I replicated an $86 million project in 57 lines of code

When an experiment with existing open source technology does a “good enough” job

The Victoria Police are the primary law enforcement agency of Victoria, Australia. With over 16,000 vehicles stolen in Victoria this past year — at a cost of about $170 million — the police department is experimenting with a variety of technology-driven solutions to crack down on car theft. They call this system BlueNet.

To help prevent fraudulent sales of stolen vehicles, there is already a VicRoads web-based service for checking the status of vehicle registrations. The department has also invested in a stationary license plate scanner — a fixed tripod camera which scans passing traffic to automatically identify stolen vehicles.

Don’t ask me why, but one afternoon I had the desire to prototype a vehicle-mounted license plate scanner that would automatically notify you if a vehicle had been stolen or was unregistered. Understanding that these individual components existed, I wondered how difficult it would be to wire them together.

But it was after a bit of googling that I discovered the Victoria Police had recently undergone a trial of a similar device, and the estimated cost of roll out was somewhere in the vicinity of $86,000,000. One astute commenter pointed out that the $86M cost to fit out 220 vehicles comes in at a rather thirsty $390,909 per vehicle.

Surely we can do a bit better than that.

Existing stationary license plate recognition systems

The Success Criteria

Before getting started, I outlined a few key requirements for product design.

Requirement #1: The image processing must be performed locally

Streaming live video to a central processing warehouse seemed the least efficient approach to solving this problem. Besides the whopping bill for data traffic, you’re also introducing network latency into a process which may already be quite slow.

Although a centralized machine learning algorithm is only going to get more accurate over time, I wanted to learn whether a local, on-device implementation would be “good enough”.

Requirement #2: It must work with low quality images

Since I don’t have a Raspberry Pi camera or USB webcam, I’ll be using dashcam footage — it’s readily available and an ideal source of sample data. As an added bonus, dashcam video represents the overall quality of footage you’d expect from vehicle-mounted cameras.

Requirement #3: It needs to be built using open source technology

Relying upon proprietary software means you’ll get stung every time you request a change or enhancement — and the stinging will continue for every request made thereafter. Using open source technology is a no-brainer.

My solution

At a high level, my solution takes an image from a dashcam video, pumps it through an open source license plate recognition system installed locally on the device, queries the registration check service, and then returns the results for display.

The data returned to the device installed in the law enforcement vehicle includes the vehicle’s make and model (which it only uses to verify whether the plates have been stolen), the registration status, and any notifications of the vehicle being reported stolen.

If that sounds rather simple, it’s because it really is. For example, the image processing can all be handled by the openalpr library.

This is really all that’s involved to recognize the characters on a license plate:
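
The original snippet isn’t reproduced here, but a sketch along these lines, using openalpr’s Python bindings, covers the whole recognition step. The country code and config paths below are assumptions; adjust them for your install.

```python
# Sketch of the recognition step using openalpr's Python bindings.
# The "au" country code and the config/runtime paths are assumptions.

def best_plate(results):
    """Pick the highest-confidence candidate from openalpr results."""
    candidates = [c for r in results for c in r["candidates"]]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c["confidence"])

def recognize(path):
    from openalpr import Alpr  # external dependency, imported lazily

    alpr = Alpr("au", "/etc/openalpr/openalpr.conf",
                "/usr/share/openalpr/runtime_data")
    try:
        return best_plate(alpr.recognize_file(path)["results"])
    finally:
        alpr.unload()
```

Each result from openalpr carries a list of candidate plates with confidence scores, which is all the downstream lookup needs.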

A Minor Caveat
Public access to the VicRoads APIs is not available, so license plate checks occur via web scraping for this prototype. While scraping is generally frowned upon, this is a proof of concept and I’m not slamming anyone’s servers.

Here’s what the dirtiness of my proof-of-concept scraping looks like:
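
The glue code amounts to fetching a page and pulling a status out of the markup. The endpoint URL, query parameter, and response markup below are hypothetical placeholders, since the real VicRoads service exposes no public API.

```python
# Rough sketch of the proof-of-concept scrape. The endpoint URL, query
# parameter, and "registration-status" markup are hypothetical placeholders;
# the real VicRoads service exposes no public API.
import re
import urllib.parse
import urllib.request

def parse_status(html):
    """Pull a registration status string out of the response HTML."""
    match = re.search(r'class="registration-status"[^>]*>([^<]+)<', html)
    return match.group(1).strip() if match else None

def check_plate(plate):
    query = urllib.parse.urlencode({"plate": plate})
    url = "https://example.com/registration-check?" + query  # placeholder
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_status(resp.read().decode("utf-8"))
```

It’s fragile by design: the moment the page layout changes, the regex breaks, which is exactly the dirtiness a proof of concept can tolerate.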

Results

I must say I was pleasantly surprised.

I expected the open source license plate recognition to be pretty rubbish. Additionally, the image recognition algorithms are probably not optimised for Australian license plates.

The solution was able to recognise license plates in a wide field of view.

Annotations added for effect. Number plate identified despite reflections and lens distortion.

That said, the solution would occasionally have issues with particular letters.

Incorrect reading of plate, mistook the M for an H

But … the solution would eventually get them correct.

A few frames later, the M is correctly identified and at a higher confidence rating

As you can see in the above two images, processing the image a couple of frames later jumped from a confidence rating of 87% to a hair over 91%.

I’m confident, pardon the pun, that the accuracy could be improved by increasing the sample rate and sorting by the highest confidence rating. Alternatively, a threshold could be set that only accepts a confidence greater than 90% before going on to validate the registration number.

Those are very straightforward code-first fixes, and they don’t preclude training the license plate recognition software on a local data set.
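
Both fixes sketch out to a few lines: collect readings across frames, take the highest-confidence one, and require it to clear the threshold before hitting the registration check. The 90% figure comes from the discussion above; the function itself is our illustration.

```python
# Sketch of the two code-first fixes: sample (plate, confidence) readings
# across successive frames, take the best one, and only accept it if it
# clears a confidence threshold (90% per the discussion above).

def pick_reading(readings, threshold=90.0):
    """readings: list of (plate, confidence) pairs from successive frames."""
    if not readings:
        return None
    plate, confidence = max(readings, key=lambda r: r[1])
    return plate if confidence > threshold else None
```

With the article’s example frames, the early misread (M seen as H at 87%) loses out to the later correct reading at just over 91%.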

The $86,000,000 Question

To be fair, I have absolutely no clue what the $86M figure includes — nor can I speak to the accuracy of my open source tool with no localized training vs. the pilot BlueNet system.

I would expect part of that budget includes the replacement of several legacy databases and software applications to support the high frequency, low latency querying of license plates several times per second, per vehicle.

On the other hand, the cost of ~$391k per vehicle seems pretty rich — especially if BlueNet isn’t particularly accurate and there are no large-scale IT projects to decommission or upgrade dependent systems.

Future Applications

While it’s easy to get caught up in the Orwellian nature of an “always on” network of license plate snitchers, there are many positive applications of this technology. Imagine a passive system that scans fellow motorists for an abductor’s car and automatically alerts authorities and family members to its current location and direction.

Tesla vehicles are already brimming with cameras and sensors and can receive OTA updates — imagine turning these into a fleet of virtual good samaritans. Uber and Lyft drivers could also be outfitted with these devices to dramatically increase the coverage area.

Using open source technology and existing components, it seems possible to offer a solution that provides a much higher rate of return — for an investment much less than $86M.

Part 2 — I’ve published an update, in which I test with my own footage and catch an unregistered vehicle, over here:

Remember the $86 million license plate scanner I replicated? I caught someone with it.


How I replicated an $86 million project in 57 lines of code was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • 21 Aug, 2019
  • (0) Comments
  • By editor
  • cloud-computing, hackathons, HowTo, javascript, Open Source, tech

CloudFormation is an infrastructure graph management service — and needs to act more like it

CloudFormation should represent our desired infrastructure graphs in the way we want to build them

What’s AWS CloudFormation?

As Richard Boyd says, CloudFormation is not a cloud-side version of the AWS SDK. Rather, CloudFormation is an infrastructure-graph management service.

But it’s not clear to me that CloudFormation fully understands this, and I think it should more deeply align with the needs that result from that definition.

Chief among these needs is that CloudFormation resources should be formed around the lifecycle of the right concepts in each AWS service — rather than simply mapping to the API calls provided by those services.

What’s the Issue?

For an example, let’s talk about S3 bucket notifications. If there’s a standard “serverless 101”, it’s image thumbnailing. Basic stuff, right? You have an S3 bucket, and you use bucket notifications to trigger a Lambda that will create the thumbnails and write them back to the bucket.

Any intro-to-serverless demo should show best practices, so you’ll put this in CloudFormation. The best practice for CloudFormation is to never explicitly name your resources unless you absolutely have to — so you never have to worry about name conflicts.

But surprise! You simply can’t trigger a Lambda from an S3 bucket that has a CloudFormation-assigned name. The crux of it is this:

  • Bucket notification configuration is only settable through the AWS::S3::Bucket resource, and bucket notifications check for permissions at creation time. If the bucket doesn’t have permission to invoke the Lambda, creation of that notification config will fail.
  • The AWS::Lambda::Permission resource that grants that invoke permission requires the name of the bucket (as part of its source ARN).
  • If CloudFormation is assigning the bucket name, it’s not available in the stack until the bucket (and its notification configuration) are created.

Thus, you end up with a circular dependency. The AWS-blessed solution, described in several different places, is to hard-code an explicit bucket name on both the Bucket and Permission resources.
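
That workaround looks roughly like the template fragment below, with the bucket name hard-coded in two places. The resource names are illustrative, and the ThumbnailFunction resource is assumed to be defined elsewhere in the same template.

```yaml
# Sketch of the documented workaround: hard-code the bucket name so the
# permission can reference it without a circular dependency.
# Resource names are illustrative; ThumbnailFunction is assumed to be
# defined elsewhere in the template.
Resources:
  ThumbnailPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref ThumbnailFunction
      Principal: s3.amazonaws.com
      SourceArn: arn:aws:s3:::my-explicitly-named-bucket  # hard-coded name
  ImageBucket:
    Type: AWS::S3::Bucket
    DependsOn: ThumbnailPermission  # permission must exist first
    Properties:
      BucketName: my-explicitly-named-bucket              # hard-coded again
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: s3:ObjectCreated:*
            Function: !GetAtt ThumbnailFunction.Arn
```

Note the DependsOn: the permission has to exist before the bucket’s notification configuration is validated at creation time.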

This isn’t necessary. If we look at the lifecycle of the pieces involved, we can see that the existence of the bucket should be decoupled from the settings of that bucket.

If we had an AWS::S3::BucketNotification resource that took the bucket name as a parameter, we could create the AWS::S3::Bucket first and provide its name to both the BucketNotification and the Permission.

Despite this option, AWS has for years explicitly punted on this issue, telling customers in official communications to just work around it.

What about Lambda?

Going back to infrastructure graph representation, let’s talk about Lambda. CloudFormation has traditionally managed the infrastructure onto which applications were deployed. But in a serverless world, the infrastructure is the application.

When I want to do a phased rollout of a new version of a Lambda function, I’m supposed to have a CodeDeploy resource in the same template as my function. I update the AWS::Lambda::Function resource, and CodeDeploy takes care of the phased rollout using a weighted alias—all while my stack is in the UPDATING state.

The infrastructure graph during the rollout, when two versions of the code are deployed at the same time, has no representation within CloudFormation — and that’s a problem.

What if I want this rollout to happen over an extended period of time? What if I want to deploy two versions of a Lambda function to exist alongside each other indefinitely?

The latter is literally impossible to achieve with a single CloudFormation template today. The AWS::Lambda::Version resource publishes what’s in $LATEST, which is what AWS::Lambda::Function sets.

Instead, when we have phased rollouts, we should be speaking of deployments, decoupled from the existence of the function itself.

Imagine a resource like AWS::Lambda::Deployment that took the function name, code, and configuration as parameters, published them as a new version, and exposed the version number as an attribute.

Several of these resources could be included in the same template without conflicting, and your two deployments could then be wired to a weighted alias for phased rollout. Note: To do this properly, we’d need an atomic UpdateFunctionCodeAndConfiguration API call from the Lambda service.
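
For reference, the weighted-alias mechanism this would wire into already exists at the API level. A sketch with boto3 (function name, alias name, and version numbers are assumptions):

```python
# Sketch of shifting a fraction of traffic to a new Lambda version via a
# weighted alias. Function name, alias name, and versions are assumptions.

def routing_config(new_version, weight):
    """Build the RoutingConfig payload for Lambda's UpdateAlias call."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be between 0 and 1")
    return {"AdditionalVersionWeights": {str(new_version): weight}}

def shift_traffic(function_name, alias, stable_version, new_version, weight):
    import boto3  # external dependency, imported lazily

    client = boto3.client("lambda")
    client.update_alias(
        FunctionName=function_name,
        Name=alias,
        FunctionVersion=str(stable_version),   # still serves the remainder
        RoutingConfig=routing_config(new_version, weight),
    )
```

A hypothetical AWS::Lambda::Deployment resource would let CloudFormation express this same intermediate state declaratively, rather than leaving it to imperative calls like the one above.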

In this way, CloudFormation could represent the state of the graph during a rollout, not just on either side of it.

What’s the So What?

The important notion here is that a resource’s create/update/delete lifecycle doesn’t need to be mapped directly to create/update/delete API calls. Instead, the resources for a service need to match the concepts that allow coherent temporal evolution of an infrastructure that uses the service.

When this is achieved, CloudFormation can adequately represent our desired infrastructure graphs in the way we want to build them, which will only become more critical as serverless/service-full architecture grows in importance.

Epilogue: New tools like the CDK look to build client-side abstractions on top of CloudFormation. In general, I’m not a fan of those approaches, for reasons that I won’t detail here. In any case, they will never be fully successful if CloudFormation doesn’t support the infrastructure graph lifecycles that those abstractions need to build upon.


CloudFormation is an infrastructure graph management service — and needs to act more like it was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • 6 Aug, 2019
  • (0) Comments
  • By editor
  • AWS, cloud-computing, HowTo, programming, serverless, technology
