How a group of young developers want to help us vote

Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs

Stevens Institute of Technology’s Google Developer Student Club. Names left to right: Tim Leonard, Will Escamilla, Rich Bilotti, Justin O’Boyle, Luke Mizus, and Rachael Kondra

The Google Developer Student Club at the Stevens Institute of Technology built their own website that makes local government data user-friendly for voters in local districts. The goal: take obscure budget and transportation information, display it via an easy-to-understand UI, and help voters inform themselves more easily.

When Tim Leonard first moved to Hoboken, New Jersey to start school at the Stevens Institute of Technology, he was interested in anything but government. A computer science major with a deep interest in startups, he was more likely to be found at a lecture on computational structures than one on political science.

However, as the founder of his university’s Google Developer Student Club (DSC) chapter, Tim, along with his fellow classmates, had the opportunity to make the trip into New York City to attend a developer community meetup with Ralph Yozzo, a community organizer from Google Developer Groups (GDG) NYC. While Ralph had given several talks on different technologies and programming techniques, this time he decided to try something new: government budgets.

A slide from Ralph’s presentation

Titled “Why we should care about budgets,” Ralph’s talk to the young programmers focused on why tracking government spending in their community matters. He explained how public budgets fund many parts of our lives – from getting to work, to taking care of our health, to going to a good school. However, Ralph pointed out that while there are laws that attempt to make this data public, no platform exists that makes the information truly accessible. Instead, most of it is tucked away in different corners of the internet: unorganized and hard to understand.

Tim soon realized that programming could be the solution, and that his team had the chance to grow in a whole new way, outside the traditional classroom setting. With Ralph’s encouragement, Tim and his team started thinking about how they could build a platform to collect all of this data and provide a UI that’s easy for any user to interact with. By creating a well-organized website that could pull in all of this local information, streamline it, and produce easy-to-understand graphics, the DSC Stevens team imagined they could have an impact on how voters inform themselves before casting their ballots in local elections.

“What if we had a technical approach to local government? Where our site would have actionable metrics that held us accountable for getting information out to the public.”

Tim thought if local voters could easily understand how their representatives were spending their community’s money, they could use it as a new framework to decide how to vote. The next step was to figure out the best way to get started.

An image from the demo site

The DSC Stevens team quickly agreed that their goal should be to build a website about their own city, Hoboken. They named it “Project Crystal” and started taking Google App Engine courses and doing Node.js server run-throughs. With an eye toward the data they would eventually store and organize, they also dove into Google Cloud demos and workshops on Google Charts. They were determined to build something that would store public information in a different way.

“Bounce rates and click through metrics ensure we evaluate our site like a startup. Instead of selling a product, our platform would focus on getting people to interact with the data that shapes their everyday lives.”

After participating in different courses on how to use Google Cloud, Maps, and Charts, they finally put it all together and created the first version of their idea – an MVP site, built to drive user engagement, that would serve as their prototype.
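To give a sense of the Charts work involved, here is a minimal sketch (with made-up numbers, not the actual Project Crystal code) of rendering a budget breakdown with Google Charts:

<html>
  <head>
    <script src="https://www.gstatic.com/charts/loader.js"></script>
    <script>
      // Load the Google Charts core package, then draw once it's ready.
      google.charts.load('current', {packages: ['corechart']});
      google.charts.setOnLoadCallback(drawBudgetChart);

      function drawBudgetChart() {
        // Hypothetical budget categories, for illustration only.
        var data = google.visualization.arrayToDataTable([
          ['Category', 'Spending ($M)'],
          ['Public Safety', 32.1],
          ['Transportation', 18.4],
          ['Parks & Recreation', 7.9]
        ]);
        var chart = new google.visualization.PieChart(
            document.getElementById('budget_chart'));
        chart.draw(data, {title: 'City Budget by Category'});
      }
    </script>
  </head>
  <body>
    <div id="budget_chart" style="width: 640px; height: 400px;"></div>
  </body>
</html>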

A video explaining the Project Crystal website

Complete with easy-to-understand budget charts, contact information for different public officials, and maps to help users locate important services, the prototype site has been their first major step in turning complicated data into actionable voting information. Excited about their progress, Tim wants to eventually host the site on Google Cloud so his team can store more data and offer the platform to local governments across the country.

Image of the DSC Stevens team adding Google Charts to their demo site

The DSC Stevens team agrees: access to resources like Project Crystal could change how we vote. They hope that with the right technical solutions around data, voters will be better informed, eager to ask more of their representatives, and more willing to participate in the day-to-day work of building their communities, together.

“Our advice to other student developers is to find outlets, like DSC, that enable you to think about helping others. For us, it was figuring out how to use our Google Cloud credits for good.”

Want to start a project of your own? If you’re a university student, join a Developer Student Club near you. If you’re a professional, find the right Google Developer Group for you.

Daring to code: How one young developer found her way in a rural community in Russia

Posted by Jennifer Kohl, Google Developers Global Communities Program Manager

Luiza in her hometown, Magas

Magas is the capital of the Republic of Ingushetia, the smallest region in Russia. Centered between Chechnya and North Ossetia, the area is no stranger to conflict. Even as it rebuilds, the region has seen its unemployment rate rise as high as 50 percent. Magas, a mostly rural area, is home to a small population of just under six thousand people – it’s estimated that fewer than 100 of them are developers.

Yet one day, that small group of developers decided to take their first step toward becoming a community. These founders had heard of Google Developer Groups (GDG) and had seen its community meetups in action on trips to larger cities in Russia. Inspired by how GDG brought developers together, they believed starting a community in Magas was just what they all needed to grow.

GDG Magas was up and running immediately, hosting small community events in classrooms and meeting spaces across town. And it was there, at a local meetup, where GDG Magas met Luiza.

Luiza speaking at a University competition

At the time, Luiza was a student at a local university. Equipped with a curious mind, she was hungry to learn more. She often challenged herself to think about how women could grow professionally and personally within traditional cultures. Luiza was interested in technology, a mostly unheard-of career path in this small town. At the same time, Women Techmakers, a Google program that provides resources for women in technology, started collaborating with GDG chapters around the world to help women like Luiza get started on their journey.

So together, GDG Magas and Women Techmakers started hosting talks and workshops for women in the community. Eventually, they began running a programming class for beginners, and that’s where Luiza realized she had the space to truly explore her interest in code. The community organized thirteen classes, and each Saturday Luiza would join GDG Magas to learn everything from arrays, to Python, to JavaScript, and more.

“I learned everything a beginner needs: numeral systems, loops, algorithms, and even the basics of web development. I was able to work with GDG mentors to improve my skills both in the backend and frontend. Someone was always there to answer my questions.”

With GDG Magas providing Luiza with this access to learning materials and mentorship, there has been no turning back. Luiza landed a competitive role working for an internet company, will soon give her own talks at GDG events, and is even starting her own Google Developer Student Club as she completes her studies in Magas. Luiza is now at the forefront of helping a rural town become a growing tech scene, taking the lead to shape her future and that of many young developers around her.

GDG Magas and similar developer communities are growing faster than ever, thanks to determined developers just like Luiza.

Ready to find a developer community near you? Join a local Google Developer Group, here.

Automate & Extend with Apps Script (Google Cloud for Student Developers)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

In the previous episode of our new Google Cloud for Student Developers video series, we introduced the G Suite REST APIs, showing how to enhance your applications by integrating with Gmail, Drive, Calendar, Docs, Sheets, and Slides. However, not all developers prefer that lower-level style of programming, which requires working with HTTP, OAuth2, and the request-response cycle of API usage. Building apps that access Google technologies is open to everyone at any level, not just advanced software engineers.

Enhancing career readiness of non-engineering majors helps make our services more inclusive and helps democratize API functionality to a broader audience. For the budding data scientist, business analyst, DevOps staff, or other technical professionals who don’t code every day as part of their profession, Google Apps Script was made just for you. Rather than thinking about development stacks, HTTP, or authorization, you access Google APIs with objects.

This video blends a standard “Hello World” example with various use cases where Apps Script shines, including cases of automation, add-ons that extend the functionality of G Suite editors like Docs, Sheets, and Slides, accessing other Google or online services, and custom functions for Google Sheets—the ability to add new spreadsheet functions.
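As a taste of that last use case, here is what a custom function can look like. This is a minimal sketch, not code from the video; the @customfunction JSDoc tag is what surfaces it in Sheets autocomplete, after which users can type =USD_TO_EUR(A1) into any cell.

/**
 * Converts a US dollar amount to euros.
 * (Illustrative only: a real version would fetch a live exchange rate.)
 *
 * @param {number} dollars The amount in USD.
 * @return {number} The equivalent amount in EUR.
 * @customfunction
 */
function USD_TO_EUR(dollars) {
  var RATE = 0.92; // assumed fixed rate for this sketch
  return dollars * RATE;
}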

One featured example demonstrates the power to reach multiple Google technologies in an expressive way: lots of work, not much code. What may surprise readers is that this entire app, written by a colleague years ago, consists of just 4 lines of code:

function sendMap() {
  var sheet = SpreadsheetApp.getActiveSheet();      // the active Google Sheet
  var address = sheet.getRange('A1').getValue();    // read an address from cell A1
  var map = Maps.newStaticMap().addMarker(address); // build a static map marking it
  GmailApp.sendEmail('[email protected]',           // recipient obfuscated in the original
      'Map', 'See below.', {attachments:[map]});    // email the map as an attachment
}

Apps Script shields its users from the complexities of authorization and “API service endpoints.” Developers only need an object to interface with a service; in this case, SpreadsheetApp to access Google Sheets, and similarly, Maps for Google Maps plus GmailApp for Gmail. Viewers can build this sample line-by-line with its corresponding codelab (a self-paced, hands-on tutorial). This example helps student (and professional) developers…

  1. Build something useful that can be extended into much more
  2. Learn how to accomplish several tasks without a lot of code
  3. Imagine what else is possible with G Suite developer tools

For further exploration, check out this video as well as this one, which introduces Apps Script and presents the same code sample in more detail. (Note that the second video emails the map’s link, but the app has since been updated to attach the map instead; the code has been updated everywhere else.) You may also access the code at its open source repository. If that’s not enough, learn about other ways you can use Apps Script from its video library. Finally, stay tuned for the next pair of episodes, which will cover full sample apps: one using the G Suite REST APIs and another using Apps Script.

We look forward to seeing what you build with Google Cloud.

Evolving automations into applications using Apps Script

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Editor’s Note: Guest authors Diego Moreno and Sophia Deng (@sophdeng) are from Gigster, a firm that builds dynamic teams made of top global talent who create industry-changing custom software.

Prelude: Data input & management … three general choices

Google Cloud provides multiple services for gathering and managing data. Google Forms paired with Google Sheets is quite popular, as the combination requires no engineering resources while being incredibly powerful, providing storage of up to 5 million cells of data and built-in analytics for small team projects.

At the other end of the spectrum, to support a high volume of users or data, Google Cloud provides advanced serverless platforms like Google App Engine (web app-hosting) and Google Cloud Functions (function/service-hosting) that can use Google Cloud Firestore for fast and scalable data storage. These are perfect for professional engineering teams that need autoscaling to respond to any level of user traffic and data input. Such apps can also be packaged into a container and deployed serverlessly on Google Cloud Run.

However, it’s quite possible your needs are right in between. Today, we’re happy to present the Gigster story and their innovative use of Google Apps Script—a highly accessible service conventionally relegated to simple macro and add-on development, but which Gigster used to its advantage, building robust systems to transform their internal operations. Apps Script is also serverless, meaning Gigster didn’t have to manage any servers for their application, nor did they need to find a place to host its source code.

The Gigster story

Gigster enables distributed teams of software engineers, product managers and designers to build software applications for enterprise clients. Over the past five years, Gigster has delivered thousands of projects, all with distributed software teams. Our group, the Gigster Staffing Operations Team, is responsible for assembling these teams from Gigster’s network of over 1,000 freelancers.

Two years ago, our team began building custom software to automate the multi-stage and highly manual team staffing process. Building internal software has allowed the same-size Staffing Operations Team (3 members!) to enjoy a 60x reduction in time spent staffing each role.

The Apps Script ecosystem has emerged as the most critical component in our toolkit for building this internal software, due to its versatility and ease of deployment. We want to share how one piece of the staffing process has evolved to become more powerful over time thanks to Apps Script. Ultimately, we hope that sharing this journey enables all types of teams to build their own tools and unlock new possibilities.

End-to-end automation in Google Sheets

Staffing is an operationally intensive procedure. Just finding willing and able candidates requires numerous steps:

  1. Gathering and formatting customer requirements.
  2. Communicating with candidates through multiple channels.
  3. Logging candidate responses.
  4. Processing paperwork for placement.

To add complexity, many of these steps require working with different third-party applications. For a while, we performed every step manually, tracking every piece of data generated in one central Sheet (the “Staffing Broadcast Google Sheet”). At a certain point, this back-and-forth work of logging data from numerous applications became unsustainable. Although we leveraged Google Sheets features like Data Validation rules and filters, the Staffing Broadcast Sheet could not eliminate the many manual processes required of the team.


The centralized Staffing Broadcast Google Sheet provided organization, but required a high degree of manual entry for tracking candidate decisions.

The key transformation was integrating Sheets data with third-party APIs via Apps Script. This enabled us to cut out the most time-consuming operations. We no longer had to flip between applications to message candidates, wait for their replies, and then manually track responses.

To interact with these APIs, we built a user interface directly into the Staffing Broadcast Google Sheet. By introducing an information module, as well as drop-down lists and buttons, we were able to define a small set of manual actions versus the much wider list of tasks the tool would perform automatically across multiple applications.


By integrating Google Apps Script with third-party APIs and creating a user interface, we evolved the Staffing Broadcast Tool to centralize and automate almost every step of the staffing process.

doPost() is the key function in our staffing tool that lets third-party services trigger our Apps Script automations. Below is a snippet showing how we listened for candidates’ responses from a third-party messaging application; in this case, the incoming message is queued in a Google Sheet so it can be processed with improved error handling.

/**
 * Receive POST requests and record them to a queue.
 */
function doPost(e) {
  var payload = e.postData.contents;            // raw body of the incoming request
  SpreadsheetApp.openById(SPREADSHEET_ID)       // SPREADSHEET_ID is defined elsewhere
      .getSheetByName("Unprocessed")            // the queue sheet for raw messages
      .appendRow([payload]);                    // enqueue for later processing
  return ContentService.createTextOutput("");   // respond with HTTP 200
}

Almost all manual work associated with finding candidates was automated through the combination of integrations with third-party APIs and having a user interface to perform a small, defined set of actions. Our team’s day-to-day became shockingly simple: select candidates to receive messages within the Staffing Broadcast Tool, then click the “Send Broadcast” button. That’s it. The tool handled everything else.
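Wiring up a button like that in Apps Script is straightforward. A minimal sketch (the sendBroadcast body here is hypothetical, not our production code) adds a custom menu to the Sheet:

// Adds a custom menu when the spreadsheet opens.
function onOpen() {
  SpreadsheetApp.getUi()
      .createMenu('Staffing')
      .addItem('Send Broadcast', 'sendBroadcast')
      .addToUi();
}

// Hypothetical handler: message each candidate in the selected rows.
function sendBroadcast() {
  var rows = SpreadsheetApp.getActiveSheet().getActiveRange().getValues();
  rows.forEach(function(row) {
    // A real version would call the messaging API via UrlFetchApp.fetch().
    Logger.log('Would message candidate: ' + row[0]);
  });
}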

Leveraging Sheets as our foundation, we fundamentally transformed our spreadsheet into a custom software application. The spreadsheet went from a partially automated datastore to a tool that provided an end-to-end automated solution, requiring only the click of a few buttons to execute.

Evolution into a standalone application

While satisfied, we understood that having our application live in Google Sheets had its limitations; namely, it was difficult for multiple team members to use the tool simultaneously. Using doGet(), the sibling of doPost(), we began building an HTML frontend to the Staffing Broadcast Tool. In addition to resolving the difficulties of having multiple users in a spreadsheet, this allowed us to build an easier-to-use, more responsive tool by leveraging Bootstrap & jQuery.
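The skeleton of that frontend is only a few lines; a sketch, assuming an index.html file lives in the same Apps Script project:

// Serves the tool's HTML frontend when the web app URL is visited.
function doGet() {
  return HtmlService.createHtmlOutputFromFile('index')
      .setTitle('Staffing Broadcast Tool');
}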

Having multiple users in a single Google Sheet can create conflicts, but Apps Script allowed us to build a responsive web app leveraging common libraries like Bootstrap & jQuery that eliminated those problems while providing an improved user experience.

When other teams at Gigster got wind of what we built, it was easy to grant access to others beyond the Staffing Operations Team. Since Apps Script is part of the G Suite developer ecosystem, we relied on Google’s security policies to help deploy our tools to larger audiences.

While this can be done through Google’s conventional sharing tools, it can also be done with built-in Apps Script functions like Session.getActiveUser(), which let us restrict access to specific Google users: in our case, those within our organization plus a few select users.
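A minimal sketch of such a check (the domain and address below are hypothetical placeholders):

var EXTRA_USERS = ['trusted.contractor@example.org']; // select outside users

function isAuthorized() {
  // Requires a deployment mode in which the user's email is visible.
  var email = Session.getActiveUser().getEmail();
  // Allow anyone in our organization, plus the listed outside users.
  return /@example\.com$/.test(email) || EXTRA_USERS.indexOf(email) !== -1;
}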

To this day, we continue to use this third version of the Staffing Broadcast Tool in our daily operations as it supports 100% of all client projects at Gigster.

Conclusion

By fundamentally transforming the Staffing Broadcast Tool with Apps Script, Gigster’s Staffing Operations Team increased its efficiency while supporting the growth of our company. Inspired by these business benefits, we applied this application-building approach using Apps Script for multiple tools, including candidate searching, new user onboarding, and countless automations.

Our team’s psychological shift about how we view what we are capable of, especially as a non-engineering team, has been the most valuable part of this journey. By leveraging an already familiar ecosystem to build our own software, we have freed team members to become more self-sufficient and valuable to our customers.

To get started on your Apps Script journey, we recommend you check out the Apps Script Fundamentals playlist and the official documentation. And if you’re a freelancer looking to build software applications for clients, we’re always looking for talented software engineers, product managers or designers to join Gigster’s Talent Network.

Thank you to Sandrine Bitton, the third member of the Staffing Operations Team, for all her help in the development of the Staffing Broadcast Tool.

Remember the $86 million license plate scanner I replicated?

I caught someone with it

A driver with a canceled registration, caught in action.

A few weeks ago, I published what I thought at the time was a fairly innocuous article: How I replicated an $86 million project in 57 lines of code.

I’ll admit — it was a rather click-bait claim. I was essentially saying that I’d reproduced the same license plate scanning and validating technology that the police in Victoria, Australia had just paid $86 million for.

Since then, the reactions have been overwhelming. My article received over 100,000 hits in the first day, and at last glance sits somewhere around 450,000. I’ve been invited to speak on local radio talk shows and at a conference in California. I think someone may have misread Victoria, AU as Victoria, BC.

Although I politely declined these offers, I have met for coffee with various local developers and big-name firms alike. It’s been incredibly exciting.

Most readers saw it for what it was: a proof of concept to spark discussion about the use of open source technology, government spending, and one man’s desire to build cool stuff from his couch.

Pedants have pointed out the lack of training, support, and usual enterprise IT cost padders, but it’s not worth anyone’s time exploring these. I’d rather spend this post looking at my results and how others can go about shoring up their own accuracy.

Before we get too deep into the results, I’d like to go over one thing that I feel was lost in the original post. The concept for this project started completely separate from the $86 million BlueNet project. It was by no means an attempt to knock it off.

It started with the nagging thought that since OpenCV exists and the VicRoads website has license plate checks, there must be a way to combine the two or use something better.

It was only when I began my write-up that I stumbled upon BlueNet. While discovering BlueNet and its price tag gave me a great editorial angle, the code was already written, so there were bound to be some inconsistencies between the two projects.

I also believe part of the reason this blew up was the convenient timing of a report on wasteful government IT spending in Australia. The Federal Government’s IT bill has shot up from $5.9 billion to $10 billion, and it delivered dubious value for that blowout. Media researchers who contacted me were quick to link the two, but this is not something I am quick to encourage.

A Disclaimer

In the spirit of transparency, I must declare something that was also missing from the original post. My previous employer delivered smaller (less than $1 million) IT projects for Victoria Police and other state bodies. As a result, I’ve undergone police checks and completed the forms required to become a VicPol contractor.

This may imply I have an axe to grind or some specific insider knowledge, but instead I am simply proud of the projects we delivered. They were both on time and on budget.

Visualizing the Results

The following is a video representation of my results, composited in After Effects for a bit of fun. I recorded various test footage, and this was the most successful clip.

I will go into detail about ideal camera setups, detection regions, and more after the video. It will help you better understand what made this iPhone video, taken through the windscreen, better than the Contour HD footage angled out the side window.

An Ethical Dilemma

If you saw the hero graphic of this article or watched the video above, you may have noticed a very interesting development: I caught someone.

Specifically, I caught someone driving a vehicle with a canceled registration from 2016. This could have happened for many reasons, the most innocent of which is a dodgy resale practice.

Occasionally, when the private sale of a vehicle is not done by the book, the buyer and seller may not complete an official transfer of registration. This saves the buyer hundreds of dollars, but the vehicle is still registered to the seller. It’s not unheard of for a seller to then cancel the registration and receive an ad hoc refund of remaining months, also worth hundreds of dollars.

Alternatively, the driver of the vehicle could well be the criminal we suspect they are.

So, although I jokingly named the project plate-snitch when I set it up on my computer, I’m now faced with the conundrum of whether to report what I saw.

Ultimately, the driver was detected using a prototype of a police-only device. But driving on a 2016 registration (canceled, not expired) is a very deliberate move. Hmm.

Back to the Results

Of the many reactions to my article, a significant number were quite literal and dubious. Since I said I replicated the software, they asserted that I must have a support center, warranties, and training manuals. One reader even attempted to replicate my results and hit the inevitable roadblocks of image quality and source material.

Because of this, some implied that I cherry-picked my source images. To that I can only say, “Well, duh.”

When I built my initial proof of concept (again, focusing on validating an idea, not replicating BlueNet), I used a small sample set of fewer than ten images. Since camera setup is one of the most important factors in ALPR, if not the most important, I selected them for ideal characteristics that enhance recognition.

At the end of the day, it is very simple to take a fragile proof of concept and break it. The true innovation and challenge come from taking a proof of concept and making it work. Throughout my professional career, many senior developers have told me that things can’t be done, or at least can’t be done in a timely manner. Sometimes they were right. Often, they were just risk-averse.

“Nothing is impossible until it is proven to be.”

Many people bastardize this quote, and you may have seen or heard one of its incarnations before. To me, it neatly summarizes a healthy development mindset, in which spiking and validating ideas is almost mandatory to understanding them.

Optimal ALPR Camera Setups

This project is so exciting and different for me because it has a clear success metric — whether the software recognizes the plate. This can only happen with a combination of hardware, software, and networking solutions. After posting my original article, people who sell ALPR cameras quickly offered advice.

Optical Zoom

The most obvious solution in hindsight is the use of an optical zoom. Though I explore other important factors below, none led to such a sheer increase in recognition as this. In general, professional ALPR solutions are offset at an angle, trained on where the license plate will be, and zoomed into the area to maximize clarity.

Simply put: the more zoom, the more pixels you have to play with.

All the cameras I had at my disposal had fixed lenses. They included:

  • A Contour HD action camera. These came out in 2009, and I use mine to record my cycling commute and to replay each week’s near-death experience.
  • A Fujifilm X100S (famously a fixed prime lens)
  • My iPhone 6+

The featured test run was recorded on my phone. My only method of replicating an optical zoom was using an app to record at 3K instead of 1080p, and then digitally zooming and cropping. Again, more pixels to play with.

Angle & Positioning

The viewing angle of 30° is often referenced as the standard for ideal plate recognition. This is incredibly important when you learn that BlueNet uses an array of cameras. It also makes sense when you consider what a front facing camera would generally see — not very much.

What a front facing ALPR camera sees — not much.

If I had to guess, I’d say a mostly forward-facing array would be the ideal setup. It would consist of a single camera pointed dead center as above, two off-center at 30° on each side, and a single rear-facing camera. The value in having most of the cameras pointed forward comes from the increased reaction time when a vehicle is traveling in the opposite direction. This would allow a quicker scan, process, and U-turn than if the rear-facing camera picked up a suspect vehicle already ten meters past the police vehicle.

A four camera array would need to be angled similar to this. Icons from Freepik.

A Gimbal

When compositing the video, I thought about stabilizing the footage. Instead, I opted to show the bumpy ride for what it was. What you saw was me holding my phone near the windscreen while my wife drove. Check out that rigorous scientific method.

Any production-ready version of a vehicle-mounted ALPR needs some form of stabilisation. Not a hand.

Other Important Factors

Frame Rate

Both the attempt to replicate my project and my recordings since then explored the same misconception: that a higher ALPR sampling frame rate leads to more success. In my experience, this did nothing but waste cycles. What is incredibly important, instead, is a shutter speed that produces clean, crisp footage to feed into the algorithm.

But I was also testing fairly low-speed footage. At most, two vehicles passing each other in a 60km/h zone created a 120km/h differential. BlueNet, on the other hand, can work up to an alleged 200km/h.

As a way of solving this, a colleague suggested object detection and out-of-band processing. Identify a vehicle and draw a bounding box. Wait for it to come into the ideal recognition angle and zoom. Then shoot a burst of photos for asynchronous processing.

I looked into using OpenCV (node-opencv) for object recognition, but I found that even something simpler, like face detection, took anywhere from 600–800 ms. Not only less than ideal for my use, but pretty poor in general.

Hype-train TensorFlow comes to the rescue. It can run on-device, and there are examples of projects identifying multiple vehicles per frame at an astounding 27.7 fps. This version could even expose speed estimations: legally worthless, but perhaps useful in everyday policing (there’s no fps benchmark in the readme, though).

To better explain how high-performance vehicle recognition could couple with slower ALPR techniques, I created another video in After Effects. I imagine that the two working hand-in-hand would look something like this:

Frame Rate vs Shutter Speed

Frame rate is also heavily influenced by shutter speed, and more specifically by the rolling shutter issues that plague early or low-end digital movie recorders. The following is a snapshot from some Contour HD footage. You can see that at only 60 km/h, the rolling shutter makes the footage more or less unusable from an ALPR point of view.

Rolling shutter issues on a Contour HD @ 60km/h.

Adjusting the frame rate on both the Contour HD and my iPhone did not result in noticeably less distortion. In theory, a higher shutter speed should produce clearer, crisper images. That would become increasingly important if you were to chase the 200km/h BlueNet benchmark. Less blur and less rolling shutter distortion would ideally lead to a better read.

Open ALPR Version

One of the more interesting discoveries was that the node-openalpr version I was using is both out-of-date and not nearly as powerful as OpenALPR’s proprietary solution. While an open source requirement was certainly a factor for me, it was amazing how the commercial cloud version could successfully read frames in which I couldn’t even identify a plate.

ALPR Country Training Data

I also found that the main node-openalpr package defaults to US plate processing with no way of overriding it. You have to pull down someone else’s fork, which then allows you to pass an extra country parameter.

Slimline Australian plates need their own separate country detection from regular Australian plates?

But this doesn’t always help. Using the default US algorithm, I was able to produce the most results. Specifying the Australian data set actually halved the number of successful plate reads, and it only managed to find one or two that the US algorithm couldn’t. Providing the separate “Australian Wide Plate” set again halved the count and introduced a single extra plate.

Australian data sets for ALPR clearly leave a lot to be desired, and I think the sheer number of plate styles available in Victoria is a contributing factor.

Good luck with that.

Planar Warps

Open ALPR comes with one particular tool to reduce the impact of distortion from both the camera angle and rolling shutter issues. Planar warp refers to a method in which coordinates are passed to the library to skew, translate, and rotate an image until it closely resembles a straight-on plate.

In my limited testing experience, I wasn’t able to find a planar warp that worked at all speeds. When you consider rolling shutter, it makes sense that the distortion grows relative to vehicle speed. I would imagine feeding accelerometer or GPS speed data as a coefficient might work. Or, you know, get a camera that isn’t completely rubbish.

The planar warp tool provided with Open ALPR

What others are doing in the industry

Numerous readers reached out after the last post to share their own experiences and ideas. Perhaps one of the more interesting solutions shared with me was by Auror in New Zealand.

They employ fixed ALPR cameras in petrol stations to report on people stealing petrol. This in itself is not particularly new or revolutionary. But coupled with their network, the system can automatically raise an alert when known offenders return, or when they are targeting petrol stations in the area.

Independent developers in Israel, South Africa, and Argentina have shown interest in building their own hacked-together versions of BlueNet. Some will probably fare better than others, as places like Israel use seven-digit license plates with no alphabetic characters.

Key Takeaways

There is simply too much that I’ve learned in the last few weeks of dabbling to fit into one post. While there have been plenty of detractors, I really do appreciate the support and knowledge that has been sent my way.

There are a lot of challenges you will face in trying to build your own ALPR solution, but thankfully a lot of them are solved problems.

To put things in perspective, I’m a designer and front-end developer. I’ve spent about ten hours now on footage and code, another eight on video production, and at least another ten on write-ups alone. I’ve achieved what I have by standing on the shoulders of giants: installing libraries built by intelligent people and leveraging advice from people who sell these cameras for a living.

The $86 million question still remains — if you can build a half-arsed solution that does an okay job by standing on the shoulders of giants, how much more money should you pour in to do a really, really good job?

My solution is not even in the same solar system as the 99.999% accurate scanner that some internet commenters seem to expect. But then again, BlueNet only has to meet a 95% accuracy target.

So if $1 million gets you to 80% accuracy, and maybe $10 million gets you to 90% accuracy — when do you stop spending? Furthermore, considering that the technology has proven commercial applications here in Oceania, how much more taxpayer money should be poured into a proprietary, closed-source solution when local startups could benefit? Australia is supposed to be an “innovation nation,” after all.



How I replicated an $86 million project in 57 lines of code

When an experiment with existing open source technology does a “good enough” job

The Victoria Police are the primary law enforcement agency of Victoria, Australia. With over 16,000 vehicles stolen in Victoria this past year — at a cost of about $170 million — the police department is experimenting with a variety of technology-driven solutions to crack down on car theft. They call this system BlueNet.

To help prevent fraudulent sales of stolen vehicles, there is already a VicRoads web-based service for checking the status of vehicle registrations. The department has also invested in a stationary license plate scanner — a fixed tripod camera which scans passing traffic to automatically identify stolen vehicles.

Don’t ask me why, but one afternoon I had the desire to prototype a vehicle-mounted license plate scanner that would automatically notify you if a vehicle had been stolen or was unregistered. Understanding that these individual components existed, I wondered how difficult it would be to wire them together.

But it was after a bit of googling that I discovered the Victoria Police had recently undergone a trial of a similar device, with an estimated rollout cost somewhere in the vicinity of $86,000,000. One astute commenter pointed out that the $86M cost to fit out 220 vehicles comes in at a rather thirsty $390,909 per vehicle.

Surely we can do a bit better than that.

Existing stationary license plate recognition systems

The Success Criteria

Before getting started, I outlined a few key requirements for product design.

Requirement #1: The image processing must be performed locally

Streaming live video to a central processing warehouse seemed the least efficient approach to solving this problem. Besides the whopping bill for data traffic, you’re also introducing network latency into a process which may already be quite slow.

Although a centralized machine learning algorithm is only going to get more accurate over time, I wanted to learn if a local, on-device implementation would be “good enough”.

Requirement #2: It must work with low quality images

Since I don’t have a Raspberry Pi camera or USB webcam, I’ll be using dashcam footage — it’s readily available and an ideal source of sample data. As an added bonus, dashcam video represents the overall quality of footage you’d expect from vehicle-mounted cameras.

Requirement #3: It needs to be built using open source technology

Relying upon proprietary software means you’ll get stung every time you request a change or enhancement — and the stinging will continue for every request made thereafter. Using open source technology is a no-brainer.

My solution

At a high level, my solution takes an image from a dashcam video, pumps it through an open source license plate recognition system installed locally on the device, queries the registration check service, and then returns the results for display.

The data returned to the device installed in the law enforcement vehicle includes the vehicle’s make and model (used only to verify whether the plates have been stolen), the registration status, and any notifications of the vehicle being reported stolen.

If that sounds rather simple, it’s because it really is. For example, the image processing can all be handled by the openalpr library.

This is really all that’s involved to recognize the characters on a license plate:
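Something to this effect will do the trick. This is a sketch that shells out to the OpenALPR command-line tool (assuming the alpr binary is installed and on your PATH) rather than going through the node-openalpr bindings:

// Recognize the plate in one dashcam frame via the OpenALPR CLI.
const { execFile } = require('child_process');

function recognizePlate(imagePath, callback) {
  // -c au: Australian plate training data; -j: JSON output
  execFile('alpr', ['-c', 'au', '-j', imagePath], (err, stdout) => {
    if (err) return callback(err);
    const results = JSON.parse(stdout).results; // candidate plate reads
    callback(null, results.map((r) => ({
      plate: r.plate,
      confidence: r.confidence,
    })));
  });
}

recognizePlate('frame-001.jpg', (err, plates) => {
  if (err) throw err;
  console.log(plates); // e.g. [ { plate: 'ABC123', confidence: 90.4 } ]
});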

A Minor Caveat
Public access to the VicRoads APIs is not available, so license plate checks occur via web scraping for this prototype. While scraping is generally frowned upon, this is a proof of concept, and I’m not slamming anyone’s servers.

Here’s what the dirtiness of my proof-of-concept scraping looks like:
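In spirit, something like this sketch; the endpoint and form field names below are invented placeholders, not the real VicRoads details:

// Hypothetical registration-check scraper (Node 18+ for global fetch).
async function checkRegistration(plate) {
  const res = await fetch('https://vicroads.example/registration-check', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ registrationNumber: plate }),
  });
  const html = await res.text();
  // Crude, brittle, proof-of-concept parsing: match on the status string.
  if (html.includes('Registration status: Current')) return 'current';
  if (html.includes('Registration status: Cancelled')) return 'cancelled';
  return 'unknown';
}

checkRegistration('ABC123').then(console.log);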

Results

I must say I was pleasantly surprised.

I expected the open source license plate recognition to be pretty rubbish. Additionally, the image recognition algorithms are probably not optimised for Australian license plates.

The solution was able to recognise license plates in a wide field of view.

Annotations added for effect. Number plate identified despite reflections and lens distortion.

However, the solution would occasionally have issues with particular letters.

Incorrect reading of plate, mistook the M for an H

But … the solution would eventually get them correct.

A few frames later, the M is correctly identified and at a higher confidence rating

As you can see in the above two images, processing the image a couple of frames later jumped from a confidence rating of 87% to a hair over 91%.

I’m confident, pardon the pun, that the accuracy could be improved by increasing the sample rate and then sorting by the highest confidence rating. Alternatively, a threshold could be set that accepts only a confidence greater than 90% before going on to validate the registration number.

Those are very straightforward, code-first fixes, and they don’t preclude training the license plate recognition software with a local data set.
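Either fix is only a few lines. A sketch of the threshold-and-sort approach over OpenALPR-style results:

// Keep only reads above the threshold, then take the most confident one.
const CONFIDENCE_THRESHOLD = 90;

function bestPlate(results) {
  return results
      .filter((r) => r.confidence >= CONFIDENCE_THRESHOLD)
      .sort((a, b) => b.confidence - a.confidence)[0] || null;
}

// bestPlate([{ plate: 'ABC123', confidence: 87.2 },
//            { plate: 'ABC123', confidence: 91.1 }])
// -> { plate: 'ABC123', confidence: 91.1 }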

The $86,000,000 Question

To be fair, I have absolutely no clue what the $86M figure includes — nor can I speak to the accuracy of my open source tool with no localized training vs. the pilot BlueNet system.

I would expect part of that budget includes the replacement of several legacy databases and software applications to support the high frequency, low latency querying of license plates several times per second, per vehicle.

On the other hand, the cost of ~$391k per vehicle seems pretty rich — especially if BlueNet isn’t particularly accurate and there are no large-scale IT projects to decommission or upgrade dependent systems.

Future Applications

While it’s easy to get caught up in the Orwellian nature of an “always on” network of license plate snitchers, there are many positive applications of this technology. Imagine a passive system scanning fellow motorists for an abductor’s car that automatically alerts authorities and family members to its current location and direction.

Tesla vehicles are already brimming with cameras and sensors with the ability to receive OTA updates — imagine turning these into a fleet of virtual good samaritans. Uber and Lyft drivers could also be outfitted with these devices to dramatically increase the coverage area.

Using open source technology and existing components, it seems possible to offer a solution that provides a much higher rate of return — for an investment much less than $86M.

Part 2 — I’ve published an update, in which I test with my own footage and catch an unregistered vehicle, over here:

Remember the $86 million license plate scanner I replicated? I caught someone with it.



How to deploy a custom domain with the Amplify Console

Add a custom domain to an Amplify Console deployment in just a couple of minutes — let’s take a look at how this works!

What is the Amplify Console?

The Amplify Console offers hosting for full-stack serverless web apps with continuous Git-based deployment. You connect your GitHub repo, click deploy, and the application is deployed to a live URL.

Built-in atomic deployments eliminate maintenance windows by ensuring that the web app is only updated when the entire deployment has finished.

If you are launching a project with an Amplify backend, the console will also give you the option of deploying and maintaining the Amplify project.

Adding a custom domain

After you’ve deployed an application, the next step is deploying your app to a custom domain purchased through a domain registrar such as GoDaddy or Google Domains.

When you initially deploy your web app with the Amplify Console, it is hosted at a location similar to this:

https://branch-name.d1m7bkiki6tdw1.amplifyapp.com

When you use a custom domain, users will be able to easily access your app from a vanity URL, such as the following:

https://www.myawesomedomain.com

Let’s learn how to do this!

Launching the app in the Amplify Console

If you already have an application launched in the Amplify Console, you can skip this step and go directly to the next step — adding the custom domain.

There is also a ready-made Gatsby blog to get you started quickly. Just click here, and then jump ahead to the next step, adding the custom domain.

To deploy an app already in your GitHub account, let’s launch a new application in the Amplify Console. The first step is to direct your browser to https://console.aws.amazon.com/amplify and click GET STARTED under the Deploy section.

Next, connect the Git repository you’d like to launch and select the branch, then click Next. Accept the default build settings, then click Save and deploy.

Now your application is launched, and we can move on to setting it up with a custom domain.

Adding the custom domain

In the AWS dashboard, go to Route53 & click on Hosted Zones. Choose Create Hosted Zone. From there, enter your domain name & click Create.

ProTip: Be sure to enter your domain name as is, without the www (e.g. myawesomedomain.com).

Now, in the Route53 dashboard, you should be given 4 nameservers.

In your hosting account (GoDaddy, Google Domains, etc.), set these custom nameservers in the DNS settings for the domain you’re using.

These nameservers should look something like ns-1355.awsdns-41.org, ns-1625.awsdns-11.co.uk, etc…

Next, in the Amplify Console, click Domain Management in the left menu, then click the Add Domain button.

Here, the dropdown menu should show you the domain you have in Route53. Choose this domain & click Configure domain.

This should deploy the app to your domain (this will take between 5–20 minutes). The last thing is to set up redirects. Click on Rewrites & redirects.

Make sure the redirect for the domain looks like this (i.e. redirect https://websitename to https://www.websitename):
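In the console’s Edit as JSON view, the rule takes roughly this shape (a sketch using the placeholder domain from earlier and a permanent 301 redirect, not an exact export):

[
  {
    "source": "https://myawesomedomain.com",
    "target": "https://www.myawesomedomain.com",
    "status": "301",
    "condition": null
  }
]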

That’s it! Once the DNS propagates, you should see your domain live at the URL that you have set up in the above steps.

My Name is Nader Dabit. I am a Developer Advocate at Amazon Web Services working with projects like AWS AppSync and AWS Amplify. I specialize in cross-platform & cloud-enabled application development.

