
Artificial Intelligence

How Machine Learning and the Cloud Can Rescue IT From the Plumbing Business

from Amazon Web Services (AWS)

By Andrew Barbour     Feb 19, 2019


Many educational institutions maintain their own data centers. But to Jeff Olson, chief data officer and senior VP of technology strategy at the College Board, all those humming racks of servers are just plumbing—and he doesn't want to be in the plumbing business. He would rather focus on how the College Board, which administers the PSAT, SAT, and Advanced Placement Tests, can help students reach their educational goals. "We need to minimize the amount of work we do to keep systems up and running, and spend more energy innovating on things that matter to people," he says.

That's why the College Board has pulled the plug on much of its IT plumbing in favor of the advanced capabilities offered by the cloud. These capabilities go far beyond simple cloud storage to encompass machine learning and new computing environments that allow the not-for-profit organization to identify opportunities and accelerate program development.

EdSurge spoke recently with Olson about how machine learning and these new advances in cloud computing are helping the College Board support students on their path to college.

EdSurge: First off, what's the difference between machine learning (ML) and artificial intelligence (AI)?

Jeff Olson: That's actually the setup for a joke going around the data science community. The punchline? If it's written in Python or R, it's machine learning. If it's written in PowerPoint, it's AI.

As I see it, machine learning is in practical use in a lot of places, whereas AI conjures up all these fantastic thoughts in people.

So what are some of the practical uses for machine learning at the College Board?

Some years ago, we were trying to figure out when to offer the SAT. The College Board had always offered the SAT at the same times of the year, but we wondered if it might be time to make a change. So we used Amazon Web Services (AWS) machine learning services including Amazon SageMaker to generate and then analyze many millions of calendars—every permutation you could think of. It recommended that we create a new test date in August, which seemed inconvenient for students because school is out then. We created an August date anyway, and it proved so popular that there weren't enough seats the first year. In our nearly 100 years of doing the SAT, we had never considered August.
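The real analysis ran on SageMaker at far larger scale, but the shape of the search can be sketched in a few lines of Python. The demand model below is a made-up stand-in for illustration, not the College Board's actual method:

```python
from datetime import date, timedelta

def saturdays_in_year(year):
    """Enumerate every Saturday of a year as a candidate test date."""
    d = date(year, 1, 1)
    d += timedelta(days=(5 - d.weekday()) % 7)  # advance to the first Saturday
    while d.year == year:
        yield d
        d += timedelta(days=7)

def rank_test_dates(year, demand, k=7):
    """Score each candidate date with a demand model and keep the top k.
    `demand` stands in for the real analysis of registration patterns."""
    return sorted(saturdays_in_year(year), key=demand, reverse=True)[:k]

# Toy demand model: favor late-summer dates before school starts.
top = rank_test_dates(2019, demand=lambda d: 1.0 if d.month == 8 else 0.5)
```

Enumerating every permutation and letting the scores, rather than tradition, pick the dates is what surfaced an option like August that no one had considered.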

Another example is something I call error forgiveness. I'm not talking about errors in the test questions themselves, but errors in responses to administrative questions. For instance, it's very common for people to accidentally put the current year as their birth year. Or, at the beginning of the year, they will write the date using the previous year. Instead of having these errors cause problems for students when they show up for the test, we can catch them earlier by using AWS to notice the pattern and fix the problem.
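Rules like these can be only a few lines each. This sketch is my own illustration of the two slips described above, not the College Board's actual validation logic:

```python
from datetime import date

def suspect_birth_year(birth_year, today):
    """Flag a birth year that is almost certainly a slip: the current
    year, or an implied age outside a plausible test-taker range."""
    age = today.year - birth_year
    return birth_year == today.year or not (10 <= age <= 100)

def fix_stale_year(entered, today):
    """Early in a new year, people often still write last year's date.
    If the entered date is about a year behind, assume that slip."""
    if today.month == 1 and entered.year == today.year - 1 and entered.month == 1:
        return entered.replace(year=today.year)
    return entered
```

Catching these patterns at registration time, rather than at the test center door, is the whole point of error forgiveness.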

These are unsexy examples, but it's important to talk about grounded applications if only to get away from the perception that machine learning is going to produce this "everything" machine. One area where we are doing something a bit radical, though, is in serverless architecture.

What is serverless architecture, and why are you excited about it?

Instead of having a machine running all the time, you run only the code necessary to do what you want; there is no persisting server or container. There is only this fleeting moment when the code is being executed. It's called Function as a Service (FaaS), and AWS pioneered it with a service called AWS Lambda. It allows an organization to scale up without planning ahead.
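A Lambda function is just a handler that runs on demand. This minimal example uses the standard Python handler signature Lambda invokes (the greeting logic is mine, purely for illustration):

```python
def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes per request. No server runs
    between invocations, and you pay only for execution time."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Because each invocation is independent, Lambda can run one copy or ten thousand copies of this handler without any capacity planning on your part.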

But there's actually something more radical than serverless architecture, which some people are calling radically cloud native. It refers to the ability to put together applications using the managed services provided by AWS.

At the risk of getting lost in the tech weeds here, what kind of managed services are we talking about?

Everyone needs the same basic services to build software: database, authentication, networking and load balancing, logging and log analysis. AWS saw that it could make these services available and connect them together via API (Application Programming Interface). The whole premise of using managed services—as opposed to servers or containers—is that you don't recapitulate work that isn't important to your customer.

At the College Board, for instance, our cloud-native architecture is a homegrown combination of API-connected AWS services. We call it Catapult, but it makes heavy use of Amazon Cognito, which is a device-agnostic service for login and authentication; Amazon S3, its Simple Storage Service; and Amazon DynamoDB, its NoSQL database service. Together, they make building a new application incredibly simple. They allow us to focus on the value of the software that we're delivering to users.
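In production, a flow like this would call the AWS SDK directly (for example, boto3's `Table.put_item` and `put_object`). This sketch passes the service clients in as parameters so the composition is visible; the table and bucket names are hypothetical, not Catapult's actual schema:

```python
import json

def register_student(student, table, s3, bucket="catapult-demo"):
    """Compose two managed services instead of running servers:
    store the record in DynamoDB and archive the raw payload in S3.
    `table` and `s3` are boto3-style clients, e.g.
    boto3.resource("dynamodb").Table("Students") and boto3.client("s3")."""
    table.put_item(Item=student)                # durable, queryable record
    s3.put_object(Bucket=bucket,
                  Key=f"registrations/{student['id']}.json",
                  Body=json.dumps(student))     # raw archive of the payload
    return student["id"]
```

The application code is only the glue between services; durability, replication, and scaling all live inside the managed services themselves.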

Do you have an example of how the College Board used this radically cloud-native approach?

We recently used Catapult to develop a new program called the College Board Opportunity Scholarship, which will award $25 million to students over five years. The idea is to give students an incentive to complete the various tasks needed to navigate the transition from high school to college, like taking the SAT, making and refining a college list, filling out the FAFSA aid form and actually applying to college. For each step they complete, students earn a chance to win a scholarship, and students who complete all six steps qualify to win $40,000.

We knew that we were going to announce the program in the fall and launch it on December 11, which posed two problems. First was the tight development timeline, but we were able to build something really robust in just six months. Without Catapult, it would have taken much longer and we would have lost a year. The Class of 2020 would have missed out on the benefits of the scholarships entirely.

The product was also timed to launch on the same day we release PSAT scores to roughly five million students. So we knew that the program was going to see an enormous amount of usage on debut, which is another reason why we decided to build the program in this radically cloud-native way. It needed to be able to scale in the same way Netflix can scale. If every one of those five million students had come at the same time, we still would have been able to withstand it. This architecture has a real scale advantage.

I know you hit your launch deadline, but was your team's hair on fire the whole way?

Whenever you launch something new, there's always a rush to the deadline—you expect people to be pulling late nights at the end. But nobody had to work on the weekend before launch, which is unheard of in software development. The team took a lot of pride in that. We were also able to include a lot more features than would have been possible otherwise and to test it heavily on a wide variety of mobile devices. To our knowledge, we launched with no bugs. Those are all great benefits.

How do you think machine learning and Function as a Service will impact higher education in general?

The radical nature of this innovation will make a lot of systems that were built five or 10 years ago obsolete. Once an organization comes to grips with Function as a Service (FaaS) as a concept, it's a pretty simple step for that institution to stop doing its own plumbing. FaaS will help accelerate innovation in education because of the API economy.

If the campus IT department will no longer be taking care of the plumbing, what will its role be?

I think IT will be curating the interoperation of services, some developed locally but most purchased from the API economy. It's happening already. You don't build your own payment processing, you use Stripe or Braintree. You don't build your own messaging service, you use Twilio. You don't build your own identity and authentication service, you use Cognito.

As a result, you write far less code and have fewer security risks, so you can innovate faster. A succinct machine-learning algorithm with fewer than 500 lines of code can now replace an application that might have required millions of lines. It also scales: if you happen to have a gigantic spike in traffic, it deals with it effortlessly, and if you have very little traffic, you incur a negligible cost.

Jeff Olson on Insomnia Reading and His Podcast Diet

"I read for two to three hours every day in the early morning, starting around 4 am,” says Jeff Olson. “In my family, we call this Insomnia Reading.” He reads up on software architecture, data science, and other interests and keeps a few curated Twitter lists to stay current. “I also love to save articles to the magnificent Voice Dream Reader iOS application and then have it read them aloud to me while I'm walking or exercising.”

Olson says he’s also “a podcast omnivore” and shares the highlights of his tech diet:

  • a16z, which covers technology, cultural trends and news, with an emphasis on “software that eats the world”
  • AWS Podcast, featuring AWS news and tech tips, as well as interviews with AWS partners and startups
  • Data Journeys, which is aimed at aspiring data scientists looking to make an impact in the world
  • Data Skeptic, a weekly podcast explaining complex data science concepts
  • DataFramed, which examines data science through the prism of the problems it can help solve
  • Linear Digressions, covering machine learning and data science
  • Not So Standard Deviations, which discusses the latest happenings in data science in academia and industry
  • Software Engineering Radio, a podcast for professional software developers
