Amazon Web Services adds new features to SageMaker and makes its AI-powered transcription services generally available

Matt Wood, AWS’s general manager for deep learning and AI. (AWS photo)

Developers using Amazon SageMaker to build machine-learning applications will now be able to take advantage of new computing resources and experiment with their models on local machines, Amazon Web Services announced Wednesday at the AWS Summit in San Francisco.

SageMaker, introduced last year at AWS re:Invent 2017, is a cloud service that makes it easier for data scientists to build their models without calling the IT department every few minutes. It also allows developers who lack hardcore data science skills to experiment with machine-learning models in their applications. With Wednesday's update, both groups will be able to run those models on almost all of the instance types offered by AWS.

AWS is also open-sourcing the containers it built for two popular machine-learning frameworks, MXNet and TensorFlow. This will allow developers to customize those containers to better suit their own models and run them on local machines, rather than spending a bunch of money tweaking those models in the cloud.
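For developers curious what running a model locally looks like in practice, here is a minimal sketch using the SageMaker Python SDK's "local mode," where setting the instance type to "local" runs the framework container on your own machine via Docker. The entry-point script name, IAM role, framework versions, and fallback instance type below are placeholders, not details from the announcement.

```python
def estimator_config(local=True):
    # SageMaker "local mode": the instance type "local" runs the
    # framework training container on your own machine via Docker,
    # instead of launching (and paying for) a cloud training instance.
    return {"instance_type": "local" if local else "ml.m5.xlarge",
            "instance_count": 1}

def train_locally(script="train.py"):
    # Not invoked here: requires the `sagemaker` SDK, Docker, and an
    # AWS IAM role. Script name, role, and versions are placeholders.
    from sagemaker.tensorflow import TensorFlow
    estimator = TensorFlow(entry_point=script,
                           role="SageMakerRole",
                           framework_version="2.12",
                           py_version="py310",
                           **estimator_config(local=True))
    # In local mode, training data can come from the local filesystem
    # (a file:// URI) rather than S3.
    estimator.fit("file://./data")
```

Because the same estimator object accepts either a local or a cloud instance type, a developer can iterate on a model locally and then switch to a cloud instance for the full training run.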

“The dirty secret of machine learning is right now, it’s kind of like a primordial soup,” said Matt Wood, general manager of deep learning and AI for AWS, on stage in San Francisco. Most developers are just getting started with the technology, and different types of datasets require different machine-learning approaches to get the best results.

AWS also announced that two of its AI-powered cloud services — Amazon Transcribe and Amazon Translate — are now generally available for its customers to use speech-to-text and language translation services in their apps. AI-powered services continue to be a big competitive area for cloud computing this year, fresh off new services announced by Google last week and ahead of Microsoft’s big Build developer conference in May.
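Both services are exposed through AWS's standard APIs, so calling them from an app is a few lines of code. The sketch below builds request parameters for Translate's TranslateText and Transcribe's StartTranscriptionJob calls; the job name, S3 audio location, and media format are hypothetical, and the actual service call requires boto3 and AWS credentials.

```python
def translate_params(text, source="auto", target="en"):
    # Request parameters for Amazon Translate's TranslateText API;
    # "auto" asks the service to detect the source language.
    return {"Text": text,
            "SourceLanguageCode": source,
            "TargetLanguageCode": target}

def transcribe_params(job_name, media_uri, language="en-US"):
    # Request parameters for Amazon Transcribe's StartTranscriptionJob
    # API; media_uri is a hypothetical S3 location for the audio file.
    return {"TranscriptionJobName": job_name,
            "LanguageCode": language,
            "MediaFormat": "mp3",
            "Media": {"MediaFileUri": media_uri}}

def run_example():
    # Not invoked here: requires boto3 installed plus AWS credentials.
    import boto3
    translate = boto3.client("translate")
    result = translate.translate_text(**translate_params("Bonjour le monde"))
    return result["TranslatedText"]
```

Transcription jobs run asynchronously — the app starts a job against audio stored in S3 and polls for the finished transcript — while translation returns its result in the API response.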