New AWS Services
Written by Kay Ewbank   
Tuesday, 14 April 2015

Amazon has announced Elastic File System, along with a Machine Learning service, at its AWS Summit in San Francisco.


Users of Amazon Web Services (AWS) can currently make use of Simple Storage Service (S3) for object storage, Elastic Block Store (EBS) for SAN-style block storage, or Glacier for archival storage.

As outlined by Andy Jassy in his announcement of Amazon Elastic File System at the 2015 AWS Summit, in the future these will be supplemented by a managed service that can support thousands of concurrent EC2 client connections, making it ideal for use cases that require on-demand scaling of file system capacity and performance.


Amazon EFS can scale to the petabyte level and provides multiple EC2 instances with low-latency, shared access to a fully managed file system via the NFSv4 (Network File System version 4) protocol.
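As a sketch of what NFSv4 access looks like in practice, a client instance would mount an EFS target much like any other NFS export. The file system DNS name and mount point below are placeholders, not details Amazon has published for the preview, and the command is printed rather than executed since it needs a live mount target:

```shell
# Hypothetical EFS mount target DNS name and local mount point
EFS_DNS="fs-12345678.efs.us-west-2.amazonaws.com"
MOUNT_POINT="/mnt/efs"

# Print the NFSv4 mount command an EC2 instance would run
echo "sudo mount -t nfs4 ${EFS_DNS}:/ ${MOUNT_POINT}"
```

Once mounted, the file system appears as an ordinary directory tree shared across every instance that mounts it.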

Writing about the new service on the AWS blog, Amazon’s Jeff Barr says the company expects:

“to see EFS used for content repositories, development environments, web server farms, home directories, and Big Data applications, to name just a few.”




The advantage of the SSD-based file systems is that they are highly available and highly durable, because files, directories, and links are stored redundantly across multiple Availability Zones within an AWS region. They can grow or shrink as needed, so there's no need to pre-provision capacity, and the cost will be 30 cents per GB per month for the amount of storage you've actually used. You'll be able to create file systems using the AWS Management Console, the AWS Command Line Interface (CLI), or a simple set of APIs. Security is handled via integration with AWS Identity and Access Management and Amazon VPC security groups.
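Because billing is metered rather than provisioned, estimating a monthly bill is a single multiplication. A minimal sketch, assuming the quoted 30-cents-per-GB rate:

```python
# Estimate a monthly EFS bill from metered usage (no pre-provisioning).
EFS_RATE_PER_GB = 0.30  # the 30 cents per GB quoted at launch

def efs_monthly_cost(gb_used: float) -> float:
    """Cost in dollars for the storage actually consumed."""
    return gb_used * EFS_RATE_PER_GB

# 500 GB of files, directories, and links comes to $150 a month
print(efs_monthly_cost(500))  # 150.0
```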

EFS is initially being previewed in the AWS Oregon region, and you can request access to the preview.

The introduction of Amazon Machine Learning was also announced at the AWS Summit. This is a new AWS service that you can use to create predictions based on your data. The idea is that you build and fine-tune predictive models using large amounts of data, then use Amazon Machine Learning to make predictions in batch mode or in real time. What Amazon says is different about its service is that it can be used by developers whether or not they have experience in building and using machine learning models.



Amazon has been using machine learning in its online sales business, both to drive the recommendations made to customers and for fraud detection. The experience gained in these areas has enabled the company to develop tools that simplify the task of creating machine learning models. The new service can automatically pull data from S3, Amazon Redshift, or MySQL databases hosted on the AWS Relational Database Service, run that data through machine learning algorithms, and create predictive models based on it.

Developers who have no experience in statistics, data analysis, or machine learning can still create the models using the AWS Management Console or the service's APIs.

The service will be charged based on the number of computing hours required to analyze the data and build the models, along with the number of predictions produced. Batch predictions cost 10 cents per 1,000, and real-time predictions are $0.0001 per prediction.
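Those two rates work out to the same unit price (10 cents per 1,000 is also $0.0001 each), so the compute hours are where bills will differ. A quick arithmetic check at the quoted launch rates:

```python
# Prediction costs at the launch rates quoted above.
BATCH_RATE_PER_1000 = 0.10   # 10 cents per 1,000 batch predictions
REALTIME_RATE_EACH = 0.0001  # per real-time prediction

def batch_cost(n: int) -> float:
    """Dollar cost of n batch predictions."""
    return n / 1000 * BATCH_RATE_PER_1000

def realtime_cost(n: int) -> float:
    """Dollar cost of n real-time predictions."""
    return n * REALTIME_RATE_EACH

# A million predictions comes to roughly $100 either way at these unit rates
print(batch_cost(1_000_000), realtime_cost(1_000_000))
```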




