
Tech News


Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018

Amrata Joshi
27 Nov 2018
3 min read
On day 1 of AWS re:Invent 2018, the team at Amazon released AWS Amplify Console, a continuous deployment and hosting service for mobile web applications. The AWS Amplify Console helps avoid downtime during application deployment and simplifies the deployment of an application's frontend and backend.

Features of AWS Amplify Console

Simplified continuous workflows
By connecting AWS Amplify Console to the code repository, the frontend and backend are deployed in a single workflow on every code commit. The web application is updated only after the deployment completes successfully, eliminating inconsistencies between the application's frontend and backend.

Easy access
AWS Amplify Console makes building, deploying, and hosting mobile web applications easier, and lets users access features faster.

Easy custom domain setup
One can set up custom domains managed in Amazon Route 53 with a single click and also get a free HTTPS certificate. If one manages the domain in Amazon Route 53, the Amplify Console automatically connects the root domain, subdomains, and branch subdomains.

Globally available
The apps are served via Amazon's reliable content delivery network with 144 points of presence globally.

Atomic deployments
In AWS Amplify Console, atomic deployments eliminate maintenance windows and the scenarios where files fail to upload properly.

Password protection
The Amplify Console comes with password protection for the web app, so one can easily work on new features without making them publicly accessible.

Branch deployments
With Amplify Console, one can work on new features without impacting production. Users can also create branch deployments linked to each feature branch.

Other features
The Amplify Console automatically detects the frontend build settings, along with any backend functionality provisioned with the Amplify CLI, when connected to a code repository.
With AWS Amplify Console, users can easily manage production and staging environments for the frontend and backend by connecting new branches.
With AWS Amplify Console, one gets screenshots of the app rendered on different mobile devices to highlight layout issues.
Users can now set up rewrites and redirects to maintain SEO rankings.
Users can build web apps with static and dynamic functionality.
One can deploy static site generators (SSGs) with free SSL on the AWS Amplify Console.

Check out the official announcement to know more about AWS Amplify Console.

Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
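The atomic-deployment idea the article describes can be pictured in a few lines of Python: every file of a release is written to a fresh directory, and a single pointer swap makes the release visible only once all files have landed, so visitors never see a half-uploaded site. This is a generic illustration of the technique, not Amplify's actual implementation.

```python
import os
import tempfile

def atomic_deploy(release_files: dict, site_root: str) -> str:
    """Write every file of a release to a fresh directory, then flip a
    'current' symlink in one step. Until the swap, the old release stays live."""
    release_dir = tempfile.mkdtemp(prefix="release-", dir=site_root)
    for relpath, content in release_files.items():
        path = os.path.join(release_dir, relpath)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(content)
    # The swap itself: build a new symlink, then atomically rename it over
    # the old one. rename() is atomic on POSIX filesystems, so readers see
    # either the complete old release or the complete new one.
    tmp_link = os.path.join(site_root, ".current-tmp")
    current = os.path.join(site_root, "current")
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(release_dir, tmp_link)
    os.replace(tmp_link, current)
    return release_dir
```

Because the rename either happens entirely or not at all, a failed upload simply leaves the previous release serving, which is the "no maintenance window" property the announcement highlights.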


Day 1 at the Amazon re:Invent conference - AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!

Melisha Dsouza
27 Nov 2018
6 min read
Looks like Christmas has come early this year for AWS developers! Following Microsoft's Surface devices and its own wide range of Alexa products, Amazon has once again made a series of big releases at the Amazon re:Invent 2018 conference. These announcements include AWS RoboMaker to help developers test and deploy robotics applications, AWS Transfer for SFTP (a fully managed SFTP service for Amazon S3), EC2 A1 instances powered by Arm-based AWS Graviton processors, Amazon EC2 C5n instances featuring 100 Gbps of network bandwidth, and much more! Let's take a look at what developers can expect from these releases.

#1 AWS RoboMaker helps developers develop, test, and deploy robotics applications at scale
AWS RoboMaker allows developers to develop, simulate, test, and deploy intelligent robotics applications at scale. Code can be developed inside a cloud-based development environment and tested in a Gazebo simulation. Finally, the finished code can be deployed to a fleet of one or more robots. RoboMaker uses an open-source robotics software framework, Robot Operating System (ROS), with connectivity to cloud services. The service suite includes AWS machine learning, monitoring, and analytics services that enable a robot to stream data, navigate, communicate, comprehend, and learn. RoboMaker can work with robots of many different shapes and sizes running in many different physical environments. After a developer designs and codes an algorithm for the robot, they can also monitor how the algorithm performs in different conditions or environments. You can check an interesting simulation of a robot using RoboMaker at the AWS site. To learn more about ROS, read The Open Source Robot Operating System (ROS) and AWS RoboMaker.
#2 AWS Transfer for SFTP – Fully Managed SFTP Service for Amazon S3
AWS Transfer for SFTP is a fully managed service that enables the direct transfer of files to and from Amazon S3 using the Secure File Transfer Protocol (SFTP). Users just have to create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. AWS allows users to migrate their file transfer workflows to AWS Transfer for SFTP by integrating with existing authentication systems and providing DNS routing with Amazon Route 53. Along with AWS services, a customer's data in S3 can be used for processing, analytics, machine learning, and archiving. Along with control over user identity, permissions, and keys, users will have full access to the underlying S3 buckets and can make use of many different S3 features including lifecycle policies, multiple storage classes, several options for server-side encryption, versioning, etc. On the outbound side, users can generate reports, documents, manifests, custom software builds, and so forth using other AWS services, and then store them in S3 for eventual, controlled distribution to customers and partners.

#3 EC2 Instances (A1) powered by Arm-based AWS Graviton processors
Amazon has launched EC2 instances powered by Arm-based AWS Graviton processors, which are built around Arm cores. The A1 instances are optimized for performance and cost and are a great fit for scale-out workloads where the load has to be shared across a group of smaller instances. This includes containerized microservices, web servers, development environments, and caching fleets. AWS Graviton processors are custom designed by AWS and deliver targeted power, performance, and cost optimizations. A1 instances are built on the AWS Nitro System, which maximizes resource efficiency for customers while still supporting familiar AWS and Amazon EC2 instance capabilities such as EBS, networking, and AMIs.
#4 Amazon EC2 C5n instances featuring 100 Gbps of network bandwidth
AWS announced the availability of C5n instances that can utilize up to 100 Gbps of network bandwidth, providing significantly higher network performance across all instance sizes, ranging from 25 Gbps of peak bandwidth on smaller instance sizes to 100 Gbps on the largest. They are powered by 3.0 GHz Intel Xeon Scalable (Skylake) processors and support the Intel Advanced Vector Extensions 512 (AVX-512) instruction set. These instances also feature a 33% higher memory footprint compared to C5 instances and are ideal for applications that can take advantage of improved network throughput and packet rate performance. Based on the next-generation AWS Nitro System, C5n instances make 100 Gbps networking available to network-bound workloads. Workloads on C5n instances take advantage of the security, scalability, and reliability of Amazon's Virtual Private Cloud (VPC). The improved network performance will accelerate data transfer to and from S3, reducing the data ingestion wait time for applications and speeding up delivery of results.

#5 AWS Global Accelerator
AWS Global Accelerator is a network layer service that enables organizations to seamlessly route traffic to multiple regions while improving availability and performance for their end users. It supports both TCP and UDP protocols, and performs health checks of a user's target endpoints while routing traffic away from unhealthy applications. AWS Global Accelerator uses AWS' global network to direct internet traffic from an organization's users to their applications running in AWS Regions, based on a user's geographic location, application health, and configurable routing policies. You can head over to the AWS blog to get an in-depth view of how this service works.
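The routing decision Global Accelerator makes (prefer healthy endpoints, then pick the closest one to the user) can be illustrated with a toy sketch. The region names, health states, and distance figures below are made-up example data, not AWS APIs or real measurements.

```python
# Illustrative sketch of health-aware, proximity-based endpoint selection.
def pick_endpoint(user_region, endpoints, health, distance):
    """endpoints: list of region names; health: region -> bool;
    distance: (user_region, region) -> numeric cost (lower is closer)."""
    healthy = [r for r in endpoints if health.get(r, False)]
    if not healthy:
        raise RuntimeError("no healthy endpoints available")
    # Among healthy endpoints, route to the one nearest the user.
    return min(healthy, key=lambda r: distance[(user_region, r)])

regions = ["us-east-1", "eu-west-1", "ap-south-1"]
health = {"us-east-1": False, "eu-west-1": True, "ap-south-1": True}
distance = {("paris", "us-east-1"): 60, ("paris", "eu-west-1"): 5,
            ("paris", "ap-south-1"): 70}

# us-east-1 is unhealthy, so a user in Paris is routed to eu-west-1.
print(pick_endpoint("paris", regions, health, distance))  # eu-west-1
```

The real service applies the same two-stage logic continuously, shifting traffic away as soon as a health check fails.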
#6 Amazon's 'Machine Learning University'
In addition to these announcements at re:Invent, Amazon also released a blog post introducing its 'Machine Learning University', announcing that the same machine learning courses used to train engineers at Amazon are now available to all developers through AWS. These courses, available as part of a new AWS Training and Certification Machine Learning offering, will help organizations accelerate the growth of machine learning skills among their employees. With more than 30 self-service, self-paced digital courses and over 45 hours of courses, videos, and labs, developers can rest assured that ML fundamentals, real-world examples, and labs will help them explore the domain. What's more? The digital courses are available at no charge, and developers only pay for the services used in labs and exams during their training. This announcement came right after Amazon Echo Auto was launched at Amazon's hardware event. In what Amazon describes as 'Alexa to vehicles', the Amazon Echo Auto is a small dongle that plugs into the car's infotainment system, giving drivers the smart assistant and voice control for hands-free interactions. Users can ask for things like traffic reports, add products to shopping lists, and play music through Amazon's entertainment system. Head over to What's New with AWS to stay updated on upcoming AWS announcements.

Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS


Introducing TigerGraph Cloud: A database as a service in the Cloud with AI and Machine Learning support

Savia Lobo
27 Nov 2018
3 min read
Today, TigerGraph, the world's fastest graph analytics platform for the enterprise, introduced TigerGraph Cloud, the simplest, most robust, and most cost-effective way to run scalable graph analytics in the cloud. With TigerGraph Cloud, users can easily get their TigerGraph services up and running. They can also tap into TigerGraph's library of customizable graph algorithms to support key use cases including AI and machine learning. It provides data scientists, business analysts, and developers with a cloud-based service for applying SQL-like queries for faster and deeper insights into data, and enables organizations to tap into the power of graph analytics within hours.

Features of TigerGraph Cloud

Simplicity
It forgoes the need to set up, configure, or manage servers, schedule backups or monitoring, or look for security vulnerabilities.

Robustness
TigerGraph Cloud relies on the same framework, providing the point-in-time recovery, powerful configuration options, and stability that TigerGraph has used for its own workloads over several years.

Application starter kits
It offers out-of-the-box starter kits for quicker application development for use cases such as anti-fraud, anti-money laundering (AML), Customer 360, enterprise graph analytics, and more. These starter kits include graph schemas, sample data, preloaded queries, and a library of customizable graph algorithms (PageRank, Shortest Path, Community Detection, and others). TigerGraph makes it easy for organizations to tailor such algorithms for their own use cases.

Flexibility and elastic pricing
Users pay for exactly the hours they use and are billed on a monthly basis. They can spin up a cluster for a few hours for minimal cost, or run larger, mission-critical workloads with predictable pricing. The new cloud offering will be available for production on AWS, with other cloud availability forthcoming.
Yu Xu, founder and CEO, TigerGraph, said, "TigerGraph Cloud addresses these needs, and enables anyone and everyone to take advantage of scalable graph analytics without cloud vendor lock-in. Organizations can tap into graph analytics to power explainable AI - AI whose actions can be easily understood by humans - a must-have in regulated industries. TigerGraph Cloud further provides users with access to our robust graph algorithm library to support PageRank, Community Detection and other queries for massive business advantage."

Philip Howard, research director, Bloor Research, said, "What is interesting about TigerGraph Cloud is not just that it provides scalable graph analytics, but that it does so without cloud vendor lock-in, enabling companies to start immediately on their graph analytics journey."

According to TigerGraph, "Compared to TigerGraph Cloud, other graph cloud solutions are up to 116x slower on two hop queries, while TigerGraph Cloud uses up to 9x less storage. This translates into direct savings for you."

New marquee customers
TigerGraph also announced the addition of new customers including Intuit, Zillow, and PingAn Technology, among other leading enterprises in cybersecurity, pharmaceuticals, and banking.

To know more about TigerGraph Cloud in detail, visit its official website.

MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name 'Open Infrastructure Summit'
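TigerGraph's performance claim is framed in terms of "two hop queries": find everything reachable within one or two edges of a starting vertex. The traversal is easy to picture with a toy in-memory sketch; a real deployment would express this in GSQL against the cloud service, and the graph below is made-up data.

```python
# Toy two-hop traversal over an adjacency-set graph.
def two_hop(graph, start):
    """graph: node -> set of neighbour nodes. Returns every node within
    two hops of start, excluding start itself."""
    one_hop = graph.get(start, set())
    reachable = set(one_hop)
    for node in one_hop:
        reachable |= graph.get(node, set())   # second hop
    reachable.discard(start)
    return reachable

g = {"alice": {"bob"}, "bob": {"carol", "dave"}, "carol": {"alice"}}
print(sorted(two_hop(g, "alice")))  # ['bob', 'carol', 'dave']
```

On a real graph with millions of vertices, the fan-out at the second hop is what makes this query class expensive, which is why it is a common graph-database benchmark.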


Gradle 5.0 released with faster builds, incremental Java compilation, and annotation processing

Amrata Joshi
27 Nov 2018
4 min read
The team at Gradle has now released Gradle 5.0, following Gradle 4.9, which was released in July this year. Gradle 5.0 is faster, safer, and more capable than previous versions. Gradle is a build tool that accelerates developer productivity by helping teams build, automate, and deliver software faster. The tool focuses on build automation and support for multi-language development.

Improvements in Gradle 5.0

Gradle 5.0 comes with incremental compilation and annotation processing to enhance caching and up-to-date checking. It also brings features such as the Kotlin DSL, dependency version alignment, version locking, task timeouts, Java 11 support, and more. The Kotlin DSL helps IDE users with code completion and refactoring.

Faster builds with the build cache
Users can experience faster builds the moment they upgrade to Gradle 5.0. Gradle 5.0 allows developers to build only what is needed by using the build cache and incremental processing features. The build cache reuses the results of previous executions and makes the process faster; it can reduce build time by approximately 90%.

Incremental Java compilation and annotation processing
Gradle 5.0 features an incremental compiler, so CompileJava tasks no longer need to recompile all source files except on the first run. This compiler is the default in this version and is highly optimized. It also supports incremental annotation processing, which increases the effectiveness of incremental compilation in the presence of annotation processors. Users have to upgrade to the latest versions of their annotation processors to benefit from this. The new annotationProcessor configuration is used to manage annotation processors and put them on the annotation processor path.

Fine-grained transitive dependency management
Gradle 5.0 comes with new features for customizing dependencies and improved POM and BOM support. It supports dependency constraints, which define versions or version ranges to restrict direct and transitive dependency versions. In this version, platform definitions or Maven BOM dependencies are natively supported, which allows the use of the Spring Boot platform definition without an external plugin. Dependency alignment aligns the modules in a logical group, and dynamic dependency versions can now be locked for better build reproducibility. This version can also import bill of materials (BOM) files.

Writing Gradle build logic
Users can now write Gradle build scripts in Kotlin. Static typing in Kotlin allows tools to provide better IDE assistance to users.

More memory-efficient Gradle execution
Lower memory requirements and cache cleanup reduce Gradle's overhead on the system. In Gradle 5.0, many caching mechanisms have been optimized to reduce the default memory footprint of Gradle processes.

New Gradle invocation options
This version supports JUnit 5 (JUnit Platform, JUnit Jupiter, and JUnit Vintage), which helps enable test grouping and filtering. For non-interactive environments like continuous integration, tasks group their log messages. It is now easy to identify whether a test has failed, as the rich command-line console shows a colored build status. One can now work on interdependent projects with the help of composite builds in Gradle 5.0. This release also supports custom arguments, which make running Java applications faster and easier.

New Gradle task and plugin APIs
This version of Gradle features a new Worker API for safe parallel and asynchronous execution. Gradle 5.0's new configuration avoidance APIs allow Gradle to avoid configuring tasks that are not needed for a build. The task timeout API helps specify a timeout duration for a task, after which it will be interrupted. Custom CLI args in Gradle 5.0 help users configure their custom tasks.

To know more about Gradle 5.0, check out Gradle's official blog.

Gradle 4.9 released!
Android Studio 3.2 Beta 5 out, with updated Protobuf Gradle plugin
Setting Gradle properties to build a project [Tutorial]


Facebook AI Research and NYU School of Medicine announce new open-source AI models and MRI dataset as part of their fastMRI project

Natasha Mathur
27 Nov 2018
3 min read
Facebook AI Research (FAIR) and NYU School of Medicine announced yesterday that they're releasing new open source AI research models and data as part of fastMRI, a collaborative research project by Facebook and NYU School of Medicine that was announced back in August this year. fastMRI uses artificial intelligence (AI) to make MRI scans up to 10 times faster. By releasing these new AI models and the MRI data, the fastMRI team aims to help improve diagnostic imaging technology, which in turn can increase patients' access to this powerful and potentially life-saving technology. The latest release includes new AI models and the first large-scale MRI dataset for reconstructing MRI scans. Let's have a look at these key releases.

First large-scale dataset for MRI scans
The fastMRI team has released baseline models for ML-based image reconstruction from k-space data subsampled at 4x and 8x scan accelerations. A common challenge faced by AI researchers in the field of MR reconstruction is consistency, as they use a variety of datasets for training AI systems. This is why the latest and largest open source MRI dataset will help tackle the problem of MR image reconstruction by providing an industry-wide, benchmark-ready dataset. It comprises approximately 1.5 million MR images drawn from 10,000 scans, as well as raw measurement data from nearly 1,600 scans. NYU fully anonymized the dataset, reviewing the metadata and image content manually, and it includes the k-space data collected during scanning. NYU School of Medicine has decided to offer researchers unprecedented access to this data so that they can easily train their models, validate their performance, and get a general idea of how image reconstruction techniques could be used in real-world conditions. The k-space data in this dataset is derived from MR devices comprising multiple magnetic coils; it also comprises data simulating the measurements from single-coil machines.

AI models, baselines, and a results leaderboard
The fastMRI team mainly focused on two tasks: single-coil reconstruction and multi-coil reconstruction. In both the single-coil and multi-coil deep learning baselines, the AI models are based on u-nets, a convolutional network architecture developed specifically for image segmentation in biomedical applications. U-nets also have a proven track record in image-to-image prediction. Moreover, a baseline for classical, non-AI-based reconstruction methods has been developed, alongside a separate baseline comprising deep learning models. Apart from that, FAIR has created a leaderboard for the consistent measurement of MR reconstruction progress and results, seeded with the baseline models. Researchers can add improved results as they begin generating and submitting them to conferences and journals with the help of the fastMRI dataset. The leaderboard will also help researchers evaluate their results against consistent metrics and figure out how different approaches compare.

"Our priority for the next phase of this collaboration is to use the experimental foundations we’ve established — the data and baselines — to further explore AI-based image reconstruction techniques. Additionally, any progress that we make at FAIR and NYU School of Medicine will be part of a larger collaboration that spans multiple research communities," says the FAIR team.

For more information, check out the official blog post.

Facebook AI researchers investigate how AI agents can develop their own conceptual shared language
Facebook plans to change its algorithm to demote "borderline content" that promotes misinformation and hate speech on the platform
Babysitters now must pass Predictim's AI assessment to be "perfect" to get the job
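To make the "k-space data subsampled at 4x" idea concrete, here is a rough sketch of an acceleration mask: keep a small fully-sampled band of low-frequency columns in the centre of k-space, then keep random columns elsewhere so that roughly a quarter of all columns survive. The exact parameters (8% centre fraction, column-wise random sampling) are illustrative assumptions, not the official fastMRI sampler.

```python
import random

def subsampling_mask(num_cols, acceleration=4, center_fraction=0.08, seed=0):
    """Return a boolean mask over k-space columns: True = column is acquired."""
    rng = random.Random(seed)
    num_low = int(round(num_cols * center_fraction))
    pad = (num_cols - num_low) // 2
    mask = [False] * num_cols
    for i in range(pad, pad + num_low):
        mask[i] = True  # fully sampled low-frequency centre band
    # Choose a probability for the remaining columns so that the total
    # kept is roughly num_cols / acceleration.
    prob = (num_cols / acceleration - num_low) / (num_cols - num_low)
    for i in range(num_cols):
        if not mask[i] and rng.random() < prob:
            mask[i] = True
    return mask

mask = subsampling_mask(320)
print(sum(mask))  # roughly 320 / 4 = 80 columns kept
```

A reconstruction model such as the u-net baselines then has to fill in the image content lost to the masked-out columns, which is what makes the 4x and 8x accelerations a learning problem rather than a simple inverse transform.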


Researchers develop new brain-computer interface that lets paralyzed patients use tablets

Sugandha Lahoti
27 Nov 2018
3 min read
Researchers have developed a new iBCI (intracortical brain-computer interface) that allows people with paralysis to control an unmodified, commercially available tablet. The research was motivated by the fact that most general-purpose computers have been difficult to use for people with some form of paralysis. In the study, three research participants with tetraplegia, who had multielectrode arrays implanted in the motor cortex as part of the BrainGate2 clinical trial, were invited. Using the iBCI, their neural activity was decoded in real time into point-and-click commands for a wireless Bluetooth mouse. This allowed participants to use common and recreational applications (web browsing, email, chatting, playing music on a piano application, sending text messages, etc.). The iBCI also allowed two participants to "chat" with each other in real time.

The architecture of the setup
Participants used seven common applications on the tablet: an email client, a chat program, a web browser, a weather program, a news aggregator, a video sharing program, and a streaming music program.
The system consisted of a NeuroPort recording system to record neural signals from the participant's motor cortex.
These signals were routed into a real-time computer running the xPC/Simulink Real-Time operating system for processing and decoding. The output of the decoding algorithm was passed to a Bluetooth interface configured to work as a wireless computer mouse using the Bluetooth Human Interface Device (HID) profile.
This virtual Bluetooth mouse was paired with a commercial Android tablet with no modifications to the operating system.
Participants performed real-time "point-and-click" control over a cursor that appeared on the tablet once it was paired through the Bluetooth interface.
The cursor movements and clicks were decoded from neural activity using Kalman filters. 2D cursor velocities were estimated using a Recalibrated Feedback Intention Trained Kalman Filter (ReFIT-KF) and a cumulative closed-loop decoder. Click intentions were classified using a hidden Markov model and a linear discriminant analysis classifier.

Future work
The researchers want to expand the available controls with additional decoded signals, leverage more optimized keyboard layouts, explore accessibility features, and control other devices and operating systems. They also want to extend the output of the iBCI to support additional dimensions that may be used to command advanced cursor features.

For detailed analysis, go through the research paper.

What if buildings of the future could compute? European researchers make a proposal.
Babysitters now must pass Predictim's AI assessment to be "perfect" to get the job
Mozilla introduces LPCNet: A DSP and deep learning-powered speech synthesizer for lower-power devices
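The Kalman filtering used for cursor decoding can be sketched in its simplest possible form: a one-dimensional predict/update loop that turns a stream of noisy velocity observations into a smoothed estimate. Real iBCI decoders such as ReFIT-KF are multidimensional, recalibrated online, and driven by neural features rather than the scalar observations assumed here.

```python
def kalman_step(x, P, z, A=1.0, H=1.0, Q=0.01, R=0.5):
    """One predict/update step of a scalar Kalman filter.
    x: current velocity estimate, P: its variance,
    z: new noisy observation (stand-in for a decoded neural feature)."""
    # Predict: propagate the state and grow its uncertainty.
    x_pred = A * x
    P_pred = A * P * A + Q
    # Update: blend prediction and observation via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:  # noisy velocity observations
    x, P = kalman_step(x, P, z)
print(round(x, 2))  # 0.94 -- smoothed estimate approaching the true ~1.0
```

The gain K shrinks as the filter becomes more confident (P falls), which is what damps the jitter in the decoded cursor trajectory.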

Rust Beta 2018 is here

Prasad Ramesh
27 Nov 2018
2 min read
An announcement post yesterday said that Rust 2018 is now in the final phase before release: a new beta has just been released with updates, and after bug fixes, the final release will take place on December 6. In comparison to the Rust 2018 Edition Preview 2, the new Rust 1.31.0 beta includes all of the features stabilized in v1.31.0 and many bug fixes. Those new features are as follows.

Changes in the Rust beta
The new lifetime elision rules now allow eliding lifetimes in functions and impl headers. Lifetimes still need to be defined in structs.
const functions can now be defined and used. These const functions are currently a strict minimal subset of the const fn RFC.
Tool lints can now be used, which allow scoping lints from external tools by using attributes.
With this release, the #[no_mangle] and #[export_name] attributes can be located anywhere in a crate. Previously they could only be located in exported functions.
Parentheses can now be used in pattern matches.
The compiler changes include updating musl to 1.1.20, along with some library changes and API stabilizations.
Cargo will now download crates in parallel using the HTTP/2 protocol, and packages in Cargo.toml can also be renamed.
You can learn more about these changes on GitHub.

Changes in tooling
Rust 2018 also includes a number of improvements in the area of tooling. Rustfmt is now at version 1.0. RLS and Clippy will no longer be installed via "preview" components after a rustup update. The developers have listed two focus areas for finding bugs, namely the module system implementation and the RLS.

Work for the next release
In Rust Preview 2, two variants of the module system were evaluated: "anchored paths" vs "uniform paths". This evaluation continues in this beta release, which means the compiler accepts only code that both variants would accept. You can read the announcement post for more details.

Rust 2018 RC1 now released with raw identifiers, better path clarity, and other changes
GitHub Octoverse: The top programming languages of 2018
Red Hat announces full support for Clang/LLVM, Go, and Rust


Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power

Bhagyashree R
27 Nov 2018
2 min read
Amazon re:Invent 2018 commenced yesterday in Las Vegas. The five-day event comprises various sessions, chalk talks, and hackathons covering AWS core topics, and Amazon is launching several new products and making some crucial announcements. Adding to this list, Amazon announced yesterday that AWS Snowball Edge will now come in two options: Snowball Edge Storage Optimized and Snowball Edge Compute Optimized. Snowball Edge Compute Optimized, in addition to more computing power, comes with optional GPU support.

What is AWS Snowball Edge?
AWS Snowball Edge is a physical appliance that is used for data migration and edge computing. It supports specific Amazon EC2 instance types and AWS Lambda functions. With Snowball Edge, customers can develop and test in AWS; the applications can then be deployed on remote devices to collect, pre-process, and return the data. Common use cases include data migration, data transport, image collation, IoT sensor stream capture, and machine learning.

What is new in Snowball Edge?
Snowball Edge now comes in two options:
Snowball Edge Storage Optimized: This option provides 100 TB of capacity and 24 vCPUs, well suited for local storage and large-scale data transfer.
Snowball Edge Compute Optimized: This option comes in two variations, one without a GPU and one with. Both come with 42 TB of S3-compatible storage and 7.68 TB of NVMe SSD storage, and can run any combination of instances that consume up to 52 vCPUs and 208 GiB of memory.
The main highlight here is the support for an optional GPU. With Snowball Edge with GPU, you can do things like real-time full-motion video analysis and processing, machine learning inferencing, and other highly parallel compute-intensive work. In order to gain access to the GPU, you need to launch an sbe-g instance, and the "with GPU" option can be selected in the console. The full instance specifications are listed by Amazon.

You can read more about the re:Invent announcements regarding Snowball Edge on the AWS website.

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
AWS announces more flexibility in its Certification Exams, drops its exam prerequisites
Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources
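The trade-off between the two options reduces to the capacity and compute figures in the announcement. A toy chooser makes that concrete; the selection logic and the three-number model of a workload are illustrative simplifications, and real sizing decisions involve far more factors.

```python
# Spec figures from the announcement: Storage Optimized (100 TB, 24 vCPUs)
# vs Compute Optimized (42 TB, up to 52 vCPUs, optional GPU).
OPTIONS = {
    "storage-optimized": {"storage_tb": 100, "vcpus": 24, "gpu": False},
    "compute-optimized": {"storage_tb": 42, "vcpus": 52, "gpu": False},
    "compute-optimized-gpu": {"storage_tb": 42, "vcpus": 52, "gpu": True},
}

def pick_option(storage_tb, vcpus, need_gpu=False):
    """Return the first option that satisfies the workload, or None."""
    for name, spec in OPTIONS.items():
        if (spec["storage_tb"] >= storage_tb and spec["vcpus"] >= vcpus
                and (spec["gpu"] or not need_gpu)):
            return name
    return None

print(pick_option(80, 16))                 # storage-optimized
print(pick_option(10, 40, need_gpu=True))  # compute-optimized-gpu
```

A large-capacity, low-compute migration lands on the Storage Optimized device, while anything needing the sbe-g GPU instances forces the Compute Optimized with GPU variant.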
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store

Sugandha Lahoti
27 Nov 2018
3 min read
At the ongoing Amazon re:Invent 2018, Amazon announced that AWS Key Management Service (KMS) has integrated with AWS CloudHSM. Users now have the option to create their own KMS custom key store: they can generate, store, and use their KMS keys in hardware security modules (HSMs) through KMS.

The KMS custom key store satisfies compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs), while still supporting the AWS services and encryption toolkits that are integrated with KMS. Previously, AWS CloudHSM was not widely integrated with other AWS managed services. So, if someone required direct control of their HSMs but still wanted to use and store regulated data in AWS managed services, they had to choose between changing those requirements, not using a given AWS service, or building their own solution.

With a custom key store, users can configure their own CloudHSM cluster and authorize KMS to use it as a dedicated key store rather than the default KMS key store. When a KMS CMK in a custom key store is used, the cryptographic operations under that key are performed exclusively in the developer's own CloudHSM cluster.

Master keys stored in a custom key store are managed in the same way as any other master key in KMS and can be used by any AWS service that encrypts data and supports KMS customer managed CMKs. The use of a custom key store does not affect KMS charges for storing and using a CMK. However, it does come with an increased cost and a potential impact on performance and availability.

Things to consider before using a custom key store

Each custom key store requires its CloudHSM cluster to contain at least two HSMs. CloudHSM charges vary by region and come to at least $1,000 per month, per HSM, if each device is permanently provisioned. The number of HSMs determines the rate at which keys can be used.
Users should keep in mind the intended usage patterns for their keys and ensure appropriate provisioning of HSM resources. The number of HSMs and the use of availability zones (AZs) impact the availability of a cluster. Configuration errors may result in a custom key store being disconnected, or key material being deleted. Users need to manually set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks for which users should have the appropriate resources and organizational controls in place.

Read more about KMS custom key stores on Amazon.

How Amazon is reinventing Speech Recognition and Machine Translation with AI
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources
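As a rough illustration of the cost consideration above, the sketch below estimates the minimum monthly CloudHSM charge for a custom key store's cluster. It assumes only the two figures quoted in the article (at least two HSMs per custom key store, at least $1,000 per HSM per month); actual charges vary by region, and this is not an official pricing API.

```python
# Rough cost sketch for the CloudHSM cluster backing a KMS custom key store.
# Assumptions (from the article, not an official pricing source):
#   - a custom key store needs at least 2 HSMs in its cluster
#   - a permanently provisioned HSM costs at least ~$1,000/month

MIN_HSMS_PER_CUSTOM_KEY_STORE = 2
MIN_MONTHLY_COST_PER_HSM_USD = 1_000

def min_monthly_cluster_cost(num_hsms: int) -> int:
    """Return the minimum monthly charge (USD) for a cluster of num_hsms HSMs."""
    if num_hsms < MIN_HSMS_PER_CUSTOM_KEY_STORE:
        raise ValueError(
            f"a custom key store requires at least "
            f"{MIN_HSMS_PER_CUSTOM_KEY_STORE} HSMs, got {num_hsms}"
        )
    return num_hsms * MIN_MONTHLY_COST_PER_HSM_USD

if __name__ == "__main__":
    # Smallest valid cluster: 2 HSMs.
    print(min_monthly_cluster_cost(2))  # prints 2000
```

The floor of roughly $2,000/month for the smallest valid cluster is why the article stresses weighing usage patterns before opting out of the default KMS key store.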
Introducing Strato Pi: An industrial Raspberry Pi

Prasad Ramesh
26 Nov 2018
4 min read
Italian company Sfera Labs has designed Strato Pi, a Raspberry Pi based board intended for industrial applications. It can be used in areas where a higher level of reliability is required. Source: sferlabs website

Strato Pi features

The board is roughly the same size as a regular Raspberry Pi 2/3 and is engineered to work in industrial environments that demand more rugged devices.

Power supply that can handle harsh environments

The Strato Pi accepts a wide range of supply voltages and can handle substantial amounts of ripple, noise, and voltage fluctuation. The power supply circuit is heavily protected and filtered with oversized electrolytic capacitors, diodes, inductors, and a high-efficiency voltage regulator. The power converter is based on PWM converter integrated circuits which can provide up to 95% power efficiency and up to 3A continuous current output. Over-current limiting, over-voltage protection, and thermal shutdown are also built in. The board is protected against reverse polarity with resettable fuses, and surge protection up to ±500V/2ohms 1.2/50μs ensures reliability even in harsh environments.

UPS to safeguard against power failure

In database and data collection applications, a sudden power interruption may cause data loss. To tackle this, Strato Pi has an integrated UPS that gives the system enough time to save data and shut down when there is a power failure. The battery power supply stage of the board supplies power to the Strato Pi circuits without any interruption even when the main power supply fails. This stage also charges the battery via a high-efficiency step-up converter, generating the optimal charging voltage independent of the main power supply voltage.

Built-in real time clock

The Strato Pi has a built-in battery-backed real time clock/calendar, directly connected to the Raspberry Pi via the I2C bus interface. This keeps the correct time even when there is no internet connection.
This real time clock is based on the MCP79410 general purpose Microchip RTCC chip. A replaceable CR1025 battery acts as a backup power source when the main power is not available. With the board permanently powered on, the battery can last over 10 years.

Serial Port

Strato Pi provides RS-232 and RS-485 serial ports. Their interface circuits are insulated from the main and battery power supply voltages, which avoids failures due to ground loops. A microcontroller running a proprietary algorithm automatically manages the data direction of the RS-485 port, taking the baud rate and the number of bits into account without any special configuration. Thus, the Raspberry board can communicate through its TX/RX lines without any additional signal.

Can Bus

The Controller Area Network (CAN) bus is widely used and is based on a multi-master architecture. The board implements an easy-to-use CAN bus controller. It has both RS-485 and CAN bus ports, which can be used at the same time. CAN specification version 2.0B is supported, at speeds of up to 1 Mbps.

A hardware watchdog

A hardware watchdog is an electronic circuit that can automatically reset the processor if there is a software hang. On the Strato Pi it is implemented with the help of the on-board microcontroller and is independent of the Raspberry Pi's internal CPU watchdog circuit.

The base variant starts at roughly $88. Sfera Labs also offers a mini variant and products like a prebuilt server. For more details on Strato Pi, visit the sferlabs website.

Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25
Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Intelligent mobile projects with TensorFlow: Build your first Reinforcement Learning model on Raspberry Pi [Tutorial]
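A hardware watchdog like the one described above typically expects the software to "feed" it with a periodic heartbeat; if the heartbeat stops because the application hung, the watchdog resets the board. The sketch below illustrates that feeding pattern in Python, with the hardware access injected as a callback. The callback, pin handling, and timing are hypothetical stand-ins, not Strato Pi's actual watchdog interface; consult the board's documentation for the real signal.

```python
import time

# Illustrative watchdog-feed loop. The board's watchdog circuit expects a
# periodic heartbeat; if it stops (software hang), the board is reset.
# `toggle_heartbeat` stands in for real GPIO access -- the actual pin and
# required feed interval are defined by the Strato Pi documentation.

def feed_watchdog(toggle_heartbeat, iterations: int, interval_s: float = 0.0):
    """Do one unit of work per iteration and send a heartbeat after each."""
    for _ in range(iterations):
        # ... one unit of application work would go here ...
        toggle_heartbeat()  # prove to the watchdog we are still alive
        if interval_s:
            time.sleep(interval_s)

if __name__ == "__main__":
    beats = []
    feed_watchdog(lambda: beats.append(time.monotonic()), iterations=3)
    print(len(beats))  # prints 3
```

Keeping the heartbeat inside the main work loop (rather than a separate thread) is the usual design choice: a hang in the work loop then stops the heartbeat and triggers the reset, which is exactly the failure the watchdog exists to catch.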
Babysitters now must pass Predictim’s AI assessment to be “perfect” to get the job

Natasha Mathur
26 Nov 2018
4 min read
AI is everywhere, and now it is helping parents determine whether a potential babysitter for their toddler is the right fit for hire. Predictim is an online service that uses AI to analyze the risk level attached to a babysitter. It gives you an overall risk score for the babysitter, along with further details, by scanning their social media profiles with language-processing algorithms.

Predictim’s algorithms analyze “billions” of data points going back years in a person’s online profile, then deliver, within minutes, an evaluation of the babysitter’s predicted traits, behaviors, and areas of compatibility based on their digital history. The service uses language-processing algorithms and computer vision to assess babysitters’ Facebook, Twitter, and Instagram posts for clues about their offline life.

Predictim assesses babysitters on four personality risk categories: bullying/harassment, bad attitude, explicit content, and drug abuse. This is what makes the service appealing to parents, since determining all these details about a potential babysitter is not possible with just a standard background check. “The current background checks parents generally use don’t uncover everything that is available about a person. Interviews can’t give a complete picture. A seemingly competent and loving caregiver with a ‘clean’ background could still be abusive, aggressive, a bully, or worse. That’s where Predictim’s solution comes in”, said Sal Parsa, co-founder of Predictim.

Criticism towards Predictim

Although such services are radically transforming how companies approach hiring and reviewing workers, they also pose significant risks. As Drew Harwell, a reporter at The Washington Post, notes, Predictim depends on black-box algorithms that are not only prone to biases about how an ideal babysitter should behave, look, or share online, but whose personality scan results are also not always accurate.
The software might misjudge a person’s personality based on their social media use. One example presented by Harwell is that of a babysitter who was flagged for possible bullying behavior. The mother who had hired the babysitter said she could not figure out whether the software was basing that analysis on an old movie quote or song lyric, or whether it had actually found occurrences of bullying language. Moreover, no phrases, links, or details are provided to parents to substantiate a babysitter’s rating.

Harwell also points out that hiring and recruiting algorithms have been “shown to hide the kinds of subtle biases that could derail a person's career”. One example he gives is Amazon, which scrapped its sexist AI recruiting algorithm last month because it unfairly penalized female candidates. Kate Crawford, co-founder of the AI Now Institute, tweeted out against Predictim, calling it a “bollocks AI system”:

https://twitter.com/katecrawford/status/1066450509782020098
https://twitter.com/katecrawford/status/1066359192301256706

But the Predictim team is set on expanding its capabilities. It is preparing for nationwide expansion, as Sittercity, a popular online babysitter marketplace, is planning to launch a pilot program next year with Predictim’s automated ratings on the site’s sitter screenings and background checks. The team is also looking into mining psychometric data from babysitters’ social media profiles to dig even deeper into the details of their private lives. This has raised many privacy-related concerns on behalf of babysitters: it could indirectly force a babysitter to share personal details of her life with a parent to get a job, details she might not be comfortable sharing otherwise. However, some people think differently and are more than okay asking babysitters for their personal data.
One example given by Harwell is of a mother of two who believes that “babysitters should be willing to share their personal information to help with parents’ peace of mind. A background check is nice, but Predictim goes into depth, really dissecting a person — their social and mental status. 100 percent of the parents are going to want to use this. We all want the perfect babysitter.”

Despite parents wanting the “perfect babysitter”, the truth of the matter is that Predictim’s AI algorithms are not “perfect”: they need to become far more reliable so that they do not project unfair biases onto babysitters. Predictim also needs to make sure that its service serves not just the parents but also takes the needs of babysitters into consideration.

Google’s Pixel camera app introduces Night Sight to help click clear pictures with HDR+
Blackberry is acquiring AI & cybersecurity startup, Cylance, to expand its next-gen endpoint solutions like its autonomous cars’ software
Facebook AI researchers investigate how AI agents can develop their own conceptual shared language
SatPy 0.10.0, Python library for manipulating meteorological remote sensing data, released

Amrata Joshi
26 Nov 2018
2 min read
SatPy is a Python library for reading and manipulating meteorological remote sensing data and writing it to various image/data file formats. Last week, the team at Pytroll announced the release of SatPy 0.10.0. SatPy can make RGB composites directly from satellite instrument channel data or from higher-level processing output, and it makes data loading, manipulation, and analysis easy. https://twitter.com/PyTrollOrg/status/1066865986953986050

Features of SatPy 0.10.0

This version comes with two luminance sharpening compositors, LuminanceSharpeningCompositor and SandwichCompositor. The LuminanceSharpeningCompositor replaces the luminance of the RGB composite, while the SandwichCompositor multiplies the RGB channels with the reflectance.
SatPy 0.10.0 comes with a check_satpy function for finding missing dependencies.
This version also allows writers to create output directories if they don't exist.
SatPy 0.10.0 improves the handling of dependency loading when there are multiple matches.
This version also supports the new OLCI L2 datasets in the olci l2 reader. OLCI is used for ocean and land processing.
Since YAML is the new format for area definitions in SatPy 0.10.0, areas.def has been replaced with areas.yaml.
In SatPy 0.10.0, file handlers use filenames as strings. This version also allows readers to accept pathlib.Path instances as filenames.
With this version, it is easier to configure in-line composites.
A README document has been added to the setup.py description.

Resolved issues in SatPy 0.10.0

The issue with resampling a user-defined scene has been resolved.
The native resampler now works with DataArrays.
It is now possible to review subclasses of BaseFileHandler.
Readthedocs builds are now working.
A custom string formatter has been added in this version for lower/upper support.
The inconsistent units of geostationary radiances have been resolved.

Major Bug Fixes

A discrete data type now gets preserved through resampling.
Native resampling has been fixed.
The slstr reader has been fixed for consistency.
Masking in DayNightCompositor has been fixed.
The problem with attributes not being preserved while adding overlays or decorations has been fixed.

To know more about this news, check out the official release notes.

Introducing ReX.js v1.0.0, a companion library for RegEx written in TypeScript
Spotify releases Chartify, a new data visualization library in Python for easier chart creation
Google releases Magenta Studio beta, an open source Python machine learning library for music artists
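To illustrate the idea behind the new SandwichCompositor described above (multiplying the RGB channels by a reflectance field), here is a minimal NumPy sketch. It is a standalone illustration of the arithmetic only, not SatPy's actual API or compositor implementation.

```python
import numpy as np

# Standalone illustration of "sandwich" compositing: scale each RGB channel
# by a (typically higher-resolution) reflectance field. This mimics the idea
# behind SatPy 0.10.0's SandwichCompositor; it is not SatPy's API.

def sandwich(rgb: np.ndarray, reflectance: np.ndarray) -> np.ndarray:
    """rgb: (3, H, W) array in [0, 1]; reflectance: (H, W) array in [0, 1]."""
    if rgb.shape[1:] != reflectance.shape:
        raise ValueError("rgb and reflectance grids must match")
    # Broadcasting multiplies every channel by the same reflectance field,
    # so bright reflectance sharpens detail and zero reflectance goes black.
    return rgb * reflectance[np.newaxis, :, :]

if __name__ == "__main__":
    rgb = np.full((3, 2, 2), 0.5)                 # flat mid-grey composite
    refl = np.array([[1.0, 0.5], [0.5, 0.0]])     # per-pixel reflectance
    out = sandwich(rgb, refl)
    print(out.shape)  # prints (3, 2, 2)
```

In SatPy itself the compositor would operate on resampled dataset arrays inside a Scene; the per-pixel multiplication shown here is the core of the effect.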
Google’s global coding competitions, Code Jam, HashCode and Kick Start come together on a single website

Amrata Joshi
26 Nov 2018
3 min read
Last week, Google brought its popular coding competitions Code Jam, HashCode, and Kick Start together on a single website. The brand-new UI improves navigation and makes the site more user-friendly, and user profiles now show notifications, further improving the experience.

Code Jam

Google’s global coding competition, Code Jam, gives programmers around the world an opportunity to solve tricky algorithmic puzzles. The first round consists of three sub-rounds. The top 1,500 participants from each sub-round get a chance to compete in round 2. From these, the top 1,000 contestants move on to the third round, and the top 25 contestants from the third round compete in the finals. The winner gets the championship title and $15,000.

HashCode

HashCode is a team-based programming challenge organized by Google for students and professionals around the world. After registering for the contest, participants get access to the Judge System, an online platform where they can form a team, join a hub, practice, and compete during the rounds. Teams choose their programming language, and the HashCode team assigns them an engineering problem via a live stream on YouTube. Teams can compete either from a local hub or from any other location of their choice. The selected teams then compete in the final round at Google’s office.

Kick Start

Kick Start, also a global online coding competition, consists of a variety of algorithmic challenges designed by Google engineers. Participants can take part in one of the online rounds or in all of them, and the top participants get a chance to be interviewed at Google. The best part about Kick Start is that it is open to everyone, with no pre-qualification needed. If you are competing in a coding competition for the first time, Kick Start is a good place to begin.
What can you expect with this unified interface?

Some good competition and amazing insights from each of the rounds.
A personalized certificate of completion.
A chance to practice coding and experience new challenges.
A lot of opportunities.

To stay updated with the registration dates and details, sign up on Google’s coding competitions official page. To know more about the competitions, check out Google’s blog.

Google hints shutting down Google News over EU’s implementation of Article 11 or the “link tax”
Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company
Google Dart 2.1 released with improved performance and usability
Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources

Savia Lobo
26 Nov 2018
1 min read
Last week, Amazon CloudWatch, a monitoring and management service, introduced Automatic Dashboards for monitoring all AWS resources. The Automatic Dashboards are available in all AWS public regions at no additional charge.

Through CloudWatch Automatic Dashboards, users can now get aggregated views of the health and performance of all their AWS resources. This allows users to quickly monitor and explore account-based and resource-based views of metrics and alarms, and to easily drill down to understand the root cause of performance issues. Once the cause is identified, users can act quickly by going directly to the AWS resource.

Features of these Automatic Dashboards:

They are pre-built following recommended best practices for AWS services.
They are resource aware.
They are dynamically updated to reflect the latest state of important performance metrics.
Users can filter and troubleshoot to a specific view without writing additional code.

To know more about Automatic Dashboards, visit the official website.

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS
AWS announces more flexibility in its Certification Exams, drops its exam prerequisites
Apple app store antitrust case to be heard by U.S. Supreme Court today

Sugandha Lahoti
26 Nov 2018
2 min read
An antitrust case accusing Apple of breaking antitrust laws by monopolizing the market for iPhone apps will be heard by the U.S. Supreme Court today. According to a report by Reuters, Apple collects payments from iPhone users on its App Store, keeping a 30 percent commission on each purchase. The plaintiffs argue this leads to inflated prices compared to apps being available from other sources, so customers end up paying more than they should.

The antitrust lawsuit dates to 2011 and alleges that Apple has created a monopoly by allowing apps to be sold only through its App Store and by charging excessive commissions. Apple is appealing a lower-court decision, arguing that its practices do not amount to monopolization: it acts only as an agent for developers who sell to consumers via the App Store, not as a distributor. If the Supreme Court rules in favor of the consumers, it would “threaten the burgeoning field of e-commerce”, says Apple.

In its defense, Apple has cited a 1977 Supreme Court ruling. Reuters reports: Apple has seized upon a 1977 Supreme Court ruling that limited damages for anti-competitive conduct to those directly overcharged instead of indirect victims who paid an overcharge passed on by others. Part of the concern, the court said in that case, was to free judges from having to make complex calculations of damages.

Apple is backed by the attorneys general of 30 states including California, Texas, Florida, and New York. The U.S. Chamber of Commerce business group, which is also backing Apple, says “The increased risk and cost of litigation will chill innovation, discourage commerce, and hurt developers, retailers, and consumers alike.”

The nine justices of the U.S. Supreme Court will hear arguments in Apple’s bid to escape damages today. The justices will ultimately decide a broader question: can consumers even sue for damages in an antitrust case like this one? writes Reuters.
Apple has quietly acquired privacy-minded AI startup Silk Labs, reports The Information.
The White House is reportedly launching an antitrust investigation against social media companies
Tim Cook criticizes Google for their user privacy scandals but admits to taking billions from Google Search