
Tech News - Cloud Computing

175 Articles
Day 1 at the Amazon re: Invent conference - AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!

Melisha Dsouza
27 Nov 2018
6 min read
Looks like Christmas has come early this year for AWS developers! Following Microsoft's Surface devices and Amazon's wide range of Alexa products, Amazon has once again made a series of big releases at the Amazon re:Invent 2018 conference. These announcements include AWS RoboMaker to help developers test and deploy robotics applications, AWS Transfer for SFTP (a fully managed SFTP service for Amazon S3), EC2 A1 instances powered by Arm-based AWS Graviton processors, Amazon EC2 C5n instances featuring 100 Gbps of network bandwidth, and much more! Let's take a look at what developers can expect from these releases.

#1 AWS RoboMaker helps developers develop, test, and deploy robotics applications at scale

AWS RoboMaker allows developers to develop, simulate, test, and deploy intelligent robotics applications at scale. Code can be developed inside a cloud-based development environment and tested in a Gazebo simulation. Finally, the finished code can be deployed to a fleet of one or more robots. RoboMaker uses an open-source robotics software framework, the Robot Operating System (ROS), with connectivity to cloud services. The service suite includes AWS machine learning, monitoring, and analytics services that enable a robot to stream data, navigate, communicate, comprehend, and learn. RoboMaker can work with robots of many different shapes and sizes running in many different physical environments. After a developer designs and codes an algorithm for a robot, they can also monitor how the algorithm performs in different conditions or environments. You can check out an interesting simulation of a robot using RoboMaker on the AWS site. To learn more about ROS, read The Open Source Robot Operating System (ROS) and AWS RoboMaker.
#2 AWS Transfer for SFTP – Fully Managed SFTP Service for Amazon S3

AWS Transfer for SFTP is a fully managed service that enables the direct transfer of files to and from Amazon S3 using the Secure File Transfer Protocol (SFTP). Users just have to create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. AWS allows users to migrate their file transfer workflows to AWS Transfer for SFTP by integrating with existing authentication systems and providing DNS routing with Amazon Route 53. Combined with other AWS services, a customer's data in S3 can be used for processing, analytics, machine learning, and archiving. Along with control over user identity, permissions, and keys, users have full access to the underlying S3 buckets and can make use of many different S3 features, including lifecycle policies, multiple storage classes, several options for server-side encryption, versioning, and so on. On the outbound side, users can generate reports, documents, manifests, custom software builds, and so forth using other AWS services, and then store them in S3 for controlled distribution to their customers and partners.

#3 EC2 Instances (A1) Powered by Arm-Based AWS Graviton Processors

Amazon has launched EC2 instances powered by Arm-based AWS Graviton processors, built around Arm cores. The A1 instances are optimized for performance and cost and are a great fit for scale-out workloads where the load has to be shared across a group of smaller instances. This includes containerized microservices, web servers, development environments, and caching fleets. AWS Graviton processors are custom-designed by AWS and deliver targeted power, performance, and cost optimizations. A1 instances are built on the AWS Nitro System, which maximizes resource efficiency for customers while still supporting familiar AWS and Amazon EC2 instance capabilities such as EBS, networking, and AMIs.
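Based on the description of the SFTP service in #2 above, the setup reduces to a few CLI calls. The following is a minimal sketch, not an official walkthrough; the server ID, IAM role ARN, bucket path, key, and region are all placeholders:

```shell
# 1. Create a fully managed SFTP server (service-managed authentication):
aws transfer create-server --identity-provider-type SERVICE_MANAGED

# 2. Add a user mapped to an S3 home directory (placeholder identifiers):
aws transfer create-user \
    --server-id s-1234567890abcdef0 \
    --user-name alice \
    --role arn:aws:iam::123456789012:role/sftp-s3-access \
    --home-directory /my-bucket/home/alice \
    --ssh-public-key-body "ssh-rsa AAAA... alice@example.com"

# 3. Transfer files with any standard SFTP client:
sftp alice@s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com
```

The IAM role above is what scopes the user's access to the underlying bucket, which is how the service keeps the S3 features (lifecycle policies, encryption, versioning) available underneath the SFTP front end.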
#4 Introducing Amazon EC2 C5n Instances featuring 100 Gbps of Network Bandwidth

AWS announced the availability of C5n instances that can utilize up to 100 Gbps of network bandwidth, providing significantly higher network performance across all instance sizes: from 25 Gbps of peak bandwidth on the smaller instance sizes to 100 Gbps on the largest. They are powered by 3.0 GHz Intel® Xeon® Scalable (Skylake) processors and support the Intel Advanced Vector Extensions 512 (AVX-512) instruction set. These instances also feature a 33% larger memory footprint compared to C5 instances and are ideal for applications that can take advantage of improved network throughput and packet rate performance. Based on the next-generation AWS Nitro System, C5n instances make 100 Gbps networking available to network-bound workloads. Workloads on C5n instances take advantage of the security, scalability, and reliability of Amazon's Virtual Private Cloud (VPC). The improved network performance will accelerate data transfer to and from S3, reducing the data ingestion wait time for applications and speeding up delivery of results.

#5 Introducing AWS Global Accelerator

AWS Global Accelerator is a network-layer service that enables organizations to seamlessly route traffic to multiple regions while improving availability and performance for their end users. It supports both the TCP and UDP protocols, and performs health checks of a user's target endpoints while routing traffic away from unhealthy applications. AWS Global Accelerator uses AWS' global network to direct internet traffic from an organization's users to their applications running in AWS Regions, based on a user's geographic location, application health, and configurable routing policies. You can head over to the AWS blog to get an in-depth view of how this service works.
#6 Amazon's 'Machine Learning University'

In addition to these announcements at re:Invent, Amazon also released a blog post introducing its 'Machine Learning University', announcing that the same machine learning courses used to train engineers at Amazon are now available to all developers through AWS. These courses, available as part of a new AWS Training and Certification Machine Learning offering, will help organizations accelerate the growth of machine learning skills amongst their employees. With more than 30 self-service, self-paced digital courses and over 45 hours of courses, videos, and labs, developers can rest assured that ML fundamentals, real-world examples, and hands-on labs will help them explore the domain. What's more? The digital courses are available at no charge, and developers only have to pay for the services used in labs and exams during their training.

This announcement came right after Amazon Echo Auto was launched at Amazon's hardware event. In what Amazon describes as bringing 'Alexa to vehicles', the Amazon Echo Auto is a small dongle that plugs into the car's infotainment system, giving drivers the smart assistant and voice control for hands-free interactions. Users can ask for things like traffic reports, add products to shopping lists, and play music through Amazon's entertainment system.

Head over to What's new with AWS to stay updated on upcoming AWS announcements.

Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS

Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources

Savia Lobo
26 Nov 2018
1 min read
Last week, Amazon CloudWatch, a monitoring and management service, introduced Automatic Dashboards for monitoring all AWS resources. These Automatic Dashboards are available in AWS public regions at no additional charge. Through CloudWatch Automatic Dashboards, users can now get aggregated views of the health and performance of all their AWS resources. This allows users to quickly monitor, explore account- and resource-based views of metrics and alarms, and easily drill down to understand the root cause of performance issues. Once identified, users can act quickly by going directly to the AWS resource.

Features of these Automatic Dashboards:
They are pre-built following AWS-recommended best practices
They are resource aware
They are dynamically updated to reflect the latest state of important performance metrics
Users can filter and troubleshoot down to a specific view without writing additional code

To know more about Automatic Dashboards in detail, visit the official website.

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS
AWS announces more flexibility in its Certification Exams, drops its exam prerequisites

Baidu open sources ‘OpenEdge’ to create a ‘lightweight, secure, reliable and scalable edge computing community’

Melisha Dsouza
16 Jan 2019
2 min read
On 9th January, at CES 2019, Chinese technology giant Baidu Inc. announced the open sourcing of its edge computing platform, 'OpenEdge', which developers can use to extend cloud computing to their edge devices.

"Edge computing is a critical component of Baidu's ABC (AI, Big Data and Cloud Computing) strategy. By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage and ultimately brings real-time and immersive experiences to end users. And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications," said Watson Yin, Baidu VP and GM of Baidu Cloud.

Baidu said that systems built using OpenEdge will automatically be enabled with features like artificial intelligence, cloud synchronization, data collection, function compute, and message distribution. OpenEdge is a component of the Baidu Intelligent Edge (BIE) platform. BIE offers tools to manage edge nodes, resources such as certifications, passwords, and program code, and other functions. BIE is designed to run on the Baidu cloud and supports common AI frameworks such as the Baidu-developed PaddlePaddle and TensorFlow. Developers can, therefore, use Baidu's cloud to train AI models and then deploy them to systems built using OpenEdge. According to TechRepublic, OpenEdge also gives developers the ability to exchange data with the Baidu ABC Intelligent Cloud, perform filtering calculations on sensitive data, and provide real-time feedback control when a network connection is unstable. A company spokesperson told TechCrunch that the open-source platform will include features like data collection, message distribution, and AI inference, as well as tools for syncing with the cloud. You can head over to GitHub to know more about this release.
Unity and Baidu collaborate for simulating the development of autonomous vehicles
Baidu releases EZDL – a platform that lets you build AI and machine learning models without any coding knowledge
Baidu Apollo autonomous driving vehicles get a machine learning based auto-calibration system

G Suite administrators' passwords were unhashed for 14 years, notifies Google

Vincy Davis
22 May 2019
3 min read
Today, Google notified its G Suite administrators that some of their passwords had been stored in an encrypted internal system unhashed, i.e., in plaintext, since 2005. Google also states that the error has been fixed and that the issue had no effect on free consumer Google accounts.

In 2005, Google had provided G Suite domain administrators with tools to set and recover passwords. This tool enabled administrators to upload or manually set user passwords for their company's users. It was intended to help onboard new users with their account information on their first day of work, and to assist with account recovery. However, it led to the admin console storing a copy of the unhashed password. Google has made it clear that these unhashed passwords were stored in a secure, encrypted infrastructure.

Google is now working with enterprise administrators to ensure that users reset their passwords. It is also conducting a thorough investigation and has assured users that no evidence of improper access or misuse of the affected passwords has been identified so far. Google has around 5 million users on G Suite. Out of an abundance of caution, the Google team will also reset the accounts of those who have not done it themselves.

Additionally, Google has admitted to another mishap. In January 2019, while troubleshooting new G Suite customer sign-up flows, Google discovered that a subset of unhashed passwords had been accidentally stored. Google claims these unhashed passwords were stored for only 14 days, and in a secure, encrypted infrastructure. This issue has also been fixed, and no evidence of improper access or misuse of the affected passwords has been found.

In the blog post, Suzanne Frey, VP of Engineering and Cloud Trust, has given a detailed account of how Google stores passwords for consumers and G Suite enterprise customers. Google is the latest company to have admitted storing sensitive data in plaintext.
Two months ago, Facebook admitted to having stored the passwords of hundreds of millions of its users in plain text, including the passwords of Facebook Lite, Facebook, and Instagram users.

Read More: Facebook accepts exposing millions of user passwords in a plain text to its employees after security researcher publishes findings

Last year, Twitter and GitHub also admitted to similar security lapses.
https://twitter.com/TwitterSupport/status/992132808192634881
https://twitter.com/BleepinComputer/status/991443066992103426

Users are shocked that it took Google 14 long years to identify this error. Others are concerned that if even a giant company like Google cannot secure its passwords in 2019, what can be expected from other companies?
https://twitter.com/HackingDave/status/1131067167728984064

A user on Hacker News comments, "Google operates what is considered, by an overwhelming majority of expert opinion, one of the 3 best security teams in the industry, likely exceeding in so many ways the elite of some major world governments. And they can't reliably promise, at least not in 2019, never to accidentally durably log passwords. If they can't, who else can? What are we to do with this new data point? The issue here is meaningful, and it's useful to have a reminder that accidentally retaining plaintext passwords is a hazard of building customer identity features. But I think it's at least equally useful to get the level set on what engineering at scale can reasonably promise today."

To know more about this news in detail, head over to Google's official blog.

Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset
As US-China tech cold war escalates, Google revokes Huawei's Android support, allows only those covered under open source licensing
Google AI engineers introduce Translatotron, an end-to-end speech-to-speech translation model
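To make the distinction at the heart of this story concrete: "hashed" storage means only a salted, one-way derivation of the password is kept, so even a leaked database does not reveal the plaintext. Google's internal scheme is not detailed here; the following is a generic illustration using only Python's standard library:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 with a per-user random salt; only this derived
    # value should ever be stored -- never the plaintext itself.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(hash_password(password, salt), stored)

salt = os.urandom(16)
stored = hash_password("hunter2", salt)
assert verify_password("hunter2", salt, stored)
assert not verify_password("wrong-guess", salt, stored)
```

The admin-console bug described above bypassed exactly this step: a recoverable copy of the password was retained alongside (or instead of) the derived value.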

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA

Melisha Dsouza
12 Nov 2018
3 min read
On the 9th of November, at 4:30 am US/Pacific time, the Google Kubernetes Engine faced a service disruption: users could not reliably launch a node pool through the Cloud Console UI. The team responded to the issue saying that they would get back to users with more information by Friday, 9th November, 04:45 am US/Pacific time. However, the issue was not resolved by the given time. Another status update was posted by the team assuring users that mitigation work was underway by the engineering team, and that a further update with current details would follow by 06:00 pm US/Pacific. In the meantime, affected customers were advised to use the gcloud command-line tool to create new node pools.

An update declaring the issue finally resolved was posted on Sunday, the 11th of November, stating that services had been restored on Friday at 2:30 pm US/Pacific time. However, no proper explanation has been provided regarding what led to the service disruption. The team did mention that an internal investigation of the issue will be conducted and appropriate improvements made to their systems to help prevent or minimize any future recurrence.

According to a user's summary on Hacker News, "Some users here are reporting that other GCP services not mentioned by Google's blog are experiencing problems. Some users here are reporting that they have received no response from GCP support, even over a time span of 40+ hours since the support request was submitted." According to another user, "When everything works, GCP is the best. Stable, fast, simple, reliable. When things stop working, GCP is the worst. They require way too much work before escalating issues or attempting to find a solution". We can't help but agree, looking at the timeline of the service downtime. Users have also expressed disappointment over how the outage was managed.
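The gcloud workaround mentioned above might look like the following sketch; the pool name, cluster name, zone, and node count are placeholders, not values Google published:

```shell
# Create a node pool from the CLI instead of the (affected) Cloud Console UI.
# All names and the zone below are illustrative placeholders.
gcloud container node-pools create my-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --num-nodes 3
```

Because the disruption was reported against the Console UI specifically, the underlying API path exercised by gcloud remained a viable route for affected customers.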
Source: Hacker News

With users demanding a root cause analysis of the situation, it is only fitting that Google provides one so users can trust the company better. You can check out Google Cloud's blog post detailing the timeline of the downtime.

Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
Build Hadoop clusters using Google Cloud Platform [Tutorial]

Oracle introduces Oracle Cloud Native Framework at KubeCon+CloudNativeCon 2018

Amrata Joshi
12 Dec 2018
3 min read
Yesterday, at the ongoing KubeCon + CloudNativeCon North America 2018, the Oracle team introduced the Oracle Cloud Native Framework. This framework provides developers with a cloud native solution for public cloud, on-premises, and hybrid cloud deployments. The Oracle Cloud Native Framework supports both modern cloud native applications and traditional applications like WebLogic, Java, and databases. It comprises the recently announced Oracle Linux Cloud Native Environment and Oracle Cloud Infrastructure native services. The Oracle Cloud Native Framework supports both dev and ops, so it can be used by startups and enterprises alike.

What's new in the Oracle Cloud Native Framework?

Application definition and development

Oracle Functions: A serverless cloud service based on the open source Fn Project that can run on-premises, in a data center, or on any cloud. With Oracle Functions, developers can seamlessly deploy and execute function-based applications without the hassle of managing compute infrastructure. It is Docker container-based and follows a pay-per-use model.

Streaming: A highly scalable, multi-tenant streaming platform that makes it easy to collect and manage streaming data. It also enables applications such as security, supply chain, and IoT, where large amounts of data are collected from various sources and processed in real time.

Provisioning

Resource Manager: A managed service that provisions Oracle Cloud Infrastructure resources and services. It reduces configuration errors while increasing productivity by managing infrastructure as code.

Observability and analysis

Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. It uses predefined metrics and dashboards, or a service API, to give a holistic view of the performance, health, and capacity of the system.
The monitoring service uses alarms to track metrics and take action when they vary from or exceed defined thresholds.

Notification Service: A scalable service that broadcasts messages to distributed components like PagerDuty and email. The notification service helps users deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers.

Events: It can store information to object storage and trigger functions to take action. It also enables users to react to changes in the state of Oracle Cloud Infrastructure resources.

The Oracle Cloud Native Framework provides cloud-native capabilities and offerings to customers using the open standards established by the CNCF. Don Johnson, executive vice president, product development, Oracle Cloud Infrastructure, said, "With the growing popularity of the CNCF as a unifying and organizing force in the cloud native ecosystem and organizations increasingly embracing multi cloud and hybrid cloud models, developers should have the flexibility to build and deploy their applications anywhere they choose without the threat of cloud vendor lock-in. Oracle is making this a reality."

To know more about this news, check out the press release.

Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
Red Hat acquires Israeli multi-cloud storage software company, NooBaa
Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
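To make the function-based model behind Oracle Functions concrete, here is a plain-Python sketch of the handler shape such a function typically takes: the platform hands the function a context and the request body, and the function returns a serializable response. The Fn platform wiring (the Fn Project's Python FDK entry point) is deliberately omitted so the sketch stands alone, and all names are illustrative:

```python
import io
import json

def handler(ctx, data: io.BytesIO):
    # Parse the request body (a byte stream, as the platform delivers it);
    # fall back to defaults when the body is empty or not valid JSON.
    try:
        payload = json.loads(data.getvalue() or b"{}")
    except json.JSONDecodeError:
        payload = {}
    name = payload.get("name", "world")
    # The return value would be serialized into the HTTP response.
    return {"message": f"Hello, {name}!"}

# Simulated invocation, as the platform would perform once per request:
result = handler(None, io.BytesIO(b'{"name": "re:Invent"}'))
print(result)  # {'message': 'Hello, re:Invent!'}
```

The pay-per-use economics follow directly from this shape: no process runs between invocations, so billing can track invocations rather than provisioned servers.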
Platform9 open sources Klusterkit to simplify the deployment and operations of Kubernetes clusters

Bhagyashree R
16 Apr 2019
3 min read
Today, Platform9 open sourced Klusterkit under the Apache 2.0 license. It is a set of three open source tools that can be used separately or in tandem to simplify the creation and management of highly-available, multi-master, production-grade Kubernetes clusters in on-premises and air-gapped environments.

Tools included in Klusterkit

'etcdadm': Inspired by the 'kubeadm' command, 'etcdadm' is a command-line interface (CLI) for operating an etcd cluster. It makes it easier to create a new cluster, add a member to an existing cluster, or remove a member from one. It has been adopted by the Kubernetes Cluster Lifecycle SIG, a group that focuses on the deployment and upgrades of clusters.

'nodeadm': A CLI node administration tool that complements kubeadm by deploying all the dependencies kubeadm requires. You can easily deploy a Kubernetes control plane or nodes on any machine running Linux with the help of this tool.

'cctl': A cluster lifecycle management tool based on the Kubernetes community's Cluster API spec. It uses the other two tools in Klusterkit to easily deploy and maintain highly-available Kubernetes clusters in on-premises, even air-gapped, environments.

Features of Klusterkit:
Multi-master (K8s HA) support
Deployment and management of secure etcd clusters
Rolling upgrade and rollback capability
Operation in air-gapped environments
Backup and recovery of etcd clusters from quorum loss
Control plane protection from low-memory and low-CPU conditions

Klusterkit solution architecture (Source: Platform9)

Klusterkit stores the metadata of the Kubernetes cluster you build in a single file named 'cctl-state.yaml'. You can invoke the cctl CLI to orchestrate the lifecycle of a Kubernetes cluster from any machine that contains this state file. For performing CRUD operations on clusters, cctl implements and calls into the cluster-api interface as a library.
It uses ssh-provider, the machine controller for the cluster-api reference implementation. ssh-provider, in turn, calls etcdadm and nodeadm to perform cluster operations. In an email sent to us, Arun Sriraman, Kubernetes Technical Lead Manager at Platform9, explained the importance of Klusterkit: "Klusterkit presents a powerful, yet easy-to-use Kubernetes toolset that complements community efforts like Cluster API and kubeadm to allow enterprises a path to modernize applications to use Kubernetes, and run them anywhere -- even in on-premise, air-gapped environments." To know more in detail, check out the documentation on GitHub.

Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more
Introducing 'Quarkus', a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot
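Assuming the kubeadm-style conventions the etcdadm tool is described as following, an etcd lifecycle session on the cluster machines might look like this sketch (the member endpoint address is a placeholder):

```shell
# Bootstrap a new single-member etcd cluster on this machine:
etcdadm init

# On another machine, join it to the existing cluster by pointing at a
# current member's client endpoint (placeholder address):
etcdadm join https://10.0.0.5:2379

# Later, remove this machine from the cluster again:
etcdadm reset
```

This is the layer cctl drives indirectly (via ssh-provider) when it reconciles the cluster described in 'cctl-state.yaml' against the actual machines.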

Bandwidth Alliance: Cloudflare collaborates with Microsoft, IBM and others for saving bandwidth

Prasad Ramesh
27 Sep 2018
2 min read
Cloudflare, a content delivery network service provider, yesterday formed a new group called the Bandwidth Alliance to reduce the bandwidth costs of many cloud users. Cloudflare will provide heavy discounts or free service on bandwidth charges to organizations that are customers of both Cloudflare and the cloud providers in this alliance.

Current bandwidth charges

Hosting on most cloud providers includes data transfer charges, known as bandwidth or egress charges. These cover the cost of delivering traffic from the cloud to the consumer. However, when using a CDN like Cloudflare, the cost of data transfer is charged on top of the content delivery cost. This extra charge makes sense when the data has to cross thousands of miles and infrastructure has to be maintained across that distance; all of this involves a cost, which gets added to the customer's final bill. The Bandwidth Alliance aims to eliminate these additional charges and provide more affordable cloud services.

What is the Bandwidth Alliance?

Traffic delivered to users through Cloudflare passes across a Private Network Interconnect (PNI). The PNI is usually formed within the same facility, with a fiber-optic cable between routers for the two networks. With no transit provider and no middleman maintaining infrastructure, there is no additional cost for Cloudflare or the cloud provider. Cloud service providers use PNIs to deeply interconnect with third-party networks and Cloudflare. Cloudflare carries traffic automatically from the user's location to the Cloudflare data center nearest the cloud provider, and then over the PNIs. Cloudflare's heavily peered network allows traffic to be carried over these free interconnected links. Thus, Cloudflare came up with the Bandwidth Alliance to provide mutual customers with lower costs, teaming up with cloud providers to see if their huge interconnects could be used to benefit end customers.
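As a back-of-the-envelope illustration of what waiving egress charges can mean, consider the arithmetic below. The rate and traffic volume are hypothetical placeholders, not any provider's actual pricing:

```python
# Hypothetical figures for illustration only -- not real pricing.
EGRESS_RATE_PER_GB = 0.09   # $/GB the cloud provider charges for egress
MONTHLY_EGRESS_GB = 50_000  # traffic served to users through the CDN

standard_cost = MONTHLY_EGRESS_GB * EGRESS_RATE_PER_GB

# Bandwidth Alliance idea: traffic handed off over a PNI has no transit
# middleman, so the egress charge can be discounted or waived entirely.
alliance_discount = 1.0  # 100% waived in the best case
alliance_cost = standard_cost * (1 - alliance_discount)

print(f"standard egress bill: ${standard_cost:,.2f}")  # $4,500.00
print(f"with alliance waiver: ${alliance_cost:,.2f}")  # $0.00
```

The point of the alliance is that the $4,500 in this toy example was never covering real transit cost once the hand-off happens over a direct interconnect in the same facility.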
Some of the current members include Automattic, Backblaze, DigitalOcean, DreamHost, IBM Cloud, Linode, Microsoft Azure, Packet, Scaleway, and Vapor. The alliance is open to more cloud providers. You can read more on the official Cloudflare blog.

Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Microsoft Ignite 2018: New Azure announcements you need to know
Google introduces Cloud HSM beta hardware security module for crypto key security

AWS IoT Greengrass extends functionality with third-party connectors, enhanced security, and more

Savia Lobo
27 Nov 2018
3 min read
At the AWS re:Invent 2018 conference, Amazon announced new features for AWS IoT Greengrass. These latest features extend the capabilities of AWS IoT Greengrass and its core configuration options, and include:
connectors to third-party applications and AWS services
hardware root of trust private key storage
isolation and permission settings

New features of AWS IoT Greengrass

AWS IoT Greengrass connectors

With the new AWS IoT Greengrass connectors, users can easily build complex workflows on AWS IoT Greengrass without having to understand device protocols, manage credentials, or interact with external APIs. These connectors allow users to connect to third-party applications, on-premises software, and AWS services without writing code.

Re-use common business logic

Users can now re-use common business logic from one AWS IoT Greengrass device to another through the ability to discover, import, configure, and deploy applications and services at the edge. They can even use AWS Secrets Manager at the edge to protect keys and credentials in the cloud and at the edge. Secrets can be attached and deployed from AWS Secrets Manager to groups via the AWS IoT Greengrass console.

Enhanced security

AWS IoT Greengrass now provides enhanced security with hardware root of trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing the private key on a hardware secure element adds hardware root-of-trust-level security to existing AWS IoT Greengrass security features, which include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. Users can also use the hardware secure element to protect secrets deployed to the AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager.
Deploy AWS IoT Greengrass to another container environment

With the new configuration option, users can deploy AWS IoT Greengrass to another container environment and directly access device resources such as Bluetooth Low Energy (BLE) devices or low-power edge devices like sensors. They can even run AWS IoT Greengrass on devices without elevated privileges, and without the AWS IoT Greengrass container, at a group or individual AWS Lambda level. Users can also change the identity associated with an individual AWS Lambda, providing more granular control over permissions. To know more about the other updated features, head over to the AWS IoT Greengrass website.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
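To make the Lambda-at-the-edge model concrete, here is a plain-Python sketch of a Greengrass-deployed function that samples a local device resource and publishes the reading. The publish call and the sensor read are stubbed so the sketch runs anywhere (real code would use the Greengrass Core SDK client and an actual device resource); the topic name and value ranges are illustrative:

```python
import json
import random

published = []  # stand-in sink for messages; real code publishes to AWS IoT

def publish(topic: str, payload: str) -> None:
    # Stub for the Greengrass SDK publish call on the local message broker.
    published.append((topic, payload))

def read_sensor() -> float:
    # Stub for a local device resource, e.g. a BLE temperature sensor.
    return round(20.0 + random.random() * 5.0, 2)

def function_handler(event, context):
    # Handler shape used by Lambda functions deployed to a Greengrass core:
    # sample locally, publish the reading, return a small status payload.
    reading = read_sensor()
    publish("sensors/temperature", json.dumps({"celsius": reading}))
    return {"published": True, "celsius": reading}

result = function_handler({}, None)
```

Running without elevated privileges or without the Greengrass container, as described above, changes how this code is isolated on the device, but not the handler shape itself.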

Melisha Dsouza
29 Jan 2019
2 min read

Dropbox purchases workflow and eSignature startup ‘HelloSign’ for $230M

Dropbox has purchased HelloSign, a San Francisco-based private company that provides lightweight document workflow and eSignature services. Dropbox paid $230 million for the deal, which is expected to close in the first quarter.

Dropbox co-founder and CEO Drew Houston said in a statement, “HelloSign has built a thriving business focused on eSignature and document workflow products that their users love. Together, we can deliver an even better experience to Dropbox users, simplify their workflows, and expand the market we serve.”

Dropbox's SVP of engineering, Quentin Clark, told TechCrunch that HelloSign's workflow capabilities, added in 2017, were key to the purchase. He called HelloSign's investment in APIs 'unique' and said its workflow products are aligned with the 'broader vision' Dropbox will pursue over the long term. This could possibly mean extending Dropbox's storage capabilities down the road.

The deal extends a partnership Dropbox established with HelloSign last year to use two of HelloSign's technologies to offer eSignature and electronic fax solutions to Dropbox users.

HelloSign CEO Joseph Walla says being part of Dropbox gives HelloSign access to the resources of a much larger public company, allowing it to reach a broader market than it could on a standalone basis. He stated, “Together with Dropbox, we can bring more seamless document workflows to even more customers and dramatically accelerate our impact.”

HelloSign COO Whitney Bouck said that the company will remain an independent entity and will continue to operate with its current management structure as part of the Dropbox family. She also added that all HelloSign employees will be offered employment at Dropbox as part of the deal.

You can head over to TechCrunch to know more about this announcement.
How Dropbox uses automated data center operations to reduce server outage and downtime
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’
Savia Lobo
29 Nov 2018
3 min read

Red Hat acquires Israeli multi-cloud storage software company, NooBaa

On Tuesday, Red Hat announced that it has acquired NooBaa, an Israel-based multi-cloud storage software company. This is Red Hat's first acquisition since it was itself acquired by IBM in October. However, this acquisition is not subject to IBM's approval, as IBM's purchase of Red Hat has not yet closed. Early this month, Red Hat CEO Jim Whitehurst said, “Until the transaction closes, it is business as usual. For example, equity practices will continue until the close of the transaction, Red Hat M&A will continue as normal, and our product roadmap remains the same."

NooBaa, founded in 2013, addresses the need for greater visibility and control over unstructured data spread across distributed environments. The company developed a data platform designed to serve as an abstraction layer over existing storage infrastructure. This abstraction not only enables data portability from one cloud to another but also allows users to manage data stored in multiple locations as a single, coherent data set that an application can interact with.

NooBaa's technologies complement and enhance Red Hat's portfolio of hybrid cloud technologies, including Red Hat OpenShift Container Platform, Red Hat OpenShift Container Storage, and Red Hat Ceph Storage. Together, these technologies are designed to provide users with a set of powerful, consistent, and cohesive capabilities for managing application, compute, storage, and data resources across public and private infrastructures.

Ranga Rangachari, VP and GM of Red Hat's storage and hyper-converged infrastructure, said, “Data portability is a key imperative for organizations building and deploying cloud-native applications across private and multiple clouds. NooBaa’s technologies will augment our portfolio and strengthen our ability to meet the needs of developers in today’s hybrid and multi-cloud world. We are thrilled to welcome a technical team of nine to the Red Hat family as we work together to further solidify Red Hat as a leading provider of open hybrid cloud technologies.”

He further added, "By abstracting the underlying cloud storage infrastructure for developers, NooBaa provides a common set of interfaces and advanced data services for cloud-native applications. Developers can also read and write to a single consistent endpoint without worrying about the underlying storage infrastructure."

To know more about this news in detail, head over to Red Hat's official announcement.

Red Hat announces full support for Clang/LLVM, Go, and Rust
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
4 reasons IBM bought Red Hat for $34 billion

Savia Lobo
13 Jun 2018
2 min read

Oracle announces Oracle Soar, a tools package to ease application migration on cloud

Oracle recently released Oracle Soar, a new package of tools and services to help customers migrate their applications to the cloud. Oracle Soar comprises a set of automated migration tools along with professional services, making it a complete migration solution. It is a semi-automated offering that fits in with Oracle's recent efforts to stand apart from other cloud providers by offering advanced automated services.

Tools available within the Oracle Soar package:

Discovery assessment tool
Process analyzer tool
Automated data and configuration migration utilities
Rapid integration tool

The automated process is powered by True Cloud Method, Oracle's proprietary approach to supporting customers throughout their cloud journey. Customers are also guided by a dedicated Oracle concierge service that ensures the migration aligns with modern industry best practices. Customers can monitor the status of their cloud transition via an intuitive mobile application, which walks them through a step-by-step implementation guide for what needs to be done each day.

With Soar, customers can save up to 30% on cost and time, with simple migrations taking as little as 20 weeks to complete. Oracle Soar is currently available to Oracle E-Business Suite, Oracle PeopleSoft, and Oracle Hyperion Planning customers who are moving to Oracle ERP Cloud, Oracle SCM Cloud, and Oracle EPM Cloud.

Read more about Oracle Soar on Oracle's official blog post.

Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
Oracle Apex 18.1 is here!
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018

Natasha Mathur
16 Oct 2018
3 min read

Twilio acquires SendGrid, a leading Email API Platform, to bring email services to its customers

Twilio Inc., a universal cloud communications platform, announced yesterday that it is acquiring SendGrid, a leading email API platform. Twilio has focused mainly on providing voice calling, text messaging, video, web, and mobile chat services; SendGrid, on the other hand, has focused purely on providing email services. With this acquisition, Twilio aims to bring tremendous value to the combined customer bases by offering services spanning voice, video, chat, and email.

“Email is a vital communications channel for companies around the world, and so it was important to us to include this capability in our platform. The two companies share the same vision, the same model, and the same values,” said Jeff Lawson, Twilio's co-founder and chief executive officer.

The two companies will also focus on making it easy for developers to build a communications platform by delivering a single, best-in-class developer platform. This would help developers better manage all of their important communication channels, including voice, messaging, video, and email.

Moreover, as per the terms of the deal, SendGrid will become a wholly-owned subsidiary of Twilio. Once the deal closes, SendGrid's common stock will be converted into Twilio stock. “At closing, each outstanding share of SendGrid common stock will be converted into the right to receive 0.485 shares of Twilio Class A common stock, which represents a per share price for SendGrid common stock of $36.92 based on the closing price of Twilio Class A common stock on October 15, 2018. The exchange ratio represents a 14% premium over the average exchange ratio for the ten calendar days ending October 15, 2018,” reads Twilio's press release. The boards of directors of both Twilio and SendGrid have approved the transaction.
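The per-share figure in the press release follows directly from the exchange ratio. As a worked check of the arithmetic, the Twilio closing price below is derived from the two stated numbers, not quoted from the release:

```python
# Stated in the press release: each SendGrid share converts into
# 0.485 Twilio Class A shares, valuing SendGrid at $36.92 per share.
exchange_ratio = 0.485
sendgrid_per_share = 36.92

# Implied Twilio Class A closing price on October 15, 2018 (derived):
implied_twilio_close = sendgrid_per_share / exchange_ratio
print(round(implied_twilio_close, 2))  # 76.12

# Sanity check: converting back at the implied price recovers $36.92.
assert round(exchange_ratio * implied_twilio_close, 2) == sendgrid_per_share
```

In other words, the $36.92 valuation assumes a Twilio close of roughly $76.12 that day; the actual deal value would float with Twilio's share price until closing.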
“Our two companies have always shared a common goal - to create powerful communications experiences for businesses by enabling developers to easily embed communications into the software they are building. Our mission is to help our customers deliver communications that drive engagement and growth, and this combination will allow us to accelerate that mission for our customers,” said Sameer Dholakia, SendGrid's CEO.

The acquisition is expected to close in the first half of 2019, subject to the satisfaction of customary closing conditions, including approval by the shareholders of both SendGrid and Twilio. “We believe this is a once-in-a-lifetime opportunity to bring together the two leading developer-focused communications platforms to create the unquestioned platform of choice for all companies looking to transform their customer engagement,” said Lawson.

For more information, check out the official Twilio press release.

Twilio WhatsApp API: A great tool to reach new businesses
Make phone calls and send SMS messages from your website using Twilio
Building a two-way interactive chatbot with Twilio: A step-by-step guide
Gebin George
22 Jun 2018
2 min read

Hortonworks partner with Google Cloud to enhance their Big Data strategy

Hortonworks, currently a leader in global data management solutions, has partnered with Google Cloud to enhance Hortonworks Data Platform (HDP) and Hortonworks DataFlow (HDF), promising to deliver next-generation data analytics for hybrid and multi-cloud deployments. The partnership will enable customers to leverage new innovations from the open source community via HDP and HDF on GCP for faster business innovation.

HDP's integration with Google Cloud provides the following features:

Flexibility for ephemeral workloads: On-demand analytical workloads can be managed within minutes, at no added cost and at unlimited elastic scale.
Analytics made faster: Take advantage of Apache Hive and Apache Spark for interactive query, machine learning, and analytics.
Automated cloud provisioning: Simplifies the deployment of HDP and HDF on GCP, making it easier to configure and secure workloads and to make optimal use of cloud resources.

In addition, HDF has gone through the following enhancements:

Hybrid data architecture deployment: Smooth and secure flow of data from any source, whether on-premises or in the cloud.
Real-time streaming analytics: Build streaming applications with ease, capturing real-time insights without having to write a single line of code.

With the combination of HDP, HDF, and Hortonworks DataPlane Service, Hortonworks can uniquely deliver consistent metadata, security, and data governance across hybrid and multi-cloud architectures.

Arun Murthy, Co-Founder and Chief Product Officer at Hortonworks, said, “Partnering with Google Cloud lets our joint customers take advantage of the scalability, flexibility and agility of the cloud when running analytic and IoT workloads at scale with HDP and HDF. Together with Google Cloud, we offer enterprises an easy path to adopt cloud and, ultimately, a modern data architecture.”
Similarly, Sudhir Hasbe, director of product management at Google Cloud, said, “Enterprises want to be able to get smarter about both their business and their customers through advanced analytics and machine learning. Our partnership with Hortonworks will give customers the ability to quickly run data analytics, machine learning and streaming analytics workloads in GCP while enabling a bridge to hybrid or cloud-native data architectures.”

Refer to the Hortonworks platform blog and the Google Cloud blog for more information on the services and enhancements.

Google cloud collaborates with Unity 3D; a connected gaming experience is here
How to Run Hadoop on Google Cloud – Part 1
AT&T combines with Google cloud to deliver cloud networking at scale

Savia Lobo
27 Nov 2018
3 min read

Introducing TigerGraph Cloud: A database as a service in the Cloud with AI and Machine Learning support

Today, TigerGraph, the world's fastest graph analytics platform for the enterprise, introduced TigerGraph Cloud, which it bills as the simplest, most robust, and most cost-effective way to run scalable graph analytics in the cloud. With TigerGraph Cloud, users can easily get their TigerGraph services up and running. They can also tap into TigerGraph's library of customizable graph algorithms to support key use cases, including AI and machine learning. It provides data scientists, business analysts, and developers with a cloud-based service for applying SQL-like queries for faster and deeper insights into data, and it enables organizations to tap into the power of graph analytics within hours.

Features of TigerGraph Cloud

Simplicity
It forgoes the need to set up, configure, or manage servers, schedule backups or monitoring, or look for security vulnerabilities.

Robustness
TigerGraph Cloud relies on the same framework that has provided point-in-time recovery, powerful configuration options, and stability for TigerGraph's own workloads over several years.

Application Starter Kits
It offers out-of-the-box starter kits for quicker application development in use cases such as Anti-Fraud, Anti-Money Laundering (AML), Customer 360, Enterprise Graph analytics, and more. These starter kits include graph schemas, sample data, preloaded queries, and a library of customizable graph algorithms (PageRank, Shortest Path, Community Detection, and others). TigerGraph makes it easy for organizations to tailor such algorithms to their own use cases.

Flexibility and elastic pricing
Users pay for exactly the hours they use and are billed monthly. They can spin up a cluster for a few hours at minimal cost, or run larger, mission-critical workloads with predictable pricing. The new cloud offering will also be available for production on AWS, with other cloud availability forthcoming.
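The algorithms shipped in the starter kits are standard graph-analytics building blocks. As a flavor of what PageRank, the first algorithm named above, actually computes, here is a minimal, library-free sketch. It is purely illustrative: TigerGraph delivers these algorithms as GSQL queries, not Python.

```python
def pagerank(edges, damping=0.85, iterations=50):
    """Minimal PageRank over a directed edge list [(src, dst), ...]."""
    nodes = {n for edge in edges for n in edge}
    out_degree = {n: 0 for n in nodes}
    incoming = {n: [] for n in nodes}
    for src, dst in edges:
        out_degree[src] += 1
        incoming[dst].append(src)

    # Start uniform, then repeatedly redistribute rank along edges.
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        rank = {
            n: (1 - damping) / len(nodes)
               + damping * sum(rank[m] / out_degree[m] for m in incoming[n])
            for n in nodes
        }
    return rank

# Tiny example graph: both "b" and "c" link to "a", so "a" ranks highest.
ranks = pagerank([("b", "a"), ("c", "a"), ("a", "b")])
print(max(ranks, key=ranks.get))  # a
```

In a starter kit, the same idea runs as a distributed GSQL query over the installed graph schema rather than over an in-memory edge list.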
Yu Xu, founder and CEO of TigerGraph, said, “TigerGraph Cloud addresses these needs, and enables anyone and everyone to take advantage of scalable graph analytics without cloud vendor lock-in. Organizations can tap into graph analytics to power explainable AI - AI whose actions can be easily understood by humans - a must-have in regulated industries. TigerGraph Cloud further provides users with access to our robust graph algorithm library to support PageRank, Community Detection and other queries for massive business advantage.”

Philip Howard, research director at Bloor Research, said, “What is interesting about TigerGraph Cloud is not just that it provides scalable graph analytics, but that it does so without cloud vendor lock-in, enabling companies to start immediately on their graph analytics journey."

According to TigerGraph, “Compared to TigerGraph Cloud, other graph cloud solutions are up to 116x slower on two hop queries, while TigerGraph Cloud uses up to 9x less storage. This translates into direct savings for you.”

TigerGraph also announces new marquee customers

TigerGraph also announced the addition of new customers, including Intuit, Zillow, and PingAn Technology, among other leading enterprises in cybersecurity, pharmaceuticals, and banking.

To know more about TigerGraph Cloud in detail, visit its official website.

MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name ‘Open Infrastructure Summit’