
Tech News - Cloud Computing

175 Articles

Zeit releases Serverless Docker in beta

Richard Gall
15 Aug 2018
3 min read
Zeit, the organization behind the cloud deployment software Now, yesterday launched Serverless Docker in beta. The concept was first discussed by the Zeit team at Zeit Day 2018 back in April, but it's now available to use and promises to radically speed up deployments for engineers. In a post published on the Zeit website yesterday, the team listed some of the key features of this new capability, including:

- An impressive 10x-20x improvement in cold boot performance (in practice, this means cold boots can happen in less than a second)
- A new slot configuration property that defines resource allocation in terms of CPU and memory, allowing you to fit an application within the set of constraints that are most appropriate for it
- Support for HTTP/2.0 and WebSocket connections to deployments, which means you no longer need to rewrite applications as functions

The key point to remember with this release, according to Zeit, is that "Serverless can be a very general computing model. One that does not require new protocols, new APIs and can support every programming language and framework without large rewrites."

Read next: Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1

What's so great about Serverless Docker?

Clearly, speed is one of the most exciting things about Serverless Docker. But there's more to it than that - it also offers a great developer experience. Johannes Schickling, co-founder and CEO of Prisma (a GraphQL data abstraction layer), said that, with Serverless Docker, Zeit "is making compute more accessible. Serverless Docker is exactly the abstraction I want for applications."

https://twitter.com/schickling/status/1029372602178039810

Others on Twitter were also complimentary about Serverless Docker's developer experience - one person compared it favourably with AWS: "their developer experience just makes me SO MAD at AWS in comparison."

https://twitter.com/simonw/status/1029452011236777985

Combining serverless and containers

One of the reasons people are excited about Zeit's release is that it provides the next step in serverless. But it also brings containers into the picture too. Much of the conversation around software infrastructure over the last year or so has treated serverless and containers as two options to choose from, rather than two things that can be used together. It's worth remembering that Zeit's product has largely been developed alongside the customers that use Now: "This beta contains the lessons and the experiences of a massively distributed and diverse user base, that has completed millions of deployments, over the past two years." Eager to demonstrate how Serverless Docker works for a wide range of use cases, Zeit has put together a long list of examples of Serverless Docker in action on GitHub. You can find them here.

Read next:
A serverless online store on AWS could save you money. Build one.
Serverless computing wars: AWS Lambdas vs Azure Functions
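The appeal of Serverless Docker is that an ordinary containerized HTTP server needs no rewriting into function-as-a-service handlers. As a rough illustration (this is not Zeit's API, just the kind of plain HTTP app the article says can now be deployed unchanged), here is a minimal Python server of that sort:

```python
# A plain HTTP app with no function-as-a-service signature.
# Illustrative only: an ordinary containerized server like this is the
# kind of workload Zeit says Serverless Docker can run without rewrites.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a plain container app\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep request logging quiet for this sketch.
        pass

def serve(port=0):
    """Bind the server (port 0 picks an ephemeral port) and return it."""
    return HTTPServer(("127.0.0.1", port), Handler)
```

Dropped into a Dockerfile, an app like this keeps its long-lived connections (including WebSockets), which is exactly what rewriting into per-request functions would lose.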

Microsoft Azure’s new governance DApp: An enterprise blockchain without mining

Prasad Ramesh
09 Aug 2018
2 min read
Microsoft Azure has just released a Blockchain-as-a-Service product based on Ethereum, with a set of templates to deploy and configure the blockchain network of your choice. This can be done with minimal Azure and blockchain knowledge. Conventional public blockchains are based on Proof-of-Work (PoW) and require mining, as the parties do not trust each other. An enterprise blockchain does not require PoW; it can instead be based on Proof-of-Authority (PoA), where approved identities, or validators, validate the transactions on the blockchain. The PoA product features a decentralized application (DApp) called the Governance DApp.

Blockchains in this new model can be deployed in 5-45 minutes depending on the size and complexity of the network. The PoA network comes with security features such as an identity leasing system that ensures no two nodes carry the same identity. There are also other features aimed at good performance:

Web assembly smart contracts: Solidity is cited as one of the pain areas when developing smart contracts on Ethereum. This feature allows developers to use familiar languages such as C, C++, and Rust.
Azure Monitor: Used to track node and network statistics. Developers can view the underlying blockchain to track statistics, while network admins can detect and prevent network outages.
Extensible governance: With this feature, customers can participate in a consortium without managing the network infrastructure. Management can optionally be delegated to an operator of their choosing.
Governance DApp: Provides decentralized governance in which network authority changes are administered via on-chain voting by select administrators. It also contains validator delegation, so authorities can manage the validator nodes that are set up in each PoA deployment. Users can audit the change history; each change is recorded, providing transparency and auditability.

Source: Microsoft Blog

Along with these features, the Governance DApp will also ensure each consortium member has control over their own keys. This enables secure signing on a wallet chosen by the user. The blog mentions: "In the case of a VM or regional outage, new nodes can quickly spin up and resume the previous nodes' identities." To know more, visit the official Microsoft Blog.

Read next:
Automate tasks using Azure PowerShell and Azure CLI [Tutorial]
Microsoft announces general availability of Azure SQL Data Sync
Microsoft supercharges its Azure AI platform with new features
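The PoW-versus-PoA distinction boils down to how a block gets accepted: by solving a mining puzzle, or by checking that the sealer is an approved identity. A conceptual toy sketch of the PoA side (not Microsoft's implementation; the class and identities here are invented for illustration):

```python
# Toy illustration of Proof-of-Authority: blocks are accepted based on the
# sealer's approved identity, with no mining puzzle. This is a conceptual
# sketch, not Microsoft's implementation.

class PoAChain:
    def __init__(self, validators):
        self.validators = set(validators)  # approved on-chain identities
        self.blocks = []

    def add_validator(self, identity):
        # In Azure's Governance DApp, authority changes like this are
        # administered via on-chain voting by select administrators.
        self.validators.add(identity)

    def seal_block(self, sealer, transactions):
        # Reject blocks from anyone who is not an approved validator.
        if sealer not in self.validators:
            raise PermissionError("%s is not an approved validator" % sealer)
        self.blocks.append({"sealer": sealer, "txs": list(transactions)})
        return len(self.blocks) - 1
```

Because acceptance is an identity check rather than a computation race, block production is cheap and fast, which is why PoA deployments can spin up in minutes rather than requiring mining hardware.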

Google’s Cloud Healthcare API is now available in beta

Amrata Joshi
09 Apr 2019
3 min read
Last week, Google announced that its Cloud Healthcare API is now available in beta. The API acts as a bridge between on-site healthcare systems and applications that are hosted on Google Cloud. The API is HIPAA compliant, ecosystem-ready, and developer-friendly. Google's aim is to give hospitals and other healthcare facilities more analytical power with the help of the Cloud Healthcare API.

The official post reads, "From the beginning, our primary goal with Cloud Healthcare API has been to advance data interoperability by breaking down the data silos that exist within care systems. The API enables healthcare organizations to ingest and manage key data and better understand that data through the application of analytics and machine learning in real time, at scale."

This API offers a managed solution for storing and accessing healthcare data in Google Cloud Platform (GCP). With its help, users can explore new capabilities for data analysis, machine learning, and application development for healthcare solutions. The Cloud Healthcare API also simplifies app development and device integration to speed up the process. It supports the standards-based data formats and protocols of existing healthcare tech. For instance, it allows healthcare organizations to stream data processing with Cloud Dataflow, analyze data at scale with BigQuery, and tap into machine learning with the Cloud Machine Learning Engine.

Features of Cloud Healthcare API

Compliant and certified: The API is HIPAA compliant and HITRUST CSF certified. Google is also planning ISO 27001, ISO 27017, and ISO 27018 certifications for the Cloud Healthcare API.
Explore your data: The API allows users to explore their healthcare data by incorporating advanced analytics and machine learning solutions such as BigQuery, Cloud AutoML, and Cloud ML Engine.
Managed scalability: The Cloud Healthcare API provides web-native, serverless scaling optimized by Google's infrastructure. Users can simply activate the API and send requests, as no initial capacity configuration is required.
Apigee integration: The API integrates with Apigee, recognized by Gartner as a leader in full lifecycle API management, for delivering app and service ecosystems around user data.
Developer-friendly: The API organizes users' healthcare information into datasets, with one or more modality-specific stores per dataset, where each store exposes both a REST and an RPC interface.
Enhanced data liquidity: The API also supports bulk import and export of FHIR data and DICOM data, which accelerates delivery for applications with dependencies on existing datasets. It further provides a convenient API for moving data between projects.

The official post reads, "While our product and engineering teams are focused on building products to solve challenges across the healthcare and life sciences industries, our core mission embraces close collaboration with our partners and customers." Google will highlight what its partners, including the American Cancer Society, CareCloud, Kaiser Permanente, and iDigital, are doing with the API at the ongoing Google Cloud Next. To know more about this news, check out Google's official announcement.

Read next:
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google dissolves its Advanced Technology External Advisory Council a week after repeat criticism on selection of members
Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council
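The "datasets with modality-specific stores" organization maps directly onto nested REST resource names. A small sketch of how those names compose (the project, location, and store IDs below are made-up placeholders; the nesting pattern follows the API's dataset/store hierarchy described above):

```python
# Sketch of the Cloud Healthcare API's nested resource naming: a dataset
# belongs to a project and location, and each modality-specific store
# (FHIR, DICOM, HL7v2) lives under a dataset. IDs here are placeholders.

BASE = "https://healthcare.googleapis.com/v1beta1"

def dataset_name(project, location, dataset):
    return "projects/%s/locations/%s/datasets/%s" % (project, location, dataset)

def store_name(project, location, dataset, modality, store):
    # modality is the store collection, e.g. "fhirStores" or "dicomStores"
    return "%s/%s/%s" % (dataset_name(project, location, dataset), modality, store)

def store_url(project, location, dataset, modality, store):
    # Full REST URL for a store resource.
    return "%s/%s" % (BASE, store_name(project, location, dataset, modality, store))
```

In practice these names are what you pass to the REST or RPC interface when ingesting or exporting FHIR and DICOM data.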

VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!

Fatema Patrawala
30 Aug 2019
7 min read
VMware kicked off VMworld 2019 US in San Francisco last week on 25th August, and the event ended yesterday with a series of updates spanning Kubernetes, Azure, security and more. This year's theme was "Make Your Mark", aimed at empowering VMworld 2019 attendees to learn, connect and innovate in the world of IT and business. 20,000 attendees from more than 100 countries descended on San Francisco for VMworld 2019.

VMware CEO Pat Gelsinger took the stage and articulated VMware's commitment and support for TechSoup, a one-stop IT shop for global nonprofits. Gelsinger also put emphasis on the company's 'any cloud, any application, any device, with intrinsic security' strategy. "VMware is committed to providing software solutions to enable customers to build, run, manage, connect and protect any app, on any cloud and any device," said Pat Gelsinger, chief executive officer, VMware. "We are passionate about our ability to drive positive global impact across our people, products and the planet."

Let us take a look at the key highlights of the show.

VMworld 2019: the CEO's take on shaping tech as a force for good

The opening keynote from Pat Gelsinger had everything one would expect: customer success stories, product announcements and the need for an ethical fix in tech. "As technologists, we can't afford to think of technology as someone else's problem," Gelsinger told attendees, adding "VMware puts tremendous energy into shaping tech as a force for good." Gelsinger cited three benefits of technology that ended up opening Pandora's box: free apps and services led to severely altered privacy expectations; ubiquitous online communities led to a crisis of misinformation; and the promise of blockchain has led to illicit uses of cryptocurrencies. "Bitcoin today is not okay, but the underlying technology is extremely powerful," said Gelsinger, who has previously gone on record regarding the detrimental environmental impact of crypto.
This prism of engineering for good, alongside good engineering, can be seen in how emerging technologies are being utilised. With edge, AI and 5G, and cloud as the "foundation... we're about to redefine the application experience," as the VMware CEO put it.

Read also: VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision

Gelsinger's 2018 keynote was about the theme of tech 'superpowers': cloud, mobile, AI, and edge. This time, more focus was given to how the edge was developing. Whether it is a thin edge, containing a few devices and an SD-WAN connection, a thick edge of a remote data centre with NFV, or something in between, VMware aims to have it all covered. "Telcos will play a bigger role in the cloud universe than ever before," said Gelsinger, referring to the rise of 5G. "The shift from hardware to software [in telco] is a great opportunity for US industry to step in and play a great role in the development of 5G."

VMworld 2019 introduces Tanzu to build, run and manage software on Kubernetes

VMware is moving away from virtual machines to containerized applications. On the product side, VMware Tanzu was introduced: a new product portfolio that aims to enable enterprise-class building, running, and management of software on Kubernetes. In Swahili, 'tanzu' means the growing branch of a tree; in Japanese, 'tansu' refers to a modular form of cabinetry. For VMware, Tanzu is their growing portfolio of solutions that help build, run and manage modern apps. Included in this is Project Pacific, a tech preview focused on transforming VMware vSphere into a Kubernetes native platform. "With Project Pacific, we're bringing the largest infrastructure community, the largest set of operators, the largest set of customers directly to the Kubernetes. We will be the leading enabler of Kubernetes," Gelsinger said.
Read also: VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Other product launches included an update to the collaboration program Workspace ONE, including an AI-powered virtual assistant, as well as the launch of CloudHealth Hybrid by VMware. The latter, built on the cloud cost management tool CloudHealth, aims to help organisations save costs across an entire multi-cloud landscape and will be available by the end of Q3.

Collaborating, not competing, with the major cloud providers - Google Cloud, AWS & Microsoft Azure

VMware's extended partnership with Google Cloud, announced earlier this month, led the industry to consider the company's positioning amid the hyperscalers. VMware Cloud on AWS continues to gain traction - Gelsinger said Outposts, the hybrid tool announced at re:Invent last year, is being delivered upon - and the company also has partnerships in place with IBM and Alibaba Cloud. Further, VMware in Microsoft Azure is now generally available, with the facility to gradually switch across Azure data centres. By the first quarter of 2020, the plan is to make it available across nine global areas.

Read also: Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users

The company's decision not to compete, but to collaborate with the biggest public clouds has paid off. Gelsinger also admitted that the company may have contributed to some confusion over what hybrid cloud and multi-cloud truly meant, but his explanation was interesting. With organisations increasingly opting for different clouds for different workloads, and environments constantly changing, Gelsinger described a frequent pain point for customers nearer the start of their journeys: do they migrate their applications, or do they modernise? Increasingly, customers want both - the hybrid option. "We believe we have a unique opportunity for both of these," he said.
"Moving to the hybrid cloud enables live migration, no downtime, no refactoring... this is the path to deliver cloud migration and cloud modernisation." As far as multi-cloud was concerned, Gelsinger argued: "We believe technologists who master the multi-cloud generation will own it for the next decade."

Collaboration with NVIDIA to accelerate GPU services on AWS

NVIDIA and VMware announced their intent to deliver accelerated GPU services for VMware Cloud on AWS to power modern enterprise applications, including AI, machine learning and data analytics workflows. These services will enable customers to seamlessly migrate VMware vSphere-based applications and containers to the cloud, unchanged, where they can be modernized to take advantage of high-performance computing, machine learning, data analytics and video processing applications. Through this partnership, VMware Cloud on AWS customers will gain access to a new, highly scalable and secure cloud service consisting of Amazon EC2 bare metal instances accelerated by NVIDIA T4 GPUs and the new NVIDIA Virtual Compute Server (vComputeServer) software.

"From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line," said Jensen Huang, founder and CEO, NVIDIA. "Together with VMware, we're designing the most advanced GPU infrastructure to foster innovation across the enterprise, from virtualization, to hybrid cloud, to VMware's new Bitfusion data center disaggregation."

Read also: NVIDIA's latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale

Apart from this, Gelsinger made special mention of VMware's most recent acquisitions, Pivotal and Carbon Black, and discussed where they fit in the VMware stack.
VMware's hybrid cloud platform for next-gen hybrid IT

VMware introduced new and expanded cloud offerings to help customers meet the unique needs of traditional and modern applications. VMware empowers IT operators, developers, desktop administrators, and security professionals with the company's hybrid cloud platform to build, run, and manage workloads on a consistent infrastructure across their data center, public cloud, or edge infrastructure of choice. VMware uniquely enables a consistent hybrid cloud platform spanning all major public clouds - AWS, Azure, Google Cloud, IBM Cloud - and more than 60 VMware Cloud Verified partners worldwide. More than 70 million workloads run on VMware. Of these, 10 million are in the cloud, running in more than 10,000 data centers operated by VMware Cloud providers.

Take a look at the full list of VMworld 2019 announcements here.

What's new in cloud and virtualization this week?
VMware signs definitive agreement to acquire Pivotal Software and Carbon Black
Pivotal open sources kpack, a Kubernetes-native image build service
Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal

Microsoft releases Open Service Broker for Azure (OSBA) version 1.0

Savia Lobo
29 Jun 2018
2 min read
Microsoft released version 1.0 of Open Service Broker for Azure (OSBA), along with full support for Azure SQL, Azure Database for MySQL, and Azure Database for PostgreSQL. Microsoft announced the preview of Open Service Broker for Azure at KubeCon 2017. OSBA is the simplest way to connect apps running in cloud-native environments (such as Kubernetes, Cloud Foundry, and OpenShift) to the rich suite of managed services available on Azure. OSBA 1.0 lets customers connect mission-critical applications to Azure's enterprise-grade backing services. It is also ideal for running in a containerized environment like Kubernetes.

In a recent announcement of a strategic partnership between Microsoft and Red Hat to provide an OpenShift service on Azure, Microsoft demonstrated the use of OSBA using an OpenShift project template. OSBA will enable customers to deploy Azure services directly from the OpenShift console and connect them to their containerized applications running on OpenShift. Microsoft also plans to collaborate with Bitnami to bring OSBA into KubeApps, so customers can deploy solutions like WordPress built on Azure Database for MySQL and Artifactory on Azure Database for PostgreSQL.

Microsoft plans three additional focus areas for OSBA and the Kubernetes service catalog:

- Expanding the set of Azure services available in OSBA by re-enabling services such as Azure Cosmos DB and Azure Redis. These services will progress to a stable state as Microsoft learns how customers intend to use them.
- Continuing to work with the Kubernetes community to align the capabilities of the service catalog with the behavior that customers expect. With this, the cluster operator will have the ability to choose which classes/plans are available to developers.
- Lastly, pursuing a longer-term vision for the Kubernetes service catalog and the Open Service Broker API: enabling developers to describe general requirements for a service, such as "a MySQL database of version 5.7 or higher".
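Brokers like OSBA implement the Open Service Broker API, under which provisioning a service instance is a PUT to `/v2/service_instances/{instance_id}` with a JSON body naming the service and plan. A minimal sketch of that request shape (the instance, service, and plan IDs below are illustrative placeholders, not real Azure GUIDs):

```python
# Sketch of an Open Service Broker API provision request, the call a
# platform like Kubernetes' service catalog makes to a broker such as
# OSBA. No network call is made; this just builds the request.

def provision_request(instance_id, service_id, plan_id, parameters=None):
    path = "/v2/service_instances/%s" % instance_id
    body = {
        "service_id": service_id,   # e.g. an Azure MySQL service identifier
        "plan_id": plan_id,         # e.g. a basic-tier plan identifier
        "parameters": parameters or {},  # broker-specific settings
    }
    return "PUT", path, body

method, path, body = provision_request(
    "mysql-1", "azure-mysql", "basic", {"location": "eastus"})
```

The service catalog turns a Kubernetes `ServiceInstance` object into exactly this kind of call, which is how "deploy Azure services from the OpenShift console" works under the hood.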
Read the full coverage on Microsoft's official blog post.

Read next:
GitLab is moving from Azure to Google Cloud in July
Microsoft announces general availability of Azure SQL Data Sync
Build an IoT application with Azure IoT [Tutorial]

AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer

Natasha Mathur
30 Jul 2018
2 min read
Last week, AWS announced support for two new actions, redirect and fixed-response, for Elastic Load Balancing in the Application Load Balancer.

Elastic Load Balancing automatically distributes incoming application traffic across targets such as Amazon EC2 instances, IP addresses, and containers. One of the load balancer types that Elastic Load Balancing offers is the Application Load Balancer. The Application Load Balancer simplifies and improves the security of your application, as it uses only the latest SSL/TLS ciphers and protocols. It is best suited for load balancing HTTP and HTTPS traffic, and it operates at the request level (layer 7). Redirect and fixed-response support simplifies the deployment process while leveraging the scale, availability, and reliability of Elastic Load Balancing.

Let's discuss how these latest features work.

The new redirect action enables the load balancer to redirect incoming requests from one URL to another. This includes redirecting HTTP requests to HTTPS, which allows more secure browsing, a better search ranking, and a higher SSL/TLS score for your site. Redirects can also send users from an old version of an application to a new version.

The fixed-response action helps control which client requests are served by your applications. It lets you respond to incoming requests with HTTP error response codes, as well as custom error messages, directly from the load balancer; there is no need to forward the request to the application. Using both the redirect and fixed-response actions in your Application Load Balancer can considerably improve the customer experience and the security of your user requests.

Redirect and fixed-response actions are now available for your Application Load Balancer in all AWS regions. For more details, check out the Elastic Load Balancing documentation page.
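Concretely, each new action is expressed as a rule action on a listener. The helpers below build action dictionaries shaped like the ones boto3's `elbv2` `create_rule` accepts; no AWS call is made here, and the defaults (301 redirect, 503 maintenance page) are just illustrative choices:

```python
# Build ALB rule actions for the two new action types. The #{host},
# #{path}, and #{query} placeholders reuse components of the original
# request, so an HTTP->HTTPS redirect preserves the URL.

def https_redirect_action():
    # Redirect HTTP requests to HTTPS with a permanent (301) redirect.
    return {
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "Host": "#{host}",
            "Path": "/#{path}",
            "Query": "#{query}",
            "StatusCode": "HTTP_301",
        },
    }

def fixed_response_action(status_code="503", body="Service under maintenance"):
    # Answer directly from the load balancer, without forwarding the
    # request to any target.
    return {
        "Type": "fixed-response",
        "FixedResponseConfig": {
            "StatusCode": status_code,
            "ContentType": "text/plain",
            "MessageBody": body,
        },
    }
```

In use, you would pass one of these as the `Actions` list of `client("elbv2").create_rule(...)` together with a `Conditions` list (for example, a path pattern) that selects which requests the action applies to.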
Read next:
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
Build an IoT application with AWS IoT [Tutorial]

Microsoft Azure now supports NVIDIA GPU Cloud (NGC)

Vijin Boricha
31 Aug 2018
2 min read
Yesterday, Microsoft announced NVIDIA GPU Cloud (NGC) support on its Azure platform. Following this, data scientists, researchers, and developers can build, test, and deploy GPU computing projects on Azure. With this availability, users can run containers from NGC on Azure, giving them access to on-demand GPU computing that can scale as per their requirements. This eliminates the complexity of software integration and testing.

The need for NVIDIA GPU Cloud (NGC)

It is challenging and time-consuming to build and test reliable software stacks to run popular deep learning software such as TensorFlow, Microsoft Cognitive Toolkit, PyTorch, and NVIDIA TensorRT, because of operating-system-level and framework dependencies that are frequently updated. Finding, installing, and testing the correct dependencies is quite a hassle, as it has to be done in a multi-tenant environment and across many systems. NGC eliminates these complexities by offering pre-configured containers with GPU-accelerated software.

Users can now access 35 GPU-accelerated containers for deep learning software, high-performance computing applications, high-performance visualization tools, and much more, enabled to run on the following Microsoft Azure instance types with NVIDIA GPUs:

NCv3 (1, 2 or 4 NVIDIA Tesla V100 GPUs)
NCv2 (1, 2 or 4 NVIDIA Tesla P100 GPUs)
ND (1, 2 or 4 NVIDIA Tesla P40 GPUs)

According to NVIDIA, the same NGC containers will also work on other Azure instance types, with different types or quantities of GPUs.

Using NGC containers with Azure is quite easy. Users just have to sign up for a free NGC account before starting, then visit the Microsoft Azure Marketplace to find the pre-configured NVIDIA GPU Cloud Image for Deep Learning and high-performance computing. Once you launch the NVIDIA GPU instance on Azure, you can pull the containers you want from the NGC registry into your running instance.
You can find detailed steps for setting up NGC in the Using NGC with Microsoft Azure documentation.

Read next:
Microsoft Azure's new governance DApp: An enterprise blockchain without mining
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499

GitHub’s new integration for Jira Software Cloud aims to provide teams a seamless project management experience

Bhagyashree R
08 Oct 2018
2 min read
Last week, GitHub announced that they have built a new integration to enable software teams to connect their code on GitHub.com to their projects on Jira Software Cloud. This integration updates Jira with data from GitHub, providing better visibility into the current status of your project.

What are the advantages of this new GitHub and Jira integration?

No need to constantly switch between GitHub and Jira: With your GitHub account linked to Jira, your team can see the branches, commit messages, and pull requests in the context of the Jira tickets they're working on. The integration provides a deeper connection by allowing you to view references to Jira in GitHub issues and pull requests.

Source: GitHub

Improved capabilities: This new GitHub-managed app provides improved security, along with the following capabilities:

Smart commits: You can use smart commits to update the status, leave a comment, or log time without having to leave your command line or GitHub
View from within a Jira ticket: You can view associated pull requests, commits, and branches from within a Jira ticket
Searching Jira issues: You can search for Jira issues based on related GitHub information, such as open pull requests
Check the status of development work: The status of development work can be seen from within Jira projects
Keep Jira issues up to date: You can automatically keep your Jira issues up to date while working in GitHub

Install the Jira Software and GitHub app to connect your GitHub repositories to your Jira instance. The previous version of the Jira integration will be deprecated in favor of this new GitHub-maintained integration. Once the migration is complete, the legacy integration (DVCS connector) is disabled automatically. Read the full announcement at the GitHub blog.

Read next:
4 myths about Git and GitHub you should know about
GitHub addresses technical debt, now runs on Rails 5.2.1
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
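Smart commits work by embedding commands in the commit message itself, for example `PROJ-123 #time 2h 30m #comment fixed the login redirect`. A rough sketch of parsing that format (illustrative only; Jira's own parser handles more rules, and the issue key here is a made-up example):

```python
import re

# Rough parser for Jira smart-commit messages of the form
#   "PROJ-123 #time 2h 30m #comment fixed the login redirect"
# Issue keys look like an uppercase project key, a dash, and a number.

ISSUE_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def parse_smart_commit(message):
    issues = ISSUE_RE.findall(message)
    commands = {}
    # Split on '#command' markers; the text after each marker is its argument.
    parts = re.split(r"#(\w+)", message)
    # parts = [text-before, cmd1, arg1, cmd2, arg2, ...]
    for cmd, arg in zip(parts[1::2], parts[2::2]):
        commands[cmd] = arg.strip()
    return issues, commands
```

This is essentially what lets a single `git commit` update the referenced ticket's work log and comments without leaving the command line.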

Cloudflare’s Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly

Melisha Dsouza
12 Nov 2018
5 min read
Cloudflare's cloud computing platform Workers doesn't use containers or virtual machines to deploy computing. Workers allows users to build serverless applications on Cloudflare's data centers. It provides a lightweight JavaScript execution environment to augment existing applications or create entirely new ones without having to configure or maintain infrastructure.

Why did Cloudflare create Workers?

Previously, Cloudflare provided only a limited set of features, with little flexibility for customers to build features themselves. To let users write code that runs on its servers deployed around the world, Cloudflare had to allow untrusted code to run with low overhead, while processing millions of requests per second at very low latency. Traditional virtualization and container technologies like Kubernetes would be expensive here; running thousands of Kubernetes pods across Cloudflare's 155 data centers would be resource-intensive. Enter Cloudflare's Workers to solve these issues.

Features of Workers

#1 'Isolates' - Run code from multiple customers

Isolates are a technology built by the Google Chrome team to power V8, the JavaScript engine in that browser. They are lightweight contexts that group variables with the code allowed to mutate them. A single process can run hundreds or thousands of Isolates while easily switching between them. Thus, Isolates make it possible to run untrusted code from different customers within a single operating system process. They start very quickly (any given Isolate can start around a hundred times faster than a Node process on a machine) and do not allow one Isolate to access the memory of another.

#2 Cold starts

Workers rethink the 'cold start' that happens when a new copy of code has to be started on a machine. In the Lambda world, a cold start means spinning up a new containerized process, which can delay requests for as much as ten seconds, ending up in a terrible user experience. A Lambda can only process one single request at a time, so a new Lambda has to be cold-started every time an additional concurrent request is received; and if a Lambda doesn't get a request soon enough, it is shut down and it all starts again. Since Workers don't have to start a process, Isolates start in 5 milliseconds. Workers scale and deploy quickly, substantially improving on existing serverless technologies.

#3 Context switching

A normal context switch performed by an OS can take as much as 100 microseconds. Multiplied across all the Node, Python, or Go processes running on an average Lambda server, this leads to heavy overhead, splitting the CPU's power between running the customer's code and switching between processes. An Isolate-based system runs all of the code in a single process, which means there are no expensive context switches; the machine can invest virtually all of its time running your code.

#4 Memory

V8 was designed to be multi-tenant: it runs the code from the many tabs in a user's browser in isolated environments within a single process. Since memory is often the highest cost of running a customer's code, V8 lowers it and dramatically changes the cost economics.

#5 Security

It is not safe to run code from multiple customers within the same process without a great deal of care. Testing, fuzzing, penetration testing, and bounties are required to build a truly secure system of that complexity. The open-source nature of V8 helps Cloudflare create the isolation layer it needs to take care of the security aspect.

Cloudflare's Workers also allows users to build responses from multiple background service requests, whether to the Cloudflare cache, the application origin, or third-party APIs. Users can build conditional responses for inbound requests to assess and subsequently block or reroute malicious or unauthorized requests.
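The core idea of isolates, many tenants' code sharing one process while each keeps a private namespace, can be mimicked in miniature. The toy below is only an analogue (Python's `exec` with a restricted namespace is not a real security boundary, and real isolates also enforce memory and CPU limits), but it shows why no per-tenant process, and hence no OS context switch, is needed:

```python
# Toy analogue of V8 isolates: multiple tenants' code runs inside ONE
# process, each with its own namespace, so no per-tenant process or
# container is spun up. NOT actually secure, unlike V8's isolation --
# purely an illustration of the multi-tenant, single-process model.

def make_isolate():
    # Each "isolate" is a private global namespace with a tiny allowlist
    # of builtins the tenant code may use.
    return {"__builtins__": {"len": len, "range": range}}

def run_in_isolate(isolate, code):
    # Execute tenant code against its own namespace only.
    exec(code, isolate)

tenant_a = make_isolate()
tenant_b = make_isolate()
run_in_isolate(tenant_a, "secret = 'a-token'")
run_in_isolate(tenant_b, "items = len(range(10))")
```

Tenant B's namespace never sees tenant A's `secret`, yet switching between them is just a Python function call rather than a 100-microsecond OS context switch, which is the economics the article describes.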
All of this at just a third of what AWS costs, remarked an astute Twitter observer. https://twitter.com/seldo/status/1061461318765555713

Running code through WebAssembly

One of the disadvantages of using Workers is that, since it is an Isolate-based system, it cannot run arbitrary compiled code. Users have to either write their code in JavaScript, or in a language which targets WebAssembly (e.g. Go or Rust). Also, if users cannot recompile their processes, they won't be able to run them in an Isolate. This is nicely summarised in the above-mentioned tweet, which notes that WebAssembly modules are already in the npm registry, creating the potential for npm to become the dependency management solution for every programming language. He mentions that the "availability of open source libraries to achieve the task at hand is the primary reason people pick a programming language". This leads us to the question: how does software development change when you can use any library anytime? You can head over to the Cloudflare blog to understand more about containerless cloud computing.

Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers
Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Microsoft commits $5 billion to IoT projects

Richard Gall
06 Apr 2018
2 min read
Microsoft has announced that it will pour $5 billion into IoT over the next 4 years. To date, Microsoft has spent $1.5 billion, so this move could be viewed as a step change in the organization's commitment to IoT. This makes sense for Microsoft. The company has fallen behind in the consumer technology race and appears to be moving towards cloud and infrastructure projects instead. Azure has given it a strong position, but with AWS setting the pace in the cloud field, Microsoft needs to move quickly if it is to position itself as the frontrunner in the future of IoT. Julia White, CVP of Azure, said: "With our IoT platform spanning cloud, OS and devices, we are uniquely positioned to simplify the IoT journey so any customer—regardless of size, technical expertise, budget, industry or other factors—can create trusted, connected solutions that improve business and customer experiences, as well as the daily lives of people all over the world. The investment we’re announcing today will ensure we continue to meet all our customers’ needs both now and in the future." The timing of this huge investment has not gone unnoticed. At the end of March, Microsoft revealed that it was reorganizing to place greater strategic attention on the 'intelligent cloud and intelligent edge'. It's no coincidence that the senior member set to leave is Terry Myerson, the man who has been leading the Windows side of the business since 2013. However, the extent to which this announcement from Microsoft is really that much of a pivot is questionable. In The Register, Simon Sharwood writes: "Five billion bucks is a lot of money. But not quite so impressive once you realise that Microsoft spent $13.0bn on R&D in FY 2017 and $12bn in each of FY 16 and 15. Five billion spread across the next four years may well be less than ten per cent of all R&D spend."
Much of the analysis in the tech media frames this as confirmation of what many have suspected: Microsoft is managing Windows' decline in favour of a move into the cloud and infrastructure space. It's pretty hard to see past that, but it will be interesting to see how Microsoft continues to respond to competition from the likes of Amazon.
Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records

Bhagyashree R
04 Feb 2019
2 min read
On January 29, Microsoft Cloud services including Microsoft Azure, Office 365, and Dynamics 365 suffered a major outage. Customers experienced intermittent access to Office 365, and several database records were deleted. This comes just after a major outage that prevented Microsoft 365 users in Europe from accessing their emails for an entire day. https://twitter.com/AzureSupport/status/1090359445241061376 Users who were already logged into Microsoft services weren't affected; however, those trying to start new sessions were unable to log in.

How did this Microsoft Azure outage happen?

According to Microsoft, the preliminary cause of the outage was a DNS issue with CenturyLink, an external DNS provider. Microsoft Azure's status page read, "Engineers identified a DNS issue with an external DNS provider". CenturyLink, in a statement, said that its DNS services experienced disruption due to a software defect, which affected connectivity to a customer's cloud resources. Along with authentication issues, the outage also caused the deletion of users' live data stored in Transparent Data Encryption (TDE) databases in Microsoft Azure. TDE databases encrypt information dynamically and decrypt it when customers access it; because the data is stored in encrypted form, intruders cannot read the database. For encryption, many Azure users store their own encryption keys in Microsoft's Key Vault encryption key management system. The deletion was triggered by a script that automatically drops TDE database tables when their corresponding keys can no longer be accessed in the Key Vault. Microsoft was able to restore the tables from a five-minute snapshot backup, but customers who had processed transactions within five minutes of the table drop were asked to raise a support ticket to request a copy of the dropped database. Read more about Microsoft's Azure outage in detail on ZDNet.
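The failure mode described above can be sketched in a few lines. This toy model is entirely hypothetical (KeyVault, drop_tables_with_lost_keys, and every name here are stand-ins, not Azure APIs); it simply shows how a cleanup script that cannot distinguish "key deleted" from "key vault unreachable" turns a DNS outage into dropped tables:

```python
# Toy model of the TDE failure mode described above. KeyVault and the cleanup
# script are hypothetical stand-ins, not Azure's actual implementation.

class KeyVault:
    def __init__(self, keys):
        self.keys = keys
        self.reachable = True  # a DNS outage flips this to False

    def get_key(self, name):
        if not self.reachable:
            raise ConnectionError("vault unreachable")  # outage, not deletion
        return self.keys[name]  # raises KeyError if the key really is gone

def drop_tables_with_lost_keys(vault, tables):
    """Cleanup script: drop any table whose key can no longer be fetched.
    The bug: it treats 'vault unreachable' the same as 'key deleted'."""
    kept = {}
    for table, key_name in tables.items():
        try:
            vault.get_key(key_name)
            kept[table] = key_name
        except (KeyError, ConnectionError):
            pass  # table dropped
    return kept

vault = KeyVault({"orders-key": b"secret"})
tables = {"orders": "orders-key"}

# Normal operation: the key resolves, nothing is dropped.
assert drop_tables_with_lost_keys(vault, tables) == tables

# DNS outage: the key still exists, but the script drops the table anyway.
vault.reachable = False
surviving = drop_tables_with_lost_keys(vault, tables)
print(surviving)  # {}
```

A safer script would treat connectivity errors as "unknown" and retry, rather than folding them into the deletion path.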
Microsoft announces Internet Explorer 10 will reach end-of-life by January 2020
Outage in the Microsoft 365 and Gmail made users unable to log into their accounts
Microsoft Office 365 now available on the Mac App Store

Go Cloud is Google's bid to establish Golang as the go-to language of cloud

Richard Gall
25 Jul 2018
2 min read
Google's Go is one of the fastest growing programming languages on the planet, and Google is now bidding to make it the go-to language for cloud development. Go Cloud, a new library that features a set of tools to support cloud development, was revealed in a blog post published yesterday. "With this project," the team explains, "we aim to make Go the language of choice for developers building portable cloud applications."

Why Go Cloud now?

Google developed Go Cloud in response to demand for a way of writing simpler applications that aren't so tightly coupled to a single cloud provider. The team did considerable research into the key challenges and use cases in the Go community to arrive at Go Cloud. They found that the increased demand for multi-cloud or hybrid cloud solutions wasn't being fully met by engineering teams, as there is a trade-off between improving portability and shipping updates. Essentially, the need to decouple applications was being pushed back by the day-to-day pressures of delivering new features. With Go Cloud, developers will be able to solve this problem and develop portable cloud solutions that aren't tied to one cloud provider.

What's inside Go Cloud?

Go Cloud is a library that consists of a range of APIs. The team has "identified common services used by cloud applications and have created generic APIs to work across cloud providers." These APIs include:

- Blob storage
- MySQL database access
- Runtime configuration
- An HTTP server configured with request logging, tracing, and health checking

At the moment Go Cloud is compatible with Google Cloud Platform and AWS, but the team plans "to add support for additional cloud providers very soon."

Try Go Cloud for yourself

If you want to see how Go Cloud works, you can try it out for yourself - this tutorial on GitHub is a good place to start. You can also stay up to date with news about the project by joining Google's dedicated mailing list.
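Go Cloud's generic APIs are written in Go, but the portability idea behind them can be sketched in any language. The following Python toy (all names hypothetical, unrelated to Go Cloud's actual types) shows how application code can target an abstract storage interface, so that switching cloud providers means swapping one constructor rather than rewriting the application:

```python
# Sketch of provider-neutral storage, the idea behind Go Cloud's generic APIs.
# BlobStore and InMemoryStore are hypothetical, not Go Cloud's actual types.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The generic API the application codes against."""
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def read(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in for one provider; an S3- or GCS-backed class would
    implement the same two methods."""
    def __init__(self):
        self._blobs = {}
    def write(self, key, data):
        self._blobs[key] = data
    def read(self, key):
        return self._blobs[key]

def save_report(store: BlobStore, name: str, body: bytes):
    """Application code depends only on the interface, so changing providers
    means changing which BlobStore is constructed, not this function."""
    store.write(f"reports/{name}", body)

store = InMemoryStore()
save_report(store, "q3.txt", b"revenue up")
print(store.read("reports/q3.txt"))  # b'revenue up'
```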
Google Cloud Launches Blockchain Toolkit to help developers build apps easily
Writing test functions in Golang [Tutorial]

Microsoft supercharges its Azure AI platform with new features

Gebin George
14 Jun 2018
2 min read
Microsoft recently announced a few innovations to its AI platform powered by Microsoft Azure. These updates are well aligned to its digital transformation strategy of helping organizations augment their machine learning capabilities for better performance.

Cognitive Search

Cognitive Search is a new feature in the Azure portal which leverages the power of AI to understand content and append the information to Azure Search. It supports different file readers for formats like PDF and Office documents, and enables OCR and cognitive skills such as key phrase extraction, language detection, image analysis, and even facial recognition. The initial search pulls all the data from various sources and then applies these cognitive skills to store the data in an optimized index.

Azure ML SDK for Python

In the Azure Machine Learning ecosystem, this additional SDK lets developers and data scientists execute key AML workflows (model training, model deployment, and scoring) directly from Python, using a single control plane API.

Azure ML Packages

Microsoft now offers Azure ML Packages, a rich set of pip-installable extensions to Azure ML. This streamlines the process of building efficient ML models by building on the deep learning capabilities of the Azure AI platform.

ML.NET

This cross-platform, open source framework is meant for .NET developers and provides enterprise-grade software libraries of the latest innovations in machine learning from platforms that include Bing, Office, and Windows. This service is available in the AI platform for preview.

Project Brainwave

This service is also available on the Azure ML portal for preview. The architecture is essentially built to process deep neural networks, using hardware acceleration to enable fast AI. You can have a look at the Azure AI portal for more details.
New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL
Epicor partners with Microsoft Azure to adopt Cloud ERP
SAP Cloud Platform is now generally available on Microsoft Azure
Netflix releases FlameScope

Richard Gall
06 Apr 2018
2 min read
Netflix has released FlameScope, a visualization tool that allows software engineering teams to monitor performance issues. From application startup to single-threaded execution, FlameScope provides real-time insight into the time-based metrics crucial to software performance. The team at Netflix has made FlameScope open source, encouraging engineers to contribute to the project and help develop it further - we're sure that many development teams could derive a lot of value from the tool, and we're likely to see many customisations as its community grows.

How does FlameScope work?

Watch the video below to learn more about FlameScope. https://youtu.be/cFuI8SAAvJg Essentially, FlameScope allows you to build something a bit like a flame graph, but with an extra dimension. One challenge Netflix identified is that while flame graphs allow you to analyze steady and consistent workloads, "often there are small perturbations or variation during that minute that you want to know about, which become a needle-in-a-haystack search when shown with the full profile". With FlameScope, you get the flame graph, but by using a subsecond-offset heat map you're also able to see the "small perturbations" you might otherwise have missed. As Netflix explains: "You can select an arbitrary continuous time-slice of the captured profile, and visualize it as a flame graph."

Why Netflix built FlameScope

FlameScope was built by the Netflix cloud engineering team, and the motivations behind it are actually pretty interesting. The team had a microservice that was suffering from strange spikes in latency, the cause a mystery. A member of the team found that these spikes, which occurred around every fifteen minutes, appeared to correlate with "an increase in CPU utilization that lasted only a few seconds." CPU flame graphs, of course, didn't help, for the reasons outlined above.
To tackle this, the team effectively sliced a flame graph into smaller chunks. Slicing it down into one-second snapshots was, as you might expect, a pretty arduous task, so by using subsecond heat maps the team was able to create flame graphs on a really small scale. This made it much easier to visualize those variations. The team plans to continue developing the FlameScope project. It will be interesting to see where they decide to take it and how the community responds. To learn more, read the post on the Netflix Tech Blog.
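The subsecond-offset heat map at the heart of this approach is straightforward to sketch: each CPU sample is bucketed by its whole second (column) and its fractional offset within that second (row), so a short burst shows up as a dense cell even when a per-minute view would average it away. A minimal toy version, with invented sample data:

```python
# Toy subsecond-offset heat map, the visualization FlameScope is built around.
# Each sample timestamp lands in a (second, subsecond-offset) cell; a latency
# spike shows up as a dense column even if it lasts well under a second.

def subsecond_heatmap(timestamps, rows=5):
    """Map sample timestamps to a {second: [counts per offset bucket]} grid."""
    grid = {}
    for t in timestamps:
        second = int(t)
        offset_bucket = int((t - second) * rows)  # 0..rows-1
        column = grid.setdefault(second, [0] * rows)
        column[offset_bucket] += 1
    return grid

# Steady load in second 0, plus a burst packed into the start of second 1.
samples = [0.1, 0.3, 0.5, 0.7, 0.9, 1.02, 1.05, 1.08, 1.25]
grid = subsecond_heatmap(samples)
print(grid[0])  # [1, 1, 1, 1, 1] -- evenly spread samples
print(grid[1])  # [3, 1, 0, 0, 0] -- burst concentrated early in the second
```

In FlameScope itself, selecting a range of these cells then renders a flame graph for just that slice of the profile.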
Introducing numpywren, a system for linear algebra built on a serverless architecture

Sugandha Lahoti
29 Oct 2018
3 min read
Last week, researchers from UC Berkeley and UW Madison published a research paper highlighting numpywren, a system for linear algebra built on a serverless framework. numpywren is a scientific computing framework built on top of the serverless execution framework pywren. Pywren is a stateless computation framework that leverages AWS Lambda to execute Python functions remotely in parallel.

What is numpywren?

numpywren is a distributed system for executing large-scale dense linear algebra programs via stateless function executions. It runs computations as stateless functions while storing intermediate state in a distributed object store. Instead of dealing with individual machines, hostnames, and processor grids, numpywren works with the abstractions of "cores" and "memory". numpywren currently uses Amazon EC2 and Lambda services for computation and Amazon S3 as a distributed memory abstraction. numpywren can scale to run Cholesky decomposition (a linear algebra algorithm) on a 1M x 1M matrix within 36% of the completion time of ScaLAPACK running on dedicated instances, and can be tuned to use 33% fewer CPU-hours. The researchers have also introduced LAmbdaPACK, a domain-specific language designed to implement highly parallel linear algebra algorithms in a serverless setting.

Why serverless for numpywren?

Per their research, the serverless computing model can be used for computationally intensive programs while providing ease of use and seamless fault tolerance. The elasticity provided by serverless computing also allows the numpywren system to dynamically adapt to the inherent parallelism of common linear algebra algorithms.

What's next for numpywren?

One of the main drawbacks of the serverless model is the high communication overhead caused by the lack of locality and efficient broadcast primitives. The researchers want to incorporate coarser serverless executions (e.g., 8 cores instead of 1) that process larger portions of the input data.
They also want to develop services that provide efficient collective communication primitives, such as broadcast, to help address this problem. The researchers want modern convex optimization solvers such as CVXOPT to use numpywren to scale to much larger problems, and they are also working on automatically translating numpy code directly into LAmbdaPACK instructions that can be executed in parallel. As data centers continue their push towards disaggregation, the researchers point out that platforms like numpywren open up a fruitful area of research. For further explanation, go through the research paper.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
Azure Functions 2.0 launches with better workload support for serverless
How Serverless computing is making AI development easier
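The execution model described in the paper, stateless functions that exchange all intermediate state through an object store, can be mimicked in a few lines. In this sketch a dict stands in for S3 and a thread pool stands in for Lambda; the names are illustrative and not the real pywren or numpywren API:

```python
# Sketch of numpywren's execution model: stateless workers that communicate
# only through an object store. A dict stands in for S3 and a thread pool for
# Lambda; all names here are illustrative, not the real pywren/numpywren API.

from concurrent.futures import ThreadPoolExecutor

object_store = {}  # stand-in for S3: the only shared state

def square_block(key_in, key_out):
    """A stateless task: read its input from the store, write its output back.
    It keeps nothing in local memory between invocations."""
    block = object_store[key_in]
    object_store[key_out] = [x * x for x in block]

# Partition the input into blocks, as numpywren tiles large matrices.
object_store["in/0"] = [1, 2]
object_store["in/1"] = [3, 4]

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(square_block, f"in/{i}", f"out/{i}")
               for i in range(2)]
    for f in futures:
        f.result()  # propagate any worker exceptions

print(object_store["out/0"], object_store["out/1"])  # [1, 4] [9, 16]
```

Because each task names its inputs and outputs explicitly, any worker can run any task, which is what lets the real system treat Lambda invocations as interchangeable "cores".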