
Tech News - Cloud Computing

175 Articles

Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal

Fatema Patrawala
23 Aug 2019
5 min read
On Tuesday, Reuters reported that Oracle's directors gave the go-ahead for a billion-dollar lawsuit against Larry Ellison and Safra Catz over the 2016 NetSuite deal. This was made possible by several board members who wrote an extraordinary letter to the Delaware court.

According to Reuters, in 2017 shareholders led by the Firemen's Retirement System of St. Louis alleged that Oracle's directors breached their duties when they approved the $9.3 billion acquisition of NetSuite – a company controlled by Oracle chairman Larry Ellison – at a huge premium above NetSuite's trading price. Shareholders alleged that Oracle's directors sanctioned Ellison's self-dealing, and also claimed that the board was too entwined with Ellison to be entrusted with the decision of whether the company should sue him and other directors over the deal. In an opinion published by Reuters in May 2018, Vice Chancellor Sam Glasscock of the Delaware Chancery Court agreed that shareholders had shown it would have been futile for them to demand action from the board itself.

Three years after the $9.3 billion deal closed, three board members, including former U.S. Defense Secretary Leon Panetta, sent a letter on August 15th to Sam Glasscock III, Vice Chancellor of the Court of Chancery in Georgetown, Delaware, approving the lawsuit as members of a special board entity known as the Special Litigation Committee.

In legal parlance, this kind of lawsuit is known as a derivative suit. According to Justia, this type of suit is filed in cases like this one: "Since shareholders are generally allowed to file a lawsuit in the event that a corporation has refused to file one on its own behalf, many derivative suits are brought against a particular officer or director of the corporation for breach of contract or breach of fiduciary duty," the Justia site explains.

The letter went on to say there was an attempt to settle this suit, originally launched in 2017, through negotiation outside of court; when that attempt failed, the directors wrote to the court stating that the suit should be allowed to proceed. As per the letter, the lawsuit could be worth billions: "One of the lead lawyers for the Firemen's fund, Joel Friedlander of Friedlander & Gorris, said at a hearing in June that shareholders believe the breach-of-duty claims against Oracle and NetSuite executives are worth billions of dollars. So in last week's letter, Oracle's board effectively unleashed plaintiffs' lawyers to seek ten-figure damages against its own members."

Oracle struggled with its cloud footing and ended up buying NetSuite

TechCrunch noted that Larry Ellison was involved in setting up NetSuite in the late 1990s and was a major shareholder of NetSuite at the time of the acquisition. Oracle was struggling to find its cloud footing in 2016, and it was believed that by buying an established SaaS player like NetSuite, it could build out its cloud business much faster than by trying to develop something similar internally.

On Hacker News, a few users commented that Oracle's directors overpaid for NetSuite and enriched Larry Ellison. One comment reads: "As you know people, as you learn about things, you realize that these generalizations we have are, virtually to a generalization, false. Well, except for this one, as it turns out. What you think of Oracle, is even truer than you think it is. There has been no entity in human history with less complexity or nuance to it than Oracle. And I gotta say, as someone who has seen that complexity for my entire life, it's very hard to get used to that idea. It's like, 'surely this is more complicated!' but it's like: Wow, this is really simple! This company is very straightforward, in its defense. This company is about one man, his alter-ego, and what he wants to inflict upon humanity -- that's it! ...Ship mediocrity, inflict misery, lie our asses off, screw our customers, and make a whole shitload of money. Yeah... you talk to Oracle, it's like, 'no, we don't fucking make dreams happen -- we make money!' ...You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle."

Oracle does "organizational restructuring" by laying off 100s of employees
IBM, Oracle under the scanner again for questionable hiring and firing policies
The tug of war between Google and Oracle over API copyright issue has the future of software development in the crossfires


Former Google Cloud CEO joins Stripe board just as Stripe joins the global Unicorn Club

Bhagyashree R
31 Jan 2019
2 min read
Stripe, the payments infrastructure company, has received a whopping $100 million in funding from Tiger Global Management, and its valuation now stands at $22.5 billion, as reported by The Information on Tuesday. Last September it secured $245 million in a funding round also led by Tiger Global Management.

Founded in 2010 by the Irish brothers Patrick and John Collison, Stripe has become one of the most valuable "unicorns" – a term used for firms worth more than $1 billion – in the U.S. The company also boasts an impressive list of clients, recently adding Google and Uber to its stable of users. It is now planning to expand its platform by launching a point-of-sale payments terminal package targeted at online retailers making the jump to offline.

A Stripe spokesperson told CNBC, "Stripe is rapidly scaling internationally, as well as extending our platform into issuing, global fraud prevention, and physical stores with Stripe Terminal. The follow-on funding gives us more leverage in these strategic areas."

The company is also expanding its team. On Tuesday, Patrick Collison announced that Diane Greene, a member of Alphabet's board of directors, will be joining Stripe's board of directors. Joining alongside Greene are Michael Moritz, a partner at Sequoia Capital; Michelle Wilson, former general counsel at Amazon; and Jonathan Chadwick, former CFO of VMware, McAfee, and Skype.

https://twitter.com/patrickc/status/1090386301642141696

In addition to Tiger Global Management, the start-up has also been supported by various other investors, including Sequoia Capital, Khosla Ventures, Andreessen Horowitz, and PayPal co-founders Peter Thiel, Max Levchin, and Elon Musk.

For more details, read the full story on The Information's website.

PayPal replaces Flow with TypeScript as their type checker for every new web app
After BitPay, Coinbase bans Gab accounts and its founder, Andrew Torba
Social media platforms, Twitter and Gab.com, accused of facilitating recent domestic terrorism in the U.S.


Introducing ‘Pivotal Function Service’ (alpha): an open, Kubernetes-based, multi-cloud serverless framework for developer workloads

Melisha Dsouza
10 Dec 2018
3 min read
Last week, Pivotal announced the ‘Pivotal Function Service’ (PFS) in alpha. Until now, Pivotal has focused on making open-source tools for enterprise developers but has lacked a serverless component in its suite of offerings. That changes with the launch of PFS.

PFS is designed to work both on-premise and in the cloud in a cloud-native fashion, while being open source. It is a Kubernetes-based, multi-cloud function service offering customers a single platform for all their workloads on any cloud. Developers can deploy and operate databases, batch jobs, web APIs, legacy apps, event-driven functions, and many other workloads the same way everywhere, thanks to the Pivotal Cloud Foundry (PCF) platform, which comprises Pivotal Application Service (PAS), Pivotal Container Service (PKS), and now Pivotal Function Service (PFS).

Providing the same developer and operator experience on every public or private cloud, PFS is event-oriented, with built-in components that make it easy to architect loosely coupled, streaming systems. Its buildpacks simplify packaging, and it is operator-friendly, providing a secure, low-touch experience running atop Kubernetes. The fact that PFS works on any cloud as an open product makes it stand apart from cloud providers like Amazon, Google, and Microsoft, which provide similar services that run exclusively on their own clouds.

Features of PFS

PFS is built on Knative, an open-source project led by Google that simplifies how developers deploy functions atop Kubernetes and Istio. PFS runs on Kubernetes and Istio and helps customers take advantage of the benefits of both technologies while abstracting away their complexity.
PFS allows customers to use familiar, container-based workflows for serverless scenarios.
PFS Event Sources helps customers create feeds from external event sources such as GitHub webhooks, blob stores, and database services.
PFS can be connected easily with popular message brokers such as Kafka, Google Pub/Sub, and RabbitMQ, which provide reliable backing services for messaging channels.
Pivotal has continued to develop the riff invoker model in PFS, to help developers deliver both streaming and non-streaming function code using simple, language-idiomatic interfaces.

The new package includes several key components for developers, including a native eventing capability that provides a way to build rich event triggers to call whatever functionality a developer requires within a Kubernetes-based environment. This is particularly important for companies with hybrid deployments that need to manage events across on-premise and cloud environments in a seamless way.

Head over to Pivotal's official blog to know more about this announcement.

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl


Microsoft introduces ‘Immutable Blob Storage’, a highly protected object storage for Azure

Savia Lobo
06 Jul 2018
2 min read
Microsoft has released a new chamber of secrets of sorts, named ‘Immutable Blob Storage’. This storage service safeguards sensitive data and is built on the Azure platform. It is the latest addition to Microsoft's growing portfolio of industry-specific cloud offerings. The service is aimed primarily at the financial sector but can be utilized by other sectors too, helping them manage the information they own.

Immutable Blob Storage is a specialized version of Azure's existing object storage and includes a number of added security features, including:

The ability to configure an environment such that the records inside it cannot easily be deleted by anyone, not even the administrators who maintain the deployment.
The ability to block edits to existing files. This setting can help banks and other heavily regulated organizations prove the validity of their records during audits.

Immutable Blob Storage costs the same as Azure's regular object service, and the two products are integrated with one another. Immutable Blob Storage can be used for both standard and immutable storage, which means IT no longer needs to manage the complexity of a separate archive storage solution. These features come on top of the ones carried over to Immutable Blob Storage from the standard object service, including a data lifecycle management tool that allows organizations to set policies for managing their data.

Read more about this new feature on Microsoft Azure's blog post.

How to migrate Power BI datasets to Microsoft Analysis Services models [Tutorial]
Microsoft releases Open Service Broker for Azure (OSBA) version 1.0
Microsoft Azure IoT Edge is open source and generally available!
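The retention behavior described above is driven by a per-container immutability policy configured through Azure's management plane. As a minimal Python sketch of what that looks like (not taken from Microsoft's announcement): the subscription, resource group, account, and container names plus the bearer token are placeholders, and the Resource Manager API version shown is an assumption that may differ in your environment.

```python
import requests

# Placeholder identifiers -- substitute your own.
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
ACCOUNT = "mystorageaccount"
CONTAINER = "audit-records"
TOKEN = "<azure-ad-bearer-token>"  # obtained separately, e.g. via Azure AD

# ARM path for a container's immutability policy (a child resource named "default").
url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Storage/storageAccounts/{ACCOUNT}"
    f"/blobServices/default/containers/{CONTAINER}"
    "/immutabilityPolicies/default"
)

# Keep every blob in the container undeletable and unmodifiable for 365 days
# after creation -- not even storage account administrators can override it.
body = {"properties": {"immutabilityPeriodSinceCreationInDays": 365}}

resp = requests.put(
    url,
    params={"api-version": "2018-02-01"},  # assumed API version
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
resp.raise_for_status()
print(resp.json())
```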


Rancher Labs announces ‘K3s’: A lightweight distribution of Kubernetes to manage clusters in edge computing environments

Melisha Dsouza
27 Feb 2019
3 min read
Yesterday, Rancher Labs announced K3s, a lightweight Kubernetes distribution for running Kubernetes in resource-constrained environments. According to the official blog post, the project was launched to "address the increasing demand for small, easy to manage Kubernetes clusters running on x86, ARM64 and ARMv7 processors in edge computing environments". Operating Kubernetes at the edge is a complex task. K3s reduces the memory required to run Kubernetes and gives developers a distribution that needs less than 512 MB of RAM, ideally suited for edge use cases.

Features of K3s

#1 Simplicity of installation
K3s was designed to maximize the simplicity of installing and operating a large-scale Kubernetes cluster. It is a standards-compliant Kubernetes distribution for "mission-critical, production use cases".

#2 Zero host dependencies
There is no requirement for an external installer to install Kubernetes – everything necessary to install it on any device is included in a single 40 MB binary. A single command provisions or upgrades a single-node K3s cluster, and nodes can be added to the cluster by running a single command on the new node, pointing it at the original server and passing through a secure token.

#3 Automatic certificate and encryption key generation
All of the certificates needed to establish TLS between the Kubernetes masters and nodes, as well as the encryption keys for service accounts, are automatically created when a cluster is launched.

#4 Reduced memory footprint
K3s reduces the memory required to run Kubernetes by removing old and non-essential code and any alpha functionality that is disabled by default. It also removes deprecated features, non-default admission controllers, in-tree cloud providers, and storage drivers. Users can add back any drivers they need.

#5 Conservation of RAM
Rancher's K3s combines the processes that run on a Kubernetes management server into a single process. It also combines the kubelet, kube-proxy, and flannel agent processes that run on a worker node into a single process. Both of these techniques help conserve RAM.

#6 Reduced runtime footprint
Rancher Labs was able to cut down the runtime footprint significantly by using containerd instead of Docker as the container runtime engine. Functionality like libnetwork, swarm, Docker storage drivers, and other plugins was also removed to achieve this aim.

#7 SQLite as an optional datastore
To provide a lightweight alternative to etcd, Rancher added SQLite as an optional datastore in K3s. This was done because SQLite has "a lower memory footprint, as well as dramatically simplified operations."

Kelsey Hightower, a Staff Developer Advocate at Google Cloud Platform, commended Rancher Labs for removing features, instead of adding anything, in order to focus on running clusters in low-resource computing environments.

https://twitter.com/kelseyhightower/status/1100565940939436034

Kubernetes users have also welcomed the news with enthusiasm.

https://twitter.com/toszos/status/1100479805106147330
https://twitter.com/ashim_k_saha/status/1100624734121689089

K3s is released with support for x86_64, ARM64, and ARMv7 architectures, to work across any edge infrastructure. Head over to the K3s page for a quick demo on how to use it.

Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers
Introducing Platform9 Managed Kubernetes Service
CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure


Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users

Bhagyashree R
01 Aug 2019
3 min read
At its Cloud Next 2019 conference happening in Tokyo, Google unveiled new security capabilities coming to its enterprise products: G Suite Enterprise, Google Cloud, and Cloud Identity. These capabilities are intended to help its enterprise customers protect their "users, data, and applications in the cloud." Google is hosting this two-day event (July 31 - Aug 1) to showcase its cloud products.

Among the key announcements are Advanced Protection Program support for enterprise products, expanded availability of Titan Security Keys, improved anomaly detection in G Suite Enterprise, and more.

Advanced Protection Program for high-risk employees

The Advanced Protection Program was launched in 2017 to protect the personal Google accounts of users who are at high risk of online threats like phishing. The program goes beyond traditional two-step verification by requiring a physical security key in addition to your password when signing in to your Google account.

The program will be available in beta in the coming days for G Suite, Google Cloud Platform (GCP), and Cloud Identity customers. It will enable enterprise admins to enforce a set of security policies for employees who are at high risk of targeted attacks, such as IT administrators and business executives. The set of policies includes enforcing the use of Fast Identity Online (FIDO) keys like Titan Security Keys, automatically blocking access to non-trusted third-party apps, and enabling enhanced scanning of incoming emails.

Wider availability of Titan Security Keys

Given the growing demand for Titan Security Keys in the US, Google has now expanded their availability to Canada, France, Japan, and the United Kingdom (UK). The keys are available as bundles of two: USB/NFC and Bluetooth. You can use them anywhere FIDO security keys are supported, including Coinbase, Dropbox, Facebook, GitHub, Salesforce, Stripe, Twitter, and more.

Anomalous activity alerts in G Suite

G Suite Enterprise and G Suite Enterprise for Education admins can now opt in to receive anomalous activity alerts in the G Suite alert center. G Suite uses machine learning to analyze security signals within Google Drive to detect potential security risks, such as data exfiltration and policy violations when sharing and downloading files.

Google also announced that it will be rolling out support for password-vaulted apps in Cloud Identity. Karthik Lakshminarayanan and Vidya Nagarajan from the Google Cloud team wrote in a blog post, "The combination of standards-based- and password-vaulted app support will deliver one of the largest app catalogs in the industry, providing seamless one-click access for users and a single point of management, visibility, and control for admins."

You can read Google's official announcement to know more in detail.

Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage
Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless
Understanding security features in the Google Cloud Platform (GCP)

Google Cloud Console Incident Resolved!

Melisha Dsouza
12 Mar 2019
2 min read
On 11th March, the Google Cloud team received a report of an issue with Google Cloud Console and Google Cloud Dataflow. Mitigation work to fix the issue was started the same day, as per Google Cloud's status page. According to the post, "Affected users may receive a 'failed to load' error message when attempting to list resources like Compute Engine instances, billing accounts, GKE clusters, and Google Cloud Functions quotas."

As a workaround, the team suggested using the gcloud SDK instead of the Cloud Console. No workaround was suggested for Google Cloud Dataflow. While the mitigation was underway, the team posted another update: "The issue is partially resolved for a majority of users. Some users would still face trouble listing project permissions from the Google Cloud Console."

The issue, which began around 09:58 Pacific Time, was finally resolved around 16:30 Pacific Time on the same day. The team said it will conduct an internal investigation of the issue, "make appropriate improvements to [its] systems to help prevent or minimize future recurrence," and provide a more detailed analysis of the incident once the internal investigation is complete. No other information has been revealed as of today.

The downtime affected a majority of Google Cloud users.

https://twitter.com/lukwam/status/1105174746520526848
https://twitter.com/jbkavungal/status/1105184750560571393
https://twitter.com/bpmtri/status/1105264883837239297

Head over to Google Cloud's official page for more insights on this news.

Monday's Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia
Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias
Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws


Amazon launches TLS Termination support for Network Load Balancer

Bhagyashree R
25 Jan 2019
2 min read
Starting yesterday, AWS Network Load Balancers (NLBs) support TLS/SSL. This new feature simplifies the process of building secure web applications by allowing users to make use of TLS connections that terminate at the NLB. The support is fully integrated with AWS PrivateLink and is also supported by AWS CloudFormation.

https://twitter.com/colmmacc/status/1088510453767000064

Here are some of the features and benefits it comes with:

Simplified management
Using TLS at scale otherwise requires extra management work, such as distributing the server certificate to each backend server; it also increases the attack surface due to the presence of multiple copies of the certificate. This TLS/SSL support provides a central management point for certificates by integrating with AWS Certificate Manager (ACM) and Identity and Access Management (IAM).

Improved compliance
The feature offers the flexibility of predefined security policies. Developers can use these built-in policies to specify the cipher suites and protocol versions acceptable to their application. This helps if you are going for PCI or FedRAMP compliance, and also allows you to achieve a perfect TLS score.

Classic upgrade
Users who are currently using a Classic Load Balancer for TLS termination can switch to NLB, which helps them scale quickly in case of increased load. Users can also make use of a static IP address for their NLB and log the source IP address of requests.

Access logs
Users can enable access logs for their NLBs and direct them to the S3 bucket of their choice. These logs document information about the TLS protocol version, cipher suite, connection time, handshake time, and more.

To read more in detail, check out Amazon's announcement.

Amazon is reportedly building a video game streaming service, says Information
Amazon's Ring gave access to its employees to watch live footage of the customers, The Intercept reports
AWS introduces Amazon DocumentDB featuring compatibility with MongoDB, scalability and much more
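To give a feel for what terminating TLS at an NLB involves, here is a minimal boto3 sketch that attaches a TLS listener to an existing load balancer. The ARNs are placeholders, and the security policy name is just one of the predefined policies mentioned above; check the ELB documentation for the set available in your region.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs for an existing NLB, an ACM certificate, and a target group.
nlb_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123"
cert_arn = "arn:aws:acm:us-east-1:123456789012:certificate/00000000-0000-0000-0000-000000000000"
target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/def456"

# Create a listener that terminates TLS at the NLB: the server certificate lives
# in ACM, and a predefined security policy pins the allowed protocols and ciphers.
response = elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
)
print(response["Listeners"][0]["ListenerArn"])
```

Traffic arriving on port 443 is then decrypted at the load balancer and forwarded to the targets, so the backend servers no longer need copies of the certificate.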


Amazon is reportedly building a video game streaming service, says Information

Sugandha Lahoti
14 Jan 2019
2 min read
According to a report by The Information, Amazon is developing a video game streaming service. Microsoft and Google have previously announced similar game streaming offerings: in October, Google announced an experimental game streaming service named Project Stream, and in the same month Microsoft's gaming chief Phil Spencer confirmed a streaming service, Project xCloud, for playing games on any device.

Amazon's idea is to potentially bring top gaming titles to virtually anyone with a smartphone or streaming device. The service would handle all the compute-intensive calculations needed to run graphics-intensive games in the cloud, then stream them directly to a smart device, so that gamers get the same experience as running the titles natively on a high-end gaming system.

The Information says that although the Amazon gaming service isn't likely to launch until next year, Amazon has begun talking to games publishers about distributing their titles through the service. The initiative has a good chance of succeeding given that Amazon is the biggest player in the cloud market: it currently owns 32 percent of it, compared with Microsoft Azure's 17 percent and Google Cloud's 8 percent. That scale would make it easier for gamers to take advantage of Amazon's vast cloud offerings and play elaborate, robust games even on their mobile devices.

As The Information noted, a successful streaming platform may well upend the long-standing business model of the gaming world, in which customers pay $50 to $60 for a triple-A title. Amazon is yet to share the details of such a video gaming service officially.

Check out the full report on The Information's website.

Microsoft announces Project xCloud, a new Xbox game streaming service
Now you can play Assassin's Creed in Chrome thanks to Google's new game streaming service
Corona Labs open sources Corona, its free and cross-platform 2D game engine


Microsoft open sources Trill, a streaming engine that employs algorithms to process “a trillion events per day”

Prasad Ramesh
18 Dec 2018
3 min read
Yesterday, Microsoft open sourced Trill, previously an internal project used for processing "a trillion events per day". It powers services like Financial Fabric, Bing Ads, Azure Stream Analytics, and Halo. With the ever-increasing flow of data, the ability to process huge amounts of data each millisecond is a necessity; Microsoft says it has open sourced Trill to ‘address this growing trend’.

Microsoft Trill features

Trill is a single-node engine library, and any .NET application, service, or platform can readily use it to start processing queries.
It has a temporal query language which allows users to express complex queries over real-time and offline data sets.
Trill's high performance allows users to get results with great speed and low latency.

How did Trill start?

Trill began as a research project at Microsoft Research in 2012 and has been described in research venues like VLDB and the IEEE Data Engineering Bulletin. Trill is based on a former Microsoft service called StreamInsight – a platform that allowed developers to develop and deploy event processing applications. Both systems are based on an extended query and data model which extends the relational model with a component for time.

Systems before Trill could only achieve part of these benefits; with Trill, all of the advantages come in one package. Trill was the very first streaming engine to incorporate algorithms that process events in small batches based on the latency tolerated by users, and the first engine to organize data batches in a columnar format, enabling queries to execute with much higher efficiency. Using Trill is similar to working with any .NET library, and Trill delivers the same performance for real-time and offline datasets. It allows users to perform advanced time-oriented analytics and to look for complex patterns over streaming datasets.

Open-sourcing Trill

Microsoft believes Trill is the best available tool in this domain for the developer community. By open sourcing it, it wants to offer the features of the IStreamable abstraction to all customers. There are opportunities for community involvement in Trill's future development; for example, it allows users to write custom aggregates. There are also research projects built on Trill where the code is present but not yet ready to use.

For more details on Trill, visit the Microsoft website.

Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft confirms replacing EdgeHTML with Chromium in Edge
Microsoft Connect(); 2018: .NET foundation open membership, .NET Core 2.2, .NET Core 3 Preview 1 released, WPF, WinUI, Windows forms open sourced

Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers

Prasad Ramesh
01 Oct 2018
3 min read
Cloudflare announced a fast distributed native key-value store for Cloudflare Workers on Friday, calling it "Cloudflare Workers KV". Cloudflare Workers is a new kind of computing platform built on top of Cloudflare's global network of over 150 data centers. It allows writing serverless code which runs in the fabric of the internet itself, engaging with users faster than other platforms can.

Cloudflare Workers KV is built on a new architecture which eliminates cold starts and dramatically reduces the memory overhead of keeping code running. Values can also be written from within a Cloudflare Worker, and Cloudflare handles synchronizing keys and values across the network.

Cloudflare Workers KV features

Developers can augment their existing applications or build new applications on Cloudflare's network using Cloudflare Workers and Cloudflare Workers KV. The store can scale to support applications serving dozens or even millions of users. Some of its features are as follows.

Serverless storage
Cloudflare created a serverless execution environment at each of its 153 data centers with Cloudflare Workers, but customers still had to manage their own storage. With Cloudflare Workers KV, global application access to a key-value store is just an API call away.

Responsive applications anywhere
Serverless applications that run on Cloudflare Workers get low-latency access to a globally distributed key-value store. Cloudflare Workers KV achieves low latency by caching replicas of the keys and values across Cloudflare's cloud network.

Build without scaling concerns
Cloudflare Workers KV allows developers to focus their time on adding new capabilities to their serverless applications rather than on scaling their key-value stores.

Key features of Cloudflare Workers KV

The key features of Cloudflare Workers KV, as listed on the website, are:

Accessible from all 153 Cloudflare locations
Supports values up to 64 KB
Supports keys up to 2 KB
Read and write from Cloudflare Workers
An API to write to Workers KV from 3rd party applications
Uses Cloudflare's robust caching infrastructure
Set arbitrary TTLs for values
Integrates with Workers Preview

It is currently in beta. To know more about Workers KV, visit the Cloudflare blog and the Cloudflare website.

Bandwidth Alliance: Cloudflare collaborates with Microsoft, IBM and others for saving bandwidth
Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Google introduces Cloud HSM beta hardware security module for crypto key security
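The third-party write API mentioned above is an ordinary REST endpoint, so values can be pushed into a namespace from outside a Worker. Here is a rough Python sketch against Cloudflare's v4 API; the account id, namespace id, and credentials are all placeholders, and the endpoint shape is an assumption based on the API as documented around launch.

```python
import requests

ACCOUNT_ID = "0123456789abcdef"    # hypothetical account id
NAMESPACE_ID = "fedcba9876543210"  # hypothetical KV namespace id

# Each key is addressed directly in the URL path.
url = (
    "https://api.cloudflare.com/client/v4/accounts/"
    f"{ACCOUNT_ID}/storage/kv/namespaces/{NAMESPACE_ID}/values/greeting"
)
headers = {
    "X-Auth-Email": "user@example.com",  # placeholder credentials
    "X-Auth-Key": "<api-key>",
}

# Write a value with a one-hour TTL, then read it back.
requests.put(url, headers=headers, params={"expiration_ttl": 3600}, data="hello world")
print(requests.get(url, headers=headers).text)
```

Inside a Worker, the same data would then be readable through the KV namespace binding (for example, `await MY_NAMESPACE.get("greeting")` in JavaScript).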


GKE Sandbox: A gVisor-based feature to increase security and isolation in containers

Vincy Davis
17 May 2019
4 min read
During Google Cloud Next '19, Google Cloud announced the beta version of GKE Sandbox, a new feature in Google Kubernetes Engine (GKE). Yesterday, Yoshi Tamura (Product Manager of Google Kubernetes Engine and gVisor) and Adin Scannell (Senior Staff Software Engineer of gVisor) described GKE Sandbox in brief on Google Cloud's official blog.

GKE Sandbox increases the security and isolation of containers by adding an extra layer between the containers and the host OS. At general availability, GKE Sandbox will be available in the upcoming GKE Advanced. The feature will help in building demanding production applications on top of the managed Kubernetes service.

GKE Sandbox uses gVisor to abstract the internals away, making sandboxing an easy-to-use service: while creating a pod, the user can simply choose GKE Sandbox and continue to interact with containers as usual, with no new controls or mental model to learn. By limiting potential attacks, GKE Sandbox helps teams running multi-tenant clusters, such as SaaS providers, which often execute unknown or untrusted code; it provides more secure multi-tenancy in GKE.

gVisor is an open-source container sandbox runtime that was released last year. It was created to defend against host compromise when running arbitrary, untrusted code, while still integrating with container-based infrastructure. gVisor is used in many Google Cloud Platform (GCP) services, like the App Engine standard environment, Cloud Functions, Cloud ML Engine, and most recently Cloud Run. Some features of gVisor include:

Provides an independent operating system kernel to each container. Applications interact with the virtualized environment provided by gVisor's kernel rather than the host kernel.
Manages and places restrictions on file and network operations.
Ensures there are two isolation layers between the containerized application and the host OS. Due to the reduced and restricted interaction of an application with the host kernel, attackers have a smaller attack surface.

An experience shared on the official Google blog post mentions how data refinery creator Descartes Labs has applied machine intelligence to massive data sets. Tim Kelton, Co-Founder and Head of SRE, Security, and Cloud Operations at Descartes Labs, said, "As a multi-tenant SaaS provider, we still wanted to leverage Kubernetes scheduling to achieve cost optimizations, but build additional security layers on top of users' individual workloads. GKE Sandbox provides an additional layer of isolation that is quick to deploy, scales, and performs well on the ML workloads we execute for our users."

Applications suitable for GKE Sandbox

GKE Sandbox is well-suited to running compute- and memory-bound applications, and so works with a wide variety of workloads, such as:

Microservices and functions: GKE Sandbox enables additional defense in depth while preserving low spin-up times and high service density.
Data processing: GKE Sandbox adds an overhead of less than 5 percent for streaming disk I/O and compute-bound applications like FFmpeg.
CPU-based machine learning: Training and executing machine learning models frequently involves large quantities of data and complex workflows that mostly belong to a third party. The CPU overhead of sandboxing compute-bound machine learning tasks is less than 10 percent.

A user on Reddit commented, "This is a really interesting add-on to GKE and I'm glad to see vendors starting to offer a variety of container runtimes on their platforms." The GKE Sandbox feature has received rave reviews on Twitter too.

https://twitter.com/ahmetb/status/1128709028203220992
https://twitter.com/sarki247/status/1128931366803001345

If you want to try GKE Sandbox and know more details, head over to Google's official feature page.

Google Open-sources Sandboxed API, a tool that helps in automating the process of porting existing C and C++ code
Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh
Google Cloud Console Incident Resolved!
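Opting a workload into the sandbox happens at the pod level. As an illustrative sketch (not taken from the announcement), here is how a pod might request the gVisor runtime class on a cluster that already has a GKE Sandbox node pool, using the official Kubernetes Python client; the pod and image names are placeholders.

```python
from kubernetes import client, config

# Assumes kubectl is already configured against a GKE cluster that has
# a node pool created with GKE Sandbox (gVisor) enabled.
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sandboxed-nginx"),
    spec=client.V1PodSpec(
        # Requesting the gVisor RuntimeClass is what places the pod in a sandbox;
        # everything else about the pod spec stays exactly the same.
        runtime_class_name="gvisor",
        containers=[client.V1Container(name="nginx", image="nginx:1.16")],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The single `runtime_class_name` field is the whole opt-in, which matches the article's point that no new controls or mental model are needed.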


Adobe set to acquire Marketo putting Adobe Experience Cloud at the heart of all marketing

Bhagyashree R
21 Sep 2018
3 min read
Yesterday, Adobe Systems confirmed its plan to acquire Marketo Inc. for $4.75 billion from Vista Equity Partners Management. The deal is expected to close in the fourth quarter of Adobe's fiscal year 2018, in December. With this acquisition, Adobe aims to combine Adobe Experience Cloud and Marketo Commerce Cloud into a unified platform for all marketers.

Marketo is a US-based software company which develops marketing software providing inbound marketing, social marketing, CRM, and other related services. The industries it serves include healthcare, technology, financial services, manufacturing, and media, among others.

What does acquiring Marketo mean for Adobe?

A single platform to serve both B2B and B2C customers
The integration of Marketo Commerce Cloud into the Adobe Experience Cloud will help Adobe deliver a single platform that serves both B2B and B2C customers globally. The acquisition brings together Marketo's lead account-based marketing technology and Adobe's Experience Cloud analytics, advertising, and commerce capabilities, enabling B2B companies to create, manage, and execute marketing engagements at scale.

Access to Marketo's huge customer base
Enterprises from various industries use Marketo's marketing applications to drive engagement and customer loyalty. Marketo will bring its huge ecosystem, consisting of nearly 5,000 customers and over 500 partners, to Adobe.

Brad Rencher, Executive Vice President and General Manager, Digital Experience at Adobe, said: "The acquisition of Marketo widens Adobe's lead in customer experience across B2C and B2B and puts Adobe Experience Cloud at the heart of all marketing."

What's in it for Marketo?

Signaling the next phase of Marketo's growth, the acquisition by Adobe will further accelerate its product roadmap and go-to-market execution. With Adobe, Marketo's products will gain a new level of global operational scale and the ability to penetrate new verticals and geographies. The CEO of Marketo, Steve Lucas, believes that with Adobe they will be able to rapidly innovate and provide their customers a definitive system of engagement: "Adobe and Marketo both share an unwavering belief in the power of content and data to drive business results. Marketo delivers the leading B2B marketing engagement platform for the modern marketer, and there is no better home for Marketo to continue to rapidly innovate than Adobe."

To know more about Adobe's acquisition of Marketo, read the official announcement on Adobe's website.

Adobe to spot fake images using Artificial Intelligence
Adobe is going to acquire Magento for $1.68 Billion
Adobe glides into Augmented Reality with Adobe Aero

Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud

Savia Lobo
30 May 2018
3 min read
VMware recently released its brand new Infrastructure-as-a-Service (IaaS) cloud, VMware Integrated OpenStack (VIO) 5.0. The release, announced at the OpenStack Summit in Vancouver, Canada, is fully based on the new OpenStack Queens release. VIO provides customers with a fast and efficient solution to deploy and operate OpenStack clouds that are highly optimized for VMware's NFV and software-defined data center (SDDC) infrastructure, with advanced automation and onboarding. Existing VIO users can use OpenStack's built-in upgrade capability to upgrade seamlessly to VIO 5.0.

VMware Integrated OpenStack 5.0 will be available in both Carrier and Data Center editions. The Carrier edition addresses specific requirements of communication service providers (CSPs). Its improvements include:

Accelerated data plane performance: Support for the NSX Managed Virtual Distributed Switch in Enhanced Data Path mode and DPDK gives customers significant improvements in application response time, reduced network latency, and breakthrough network performance through optimized data plane techniques in VMware vSphere.

Scalable multi-tenant resources: This provides resource guarantees and resource isolation to each tenant. It also supports elastic resource scaling, which allows CSPs to add new resources dynamically across different vSphere clusters to adapt to traffic conditions, or to transition from a pilot phase to production in place.

OpenStack for 5G and edge computing: Customers get full control over micro data centers and apps at the edge via automated, API-driven orchestration and lifecycle management. The solution helps tackle enterprise use cases such as utilities, oil and gas drilling platforms, point-of-sale applications, security cameras, and manufacturing plants, as well as telco-oriented use cases such as Multi-Access Edge Computing (MEC), latency-sensitive VNF deployments, and operational support systems (OSS).

VIO 5.0 also enables CSP and enterprise customers to utilize Queens advancements to support mission-critical workloads across container and cloud-native application environments. Some new features include:

High scalability: One can run up to 500 hosts and 15,000 VMs in a single region with VIO 5.0. It also introduces support for multiple regions at once, with monitoring and metrics at scale.

High availability for mission-critical workloads: Enhancements to the Cinder volume driver make it possible to create snapshots, clones, and backups of attached volumes, dramatically improving VM and application uptime.

Unified virtualized environment: The ability to deploy and run both VM and container workloads on a single virtualized infrastructure manager (VIM), with a single network fabric based on VMware NSX-T Data Center. This architecture enables customers to seamlessly deploy hybrid workloads where some components run in containers while others run in VMs.

Advanced security: Consolidated and simplified user and role management based on enhancements to Keystone, including the use of application credentials as well as system role assignment. VIO 5.0 takes security to new levels with encryption of internal API traffic, Keystone-to-Keystone federation, and support for major identity management providers, including VMware Identity Manager.

Optimization and standardization of DNS services: Scalable, on-demand DNS as a service via Designate. Customers can auto-register any VM or Virtual Network Function (VNF) to a corporate-approved DNS server through Designate, instead of manually registering newly provisioned hosts.

To know more about the other features in detail, read VMware's official blog.

How to create and configure an Azure Virtual Machine
Introducing OpenStack Foundation's Kata Containers 1.0
SDLC puts process at the center of software engineering
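Since Designate exposes the standard OpenStack DNS API, zone and record registration can be scripted against a VIO cloud with the usual OpenStack tooling. As a rough illustration using the openstacksdk Python library (the cloud name, zone, and addresses below are placeholders, and this assumes Designate is enabled in the deployment):

```python
import openstack

# Connect using credentials from clouds.yaml; "vio" is a placeholder cloud name.
conn = openstack.connect(cloud="vio")

# Create a DNS zone managed by Designate.
zone = conn.dns.create_zone(
    name="apps.example.com.",
    email="dns-admin@example.com",
    ttl=3600,
)

# Register a newly provisioned VM under that zone with an A record.
conn.dns.create_recordset(
    zone,
    name="vm-001.apps.example.com.",
    type="A",
    records=["10.0.0.15"],
)
```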


Microsoft Connect(); 2018 Azure updates: Azure Pipelines extension for Visual Studio Code, GitHub releases and much more!

Melisha Dsouza
05 Dec 2018
4 min read
"I'm excited to share some of the latest things we're working on at Microsoft to help developers achieve more when building the applications of tomorrow, today."
- Scott Guthrie, Executive Vice President, Cloud and Enterprise Group, Microsoft

On the 4th of December, at the Microsoft Connect(); 2018 conference, the tech giant announced a series of updates in its Azure domain. With an aim to make it easy for operators and developers to adopt and use Kubernetes, Microsoft announced the public preview of Azure Kubernetes Service virtual nodes and Azure Container Instances GPU support. It also announced the Azure Pipelines extension for Visual Studio Code, GitHub Releases support, and much more.

#1 Azure Kubernetes Service virtual nodes and Azure Container Instances GPU support enter public preview
The Azure Kubernetes Service (AKS) virtual nodes feature is powered by the open-source Virtual Kubelet technology. This release lets customers fully experience serverless Kubernetes: they can extend the consistent, powerful Kubernetes API (provided by AKS) with the scalable, container-based compute capacity of Azure Container Instances (ACI). With AKS virtual nodes, customers can precisely allocate the number of additional containers needed, rather than waiting for additional VM-based nodes to spin up. ACI is billed by the second, based on the resources a customer specifies, enabling customers to match their costs to their workloads; applications built on the Kubernetes API can thus reap the benefits of serverless platforms without having to manage any additional compute resources. Adding GPU support to ACI enables a new class of compute-intensive applications through AKS virtual nodes. The blog says that initially ACI will support the K80, P100, and V100 GPUs from Nvidia, and users can specify the type and number of GPUs they would like for their container.

#2 Azure Pipelines extension for Visual Studio Code
The Azure Pipelines extension for Visual Studio Code brings syntax highlighting and IntelliSense that are aware of the Azure Pipelines YAML format. Traditionally, developers had to remember exactly which keys are legal, flipping back and forth to the documentation while keeping track of where they were in the file. With this extension, they are alerted in red "ink" if they write "tasks:" instead of "task:", and can press Ctrl-Space (or Cmd-Space on macOS) to see what is accepted at that point in the file.

#3 GitHub Releases
Developers can now seamlessly manage GitHub Releases using Azure Pipelines: they can create new releases, modify drafts, or discard older drafts. The new GitHub Releases task supports actions like attaching binary files, publishing draft releases, marking a release as a pre-release, and much more.

#4 Azure IoT Edge support in Azure DevOps Projects
Azure DevOps Projects enables developers to set up a fully functional DevOps pipeline straight from the Azure portal, customized to the programming language and application platform they want to use, along with the Azure functionality they want to leverage and deploy to. The community has shown growing interest in using Azure DevOps to build and deploy IoT-based solutions, and the Azure IoT Edge workflow in Azure DevOps Projects makes this easy. Developers can deploy IoT Edge modules written in Node.js, Python, Java, .NET Core, or C, helping them develop, build, and deploy their IoT Edge applications. This support provides customers with:

A Git code repository with a sample IoT Edge application written in Node.js, Python, Java, .NET Core, or C
A build and a release pipeline set up for deployment to Azure
Easy provisioning of all Azure resources required for Azure IoT Edge

#5 ServiceNow integration with Azure Pipelines
Azure has joined forces with ServiceNow, an organization focused on automating routine activities, tasks, and processes at work, helping enterprises gain efficiencies and increase workforce productivity. Developers can now automate the deployment process using Azure Pipelines and use ServiceNow Change Management for risk assessment, scheduling, approvals, and oversight while updating production.

You can head over to Microsoft's official blog to know more about these announcements.

Microsoft and Mastercard partner to build a universally-recognized digital identity
Microsoft open sources (SEAL) Simple Encrypted Arithmetic Library 3.1.0, with aims to standardize homomorphic encryption
Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser