
Tech News - Cloud & Networking

376 Articles

Google introduces Cloud HSM beta hardware security module for crypto key security

Prasad Ramesh
23 Aug 2018
2 min read
Google has rolled out a beta of Cloud HSM, its cloud-hosted hardware security module aimed at cryptographic key security. Cloud HSM gives customers better security without them having to worry about operational overhead: it lets them store encryption keys in hardware certified to FIPS 140-2 Level 3. FIPS (Federal Information Processing Standard Publication) 140-2 is a U.S. government security standard for cryptographic modules in non-military use, and modules certified to it are accepted in financial and healthcare institutions. An HSM is a specialized hardware component designed to encrypt small data blocks, in contrast to the larger blocks managed with Key Management Service (KMS).

Cloud HSM is available now and is fully managed by Google, meaning all the patching, scaling, cluster management and upgrades are done automatically with no downtime. The customer retains full control of the Cloud HSM service via the Cloud KMS APIs. Il-Sung Lee, Product Manager at Google, stated: “And because the Cloud HSM service is tightly integrated with Cloud KMS, you can now protect your data in customer-managed encryption key-enabled services, such as BigQuery, Google Compute Engine, Google Cloud Storage and DataProc, with a hardware-protected key.”

In addition to Cloud HSM, Google has also released betas of asymmetric key support for both Cloud KMS and Cloud HSM. Users can now create a variety of asymmetric keys for decryption or signing operations, which means that keys used for PKI or code signing can be stored in a Google Cloud managed keystore. “Specifically, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 keys will be available for signing operations, while RSA 2048, RSA 3072, and RSA 4096 keys will also have the ability to decrypt blocks of data.”

For more information visit the Google Cloud blog, and for HSM pricing visit the Cloud HSM page.

Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Machine learning APIs for Google Cloud Platform
Top 5 cloud security threats to look out for in 2018
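Since Cloud HSM is driven entirely through the Cloud KMS APIs, the sketch below gives a rough idea of what requesting an HSM-protected key could look like with the google-cloud-kms Python client. This is an illustration rather than code from the announcement, and the project, location, key ring, and key names are placeholders.

```python
# Hypothetical sketch: create an HSM-protected symmetric key via Cloud KMS.
# Project, location, key ring, and key names are placeholders.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_ring = client.key_ring_path("my-project", "us-east1", "my-key-ring")

crypto_key = {
    "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
    "version_template": {
        "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
        # The only difference from a software-protected key: ask for HSM protection.
        "protection_level": kms.ProtectionLevel.HSM,
    },
}

created = client.create_crypto_key(
    request={"parent": key_ring, "crypto_key_id": "my-hsm-key", "crypto_key": crypto_key}
)
print("Created key:", created.name)
```

Encrypt and decrypt calls then go through the same Cloud KMS API as software-protected keys, which is why services with customer-managed encryption keys can pick up the hardware-backed key transparently.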

ISPA nominated Mozilla in the “Internet Villain” category for its DNS-over-HTTPS push, then withdrew the nomination and the category after community backlash

Fatema Patrawala
11 Jul 2019
6 min read
On Tuesday, the Internet Services Providers' Association (ISPA), the UK's trade association for providers of internet services, announced that the nomination of Mozilla Firefox had been withdrawn from the “Internet Villain” category. The decision came after a global backlash against the nomination of Mozilla for its DNS-over-HTTPS (DoH) push. ISPA also withdrew the Internet Villain category as a whole from the ISPA Awards 2019 ceremony, which will be held today in London.

https://twitter.com/ISPAUK/status/1148636700467453958

The official blog post reads, “Last week ISPA included Mozilla in our list of Internet Villain nominees for our upcoming annual awards. In the 21 years the event has been running it is probably fair to say that no other nomination has generated such strong opinion. We have previously given the award to the Home Secretary for pushing surveillance legislation, leaders of regimes limiting freedom of speech and ambulance-chasing copyright lawyers. The villain category is intended to draw attention to an important issue in a light-hearted manner, but this year has clearly sent the wrong message, one that doesn’t reflect ISPA’s genuine desire to engage in a constructive dialogue. ISPA is therefore withdrawing the Mozilla nomination and Internet Villain category this year.”

Mozilla Firefox, the preferred browser of many users, encourages privacy protection and offers feature options to keep one's Internet activity as private as possible. One of its recently proposed features, DoH (DNS-over-HTTPS), which is still in the testing phase, did not go down well with the ISPA trade association. Hence, the ISPA decided to nominate Mozilla as one of the “Internet Villains” for 2019, stating in its announcement that Mozilla was nominated for supporting DoH.

https://twitter.com/ISPAUK/status/1146725374455373824

Mozilla responded to the announcement by saying that this is one way to know that they are fighting the good fight.

https://twitter.com/firefox/status/1147225563649564672

On the other hand, the announcement garnered a lot of criticism from the community, which rebuked ISPA for promoting online censorship and enabling rampant surveillance. There were also comments calling ISPA the real Internet Villain in this scenario. Some of the tweet responses are given below:

https://twitter.com/larik47/status/1146870658246352896
https://twitter.com/gon_dla/status/1147158886060908544
https://twitter.com/ultratethys/status/1146798475507617793

Along with Mozilla, the Article 13 Copyright Directive and United States President Donald Trump also appeared in the nominations list. Here’s how ISPA explained it in their announcement:
“Mozilla – for their proposed approach to introduce DNS-over-HTTPS in such a way as to bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.
Article 13 Copyright Directive – for threatening freedom of expression online by requiring ‘content recognition technologies’ across platforms
President Donald Trump – for causing a huge amount of uncertainty across the complex, global telecommunications supply chain in the course of trying to protect national security”

Why are the ISPs pushing back against DNS-over-HTTPS?

DoH means that your DNS requests are encrypted over an HTTPS connection. Traditionally, DNS requests are unencrypted, and your DNS provider or the ISP can monitor or control your browsing activity. Without DoH, blocking and content filtering can easily be enforced through the DNS provider, or the ISP can apply it whenever it wants. DoH takes that out of the equation, and hence you get a more private browsing experience.

Admittedly, big broadband ISPs and politicians are concerned that large-scale third-party deployments of DoH, which encrypts DNS requests (the lookups that turn human-readable domain names into IP addresses) using the common HTTPS protocol for websites, could disrupt their ability to censor, track and control related internet services. That is, however, a particularly narrow way of looking at the technology, because at its core DoH is about protecting user privacy and making internet connections more secure. As a result, DoH is often praised and widely supported by the wider internet community. Mozilla is not alone in pushing DoH, but it found itself singled out by the ISPA because of its proposal to enable the feature by default within Firefox, which is yet to happen. Google is also planning to introduce its own DoH solution in its Chrome browser. The result could be that ISPs lose a lot of their control over DNS, breaking their internet censorship plans.

Is DoH useful for internet users? If so, how?

On one side of the coin, DoH lets users bypass any content filters enforced by the DNS provider or the ISP, so it will help put a stop to Internet censorship, which is a good thing. On the other side, if you are a parent, you can no longer set DNS-level content filters if your kid uses DoH in Mozilla Firefox; DoH could thus become a way for some to bypass parental controls, which could be a bad thing. This is the reason the ISPA gave for nominating Mozilla for the Internet Villain category: it says that DNS-over-HTTPS will bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK. Also, using DoH means that you can no longer use the local hosts file, in case you are using it for ad blocking or for any other reason.

The Internet community criticized the way ISPA handled the backlash by withdrawing the category as a whole. One of the user comments on Hacker News reads, “You have to love how all their "thoughtful criticisms" of DNS over HTTPS have nothing to do with the things they cited in their nomination of Mozilla as villain. Their issue was explicitly "bypassing UK filtering obligations" not that load of flaming horseshit they just pulled out of their ass in response to the backlash.”

https://twitter.com/VModifiedMind/status/1148682124263866368

Highlights from Mary Meeker’s 2019 Internet trends report
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher
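To make the mechanics described above concrete, here is a minimal sketch of a DNS lookup performed over HTTPS using Google's public JSON resolver endpoint (https://dns.google/resolve). It is not taken from the article, and the hostname queried is just an example.

```python
# Minimal DNS-over-HTTPS lookup against Google's public JSON resolver API
# (https://dns.google/resolve). The hostname below is only an example.
import json
import urllib.request

def doh_lookup(name, record_type="A"):
    """Resolve a hostname over HTTPS instead of plain-text UDP/53 DNS."""
    url = f"https://dns.google/resolve?name={name}&type={record_type}"
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    # "Answer" is absent when the name does not resolve.
    return [answer["data"] for answer in payload.get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("mozilla.org"))
```

Because both the query and the answer travel inside ordinary HTTPS traffic, an on-path ISP sees only an encrypted connection to the resolver rather than the names being looked up, which is precisely what the filtering debate above is about.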

Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP

Melisha Dsouza
13 Dec 2018
3 min read
Today, Google Cloud announced the alpha availability of Cloud TPU Pods: tightly coupled supercomputers built with hundreds of Google’s custom Tensor Processing Unit (TPU) chips and dozens of host machines, linked via an ultrafast custom interconnect. Google states that these pods make it easier, faster, and more cost-effective to develop and deploy cutting-edge machine learning workloads on Google Cloud. Developers can iterate over training data in minutes and train huge production models in hours or days instead of weeks.

The Tensor Processing Unit (TPU) is an ASIC that powers several of Google’s major products, including Translate, Photos, Search, Assistant, and Gmail. A single pod provides up to 11.5 petaflops of performance.

Features of Cloud TPU Pods

#1 Proven Reference Models
Customers can take advantage of Google-qualified reference models that are optimized for performance, accuracy, and quality for many real-world use cases. These include object detection, language modeling, sentiment analysis, translation, image classification, and more.

#2 Connect Cloud TPUs to Custom Machine Types
Users can connect to Cloud TPUs from custom VM types. This lets them optimally balance processor speeds, memory, and high-performance storage resources for their individual workloads.

#3 Preemptible Cloud TPU
Preemptible Cloud TPUs are 70% cheaper than on-demand instances. Long training runs with checkpointing, or batch prediction on large datasets, can now be done at an optimal rate using Cloud TPUs.

#4 Integrated with GCP
Cloud TPUs are fully integrated with Google Cloud’s data and analytics services and other GCP offerings, giving developers unified access across the entire service line. Developers can run machine learning workloads on Cloud TPUs and benefit from Google Cloud Platform’s storage, networking, and data analytics technologies.

#5 Additional features
Cloud TPUs perform really well at synchronous training. The Cloud TPU software stack transparently distributes ML models across multiple TPU devices in a Cloud TPU Pod to help customers achieve scalability. All Cloud TPUs are integrated with Google Cloud’s high-speed storage systems, ensuring that data input pipelines can keep up with the TPUs. Users do not have to manage parameter servers, deal with complicated custom networking configurations, or set up exotic storage systems to achieve unparalleled training performance in the cloud.

Performance and cost benchmarking of Cloud TPUs

Google compared Cloud TPU Pods with Google Cloud VMs that have NVIDIA Tesla V100 GPUs attached, using one of the MLPerf models: TensorFlow 1.12 implementations of ResNet-50 v1.5 (GPU version, TPU version). They trained ResNet-50 on the ImageNet image classification dataset. The results show that Cloud TPU Pods deliver near-linear speedups for this large-scale training task; the largest Cloud TPU Pod configuration tested (256 chips) delivers a 200X speedup over an individual V100 GPU. Check out their methodology page for further details on this test.

Training ResNet-50 on a full Cloud TPU v2 Pod costs almost 40% less than training the same model to the same accuracy on an n1-standard-64 Google Cloud VM with eight V100 GPUs attached, and the full Cloud TPU Pod completes the training task 27 times faster.

Head over to Google Cloud’s official page to know more about Cloud TPU Pods. Alternatively, check out the Cloud TPU documentation for more insights.

Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Oracle’s Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google’s big enterprise Cloud market move?
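For orientation, this is a minimal sketch of how a training job is typically pointed at a Cloud TPU from Python. It uses the current TensorFlow 2 distribution API rather than the TensorFlow 1.12-era code used in the benchmark above, and the TPU name "my-tpu" is a placeholder.

```python
# Hypothetical sketch: attach a Keras model to a Cloud TPU (or TPU Pod slice).
# "my-tpu" is a placeholder for the TPU resource name in your project.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Everything created inside the strategy scope is replicated across TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# model.fit(...) then runs the synchronous, data-parallel training the article
# describes, with the TPU software stack handling distribution across devices.
```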

The kernel community attempting to make Linux more secure

Prasad Ramesh
03 Oct 2018
3 min read
Last week, Google Project Zero criticized Ubuntu and Debian developers for not merging kernel security fixes fast enough, leaving users exposed in the meantime. The kernel community clarified yesterday how it is attempting to reduce and control bugs in the Linux ecosystem through testing and kernel hardening. They acknowledge that there is not a lot the kernel community can do to eliminate bugs, as bugs are part and parcel of software development, but they are focusing on testing to find them. There is now a security team in the kernel community made up of kernel developers who are well versed in kernel core concepts.

Linux kernel developer Greg Kroah-Hartman said: “A bug is a bug. We don’t know if a bug is a security bug or not. There is a famous bug that I fixed and then three years later Red Hat realized it was a security hole”.

In addition to fixing bugs, the kernel community is contributing hardening to the kernel. Kernel hardening enables additional kernel-level security mechanisms to improve the security of the system. Linux kernel developer Kees Cook and others have made huge efforts to take hardening features that have traditionally lived outside the kernel and merge them into it, and Cook provides a summary of all the new hardening features added with every kernel release. Hardening the kernel is not enough, however; the new features also need to be enabled to take advantage of them, and that is not happening.

A stable kernel is released every week at the official Kernel website. Companies then pick one to support for a longer period of time so that device manufacturers can take advantage of it. However, Kroah-Hartman observed that, barring the Google Pixel, most Android phones don’t include the additional hardening features, making all those phones vulnerable. He added that companies should enable these features, and stated: “I went out and bought all the top of the line phones based on kernel 4.4 to see which one actually updated. I found only one company that updated their kernel. I'm working through the whole supply chain trying to solve that problem because it's a tough problem. There are many different groups involved -- the SoC manufacturers, the carriers, and so on. The point is that they have to push the kernel that we create out to people.” The big vendors like Red Hat and SUSE, by contrast, keep their kernels updated, so these features are included.

The kernel community is also working with Intel to mitigate the Meltdown and Spectre attacks. Intel changed its approach to working with the kernel community after these vulnerabilities were discovered. The bright side is that the Intel vulnerabilities proved that things are getting better for the kernel community: more testing is being done, patches are being made, and effort is being put into making the kernel as bug-free as possible.

To know more, visit the Linux Blog.

Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux
Linux programmers opposed to new Code of Conduct threaten to pull code from project
Linus Torvalds is sorry for his ‘hurtful behavior’, is taking ‘a break (from the Linux community) to get help’
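Whether a given system actually has hardening features like these switched on can be checked against its kernel build configuration. The snippet below is a rough illustration, not something from the article: it looks for a few representative hardening options, and both the option names and the config file locations are assumptions that vary across distributions and kernel versions.

```python
# Rough illustration: check whether a few common kernel-hardening options
# were enabled in the running kernel's build configuration.
# Option names and config file locations are assumptions and differ
# between distributions and kernel versions.
import gzip
import os
import platform

HARDENING_OPTIONS = [
    "CONFIG_STACKPROTECTOR_STRONG",  # stack canaries
    "CONFIG_RANDOMIZE_BASE",         # KASLR
    "CONFIG_HARDENED_USERCOPY",      # bounds-check copies to/from user space
    "CONFIG_FORTIFY_SOURCE",         # buffer-overflow checks in string helpers
]

def load_kernel_config():
    candidates = [f"/boot/config-{platform.release()}", "/proc/config.gz"]
    for path in candidates:
        if os.path.exists(path):
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt") as fh:
                return fh.read()
    raise FileNotFoundError("no kernel config found in " + ", ".join(candidates))

def main():
    config = load_kernel_config()
    for option in HARDENING_OPTIONS:
        enabled = f"{option}=y" in config
        print(f"{option}: {'enabled' if enabled else 'not enabled'}")

if __name__ == "__main__":
    main()
```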

Russian censorship board threatens to block search giant Yandex due to pirated content

Sugandha Lahoti
30 Aug 2018
3 min read
Update, 31st August 2018: Yandex has refused to remove the pirated content. According to a statement from the company, Yandex believes the law is being misinterpreted: while pirated content must be removed from the sites hosting it, the removal of links to such content from search engines falls outside the scope of the current legislation. “In accordance with the Federal Law On Information, Information Technologies, and Information Protection, the mechanics are as follows: pirated content should be blocked by site owners and on the so-called mirrors of these sites,” Yandex says. A Yandex spokesperson said that the company works in “full compliance” with the law: “We will work with market participants to find a solution within the existing legal framework.” Check out more info on Interfax.

Roskomnadzor has found Russian search giant Yandex guilty of hosting pirated content. The Federal Service for Supervision of Communications, Information Technology and Mass Media, or Roskomnadzor, is the Russian federal executive body responsible for censorship in media and telecommunications. The Moscow City Court found the website guilty of including links to pirated content last week. The search giant was asked to remove those links, and the mandate was reiterated by Roskomnadzor this week. Per the authorities, if Yandex does not take action by the end of today, its video platform will be blocked by the country's ISPs.

Last week, major Russian broadcasters Gazprom-Media, National Media Group (NMG), and others protested against pirated content by removing their TV channels from Yandex’s ‘TV Online’ service. They said they would allow their content to appear again only if Yandex removes pirated content completely. Following this, Gazprom-Media filed a copyright infringement complaint with the Moscow City Court, and the court subsequently made a decision compelling Yandex to remove links to pirated TV shows belonging to Gazprom-Media.

Pirated content has been a long-standing challenge for the telecom sector and is yet to be completely eradicated. Not only does it lead to a loss in revenues, but a person watching illegal movies also violates copyright and intellectual property laws. The Yandex website is heavily populated with pirated content, especially TV shows and movies. (Source: Yandex.video)

In a statement to Interfax, Deputy Head of Roskomnadzor Vadim Subbotin warned that Yandex.video will be blocked Thursday night (August 30) if the pirate links aren’t removed. “If the company does not take measures, then according to the law, the Yandex.Video service must be blocked. There’s nowhere to go,” Subbotin said. The search giant has not yet responded to this accusation.

You can check out the detailed coverage of the news on Interfax.

Adblocking and the Future of the Web.
Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran.
YouTube has a $25 million plan to counter fake news and misinformation.

Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more

Melisha Dsouza
24 Oct 2018
3 min read
Earlier this month, the team at Google Cloud Storage announced new capabilities for improving the reliability and performance of users’ data. They have now rolled out updates for storage security that cater to data privacy and compliance with financial services regulations. With these new security upgrades, including the general availability of Cloud Storage Bucket Lock, UI changes for privacy management, Cloud KMS integration with Cloud Storage and much more, users will be able to build reliable applications as well as ensure the safety of their data.

Storage security features on Google Cloud Storage:

#1 General availability of Cloud Storage Bucket Lock
Cloud Storage Bucket Lock is now generally available. This feature is especially useful for users that need Write Once Read Many (WORM) storage, as it prevents deletion or modification of content for a specified period of time. To help organizations meet compliance, legal and regulatory requirements for retaining data for specific lengths of time, Bucket Lock provides retention lock capabilities as well as event holds for content. Bucket Lock works with all tiers of Cloud Storage, so both primary and archive data can use the same storage setup, and users can automatically move locked data into colder storage tiers and delete data once the retention period expires. Bucket Lock has been used in a diverse range of applications, from financial records compliance and healthcare records retention to media content archives and much more. You can head over to the Bucket Lock documentation to learn more about this feature.

#2 New UI features for secure sharing of data
The new UI features in the Cloud Storage console enable users to securely share their data and gain insight into which data, buckets, and objects are publicly visible across their Cloud Storage environment. The public sharing option in the UI has been replaced with an Identity and Access Management (IAM) panel. This prevents users from publicly sharing their objects through an accidental mouse click, lets administrators clearly understand which content is publicly available, and shows users how their data is being shared publicly.

#3 Use Cloud KMS keys with Cloud Storage data
Cloud Key Management Service (KMS) provides users with sophisticated encryption key management capabilities. Users can manage and control encryption keys for their Cloud Storage datasets through the Cloud Storage–KMS integration. This integration helps users manage active keys, authorize users or applications to use certain keys, monitor key use, and more. Cloud Storage users can also perform key rotation, revocation, and deletion. Head over to the Google Cloud Storage blog to learn more about the Cloud KMS integration.

#4 Access Transparency for Cloud Storage and Persistent Disk
This new transparency mechanism shows users who, when, where and why Google support or engineering teams have accessed their Cloud Storage and Persistent Disk environment. Users can use Stackdriver APIs to monitor logs related to Cloud Storage actions programmatically and also archive their logs if required for future auditing. This gives complete visibility into administrative actions for monitoring and compliance purposes. You can learn more about AXT on Google's blog post.

Head over to the Google Cloud Storage blog to understand how these new upgrades will add to the security and control of cloud resources.

What’s new in Google Cloud Functions serverless platform
Google Cloud announces new Go 1.11 runtime for App Engine
Cloud Filestore: A new high performance storage option by Google Cloud Platform
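As a rough illustration of how Bucket Lock is driven from code (an assumption-based sketch, not code from the announcement), the google-cloud-storage Python client can set a retention policy on a bucket and then lock it. The bucket name is a placeholder, and the lock step is irreversible.

```python
# Hypothetical example: apply and lock a retention policy on a bucket.
# The bucket name is a placeholder; locking a policy is irreversible.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-compliance-bucket")

# Retain every object for 90 days (the API takes seconds).
bucket.retention_period = 90 * 24 * 60 * 60
bucket.patch()

# Once locked, the retention period can no longer be reduced or removed,
# which is what gives the bucket its WORM (Write Once Read Many) behavior.
bucket.lock_retention_policy()

print("Retention policy locked on", bucket.name)
```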

Introducing OpenStack Foundation’s Kata Containers 1.0

Savia Lobo
24 May 2018
2 min read
The OpenStack Foundation has launched version 1.0 of its first non-OpenStack project, Kata Containers. Kata Containers is the result of combining two leading open source virtualized container projects: Intel’s Clear Containers and Hyper’s runV technology.

Kata Containers give developers a lighter, faster, and more agile container management technology across stacks and platforms, offering a container-like experience with the security and isolation features of virtual machines. Kata Containers deliver an OCI-compatible runtime with seamless integration for Docker and Kubernetes. They execute a lightweight VM for every container, so that each container gets hardware isolation similar to what is expected from a virtual machine. Although hosted by the OpenStack Foundation, Kata Containers are intended to be platform and architecture agnostic.

Kata Containers 1.0 components include:
Kata Containers runtime 1.0.0 (in the /runtime repo)
Kata Containers proxy 1.0.0 (in the /proxy repo)
Kata Containers shim 1.0.0 (in the /shim repo)
Kata Containers agent 1.0.0 (in the /agent repo)
KSM throttler 1.0.0 (in the /ksm-throttler repo)
Guest operating system building scripts (in the /osbuilder repo)

Intel, Red Hat, Canonical and cloud vendors such as Google, Huawei, NetApp, and others have offered to financially support the Kata Containers project.

Read more about Kata Containers on their official website and on the GitHub repo.

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
What to expect from vSphere 6.7
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
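To see how this surfaces to developers, here is a hypothetical sketch using the Docker SDK for Python. It assumes the Kata runtime has already been installed and registered with the Docker daemon under the name kata-runtime, which is an assumption about the local setup rather than something spelled out in the announcement.

```python
# Hypothetical example: run a container under the Kata runtime via the
# Docker SDK for Python. Assumes the Kata runtime is installed and
# registered with the Docker daemon as "kata-runtime".
import docker

client = docker.from_env()

output = client.containers.run(
    "alpine:latest",
    command="uname -a",
    runtime="kata-runtime",  # ask Docker to wrap the container in a Kata VM
    remove=True,
)

# The kernel version printed comes from the lightweight guest VM,
# not from the host kernel, illustrating the extra isolation boundary.
print(output.decode())
```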

OpenWrt 18.06.2 released with major bug fixes, updated Linux kernel and more!

Amrata Joshi
04 Feb 2019
3 min read
Last week the team at OpenWrt announced OpenWrt 18.06.2, the second service release of the stable OpenWrt 18.06 series. OpenWrt is a Linux operating system that targets embedded devices and provides a fully writable filesystem with optional package management. It is also considered a complete replacement for the vendor-supplied firmware of a wide range of wireless routers and non-network devices.

What’s new in OpenWrt 18.06.2?

OpenWrt 18.06.2 comes with bug fixes in the network stack and the build system, and updates to the kernel and base packages:
The Linux kernel has been updated to versions 4.9.152/4.14.95 (from 4.9.120/4.14.63 in v18.06.1).
The GNU time dependency has been removed.
Support has been added for the bpf match.
A blank line has been inserted after the KernelPackage template to allow chaining calls.
The INSTALL_SUID macro has been added.
Support has been added for enabling the rootfs/boot partition size option via tar.
Building of artifacts has been introduced.
The package URL has been updated.
An uninitialized return value has been fixed.

Major bug fixes:
The docbook2man error has been fixed.
The issue with the libressl build on x32 (amd64ilp32) hosts has been fixed.
The build has been fixed without modifying Makefile.am.
A Fedora patch has been added to address crashing on git-style patches.
A syntax error has been fixed.
Security fixes for the Linux kernel, GNU patch, Glibc, BZip2, Grub, OpenSSL, and MbedTLS.
IPv6 and network service fixes.

Some users are happy about this release and think that, despite small teams and budgets, the OpenWrt team has done a wonderful job of powering so many routers. One comment reads, “The new release still works fine on a TP-Link TL-WR1043N/ND v1 (32MB RAM, 8MB Flash). This is an old router I got from the local reuse center for $10 a few years ago. It can handle a 100 Mbps fiber connection fine and has 5 gigabit ports. Thanks Openwrt!”

But the question remains whether cheap routers limit internet speed. One user commented on Hacker News, “My internet is too fast (150 mbps) for a cheap router to effectively manage the connection, meaning that unless I pay 250€ for a router, I will just slow down my Internet needlessly.”

Read more about this news in OpenWrt’s official blog post.

Mapzen, an open-source mapping platform, joins the Linux Foundation project
Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
The Haiku operating system has released R1/beta1

What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018

Savia Lobo
17 May 2018
6 min read
Earlier this month, 4000+ developers attended the Cloud Native Computing Foundation’s flagship event, the KubeCon + CloudNativeCon 2018 conference, held in Copenhagen from May 2nd to 4th. The conference focused on a series of announcements on microservices, containers, and other open source tools for building applications for the web. Top vendors including Google, RedHat, Oracle, and many more announced a myriad of releases and improvements with respect to Kubernetes. Read our article on Big vendor announcements at KubeCon + CloudNativeCon Europe. Let’s run through the top 7 vendors and their release highlights from this conference.

Google released Stackdriver Kubernetes Monitoring and open sourced gVisor

Released in beta, Stackdriver Kubernetes Monitoring enables both developers and operators to use Kubernetes in a comprehensive fashion and also simplifies operations for them. Features of Stackdriver Kubernetes Monitoring include:

Scalable, comprehensive observability: Stackdriver Kubernetes Monitoring aggregates logs, events and metrics from the Kubernetes environment to help understand the behaviour of one’s application. This rich, unified set of signals is used by developers to build higher quality applications faster, and helps operators speed root cause analysis and reduce mean time to resolution (MTTR).

Seamless integration with Prometheus: Stackdriver Kubernetes Monitoring integrates seamlessly with Prometheus, a leading Kubernetes open source monitoring approach, without any changes.

Unified view: Stackdriver Kubernetes Monitoring provides a unified view of signals from infrastructure, applications and services across multiple Kubernetes clusters. With this, developers, operators and security analysts can effectively manage Kubernetes workloads and observe system information from various sources in flexible ways, for instance inspecting a single container or scaling up to explore massive, multi-cluster deployments.

Get started on-cloud or on-premise easily: Stackdriver Kubernetes Monitoring is pre-integrated with Google Kubernetes Engine, so it can be used immediately with Kubernetes Engine workloads. It can also be integrated with Kubernetes deployments on other clouds or on-premise infrastructure, providing a unified collection of logs, events, and metrics for an application regardless of where the containers are deployed.

Google has also open-sourced gVisor, a sandboxed container runtime. gVisor, which is lighter than a virtual machine, enables secure isolation for containers. It integrates with Docker and Kubernetes and thus makes it simple to run sandboxed containers in production environments. gVisor is written in Go to avoid security pitfalls that can plague kernels.

RedHat shared an open source toolkit called the Operator Framework

RedHat, in collaboration with the Kubernetes open source community, has shared the Operator Framework to make it easy to build Kubernetes applications. The Operator Framework is an open source toolkit designed to manage Kubernetes-native applications, called Operators, in an effective, automated and scalable manner. The Operator Framework comprises:
An Operator SDK that helps developers build Operators based on their expertise, without requiring any knowledge of the complexities of the Kubernetes API.
An Operator Lifecycle Manager, which supervises the lifecycle of all the Operators running across a Kubernetes cluster and keeps a check on the services associated with them.
Operator Metering, soon to be added, which allows creating usage reports for Operators providing specialized services.

Oracle added new open serverless support and key Kubernetes features to Oracle Container Engine

According to a report, security, storage and networking are the major challenges that companies face while working with containers. In order to address these challenges, Oracle Container Engine is adding new governance, compliance and auditing features such as Identity and Access Management, role-based access control, support for the Payment Card Industry Data Security Standard, and cluster management auditing capabilities.
Scalability features: Oracle is adding support for small and virtualized environments, predictable IOPS, and the ability to run Kubernetes on NVIDIA Tesla GPUs.
New networking features: These include load balancing and virtual cloud networks.
Storage features: The company has added the OCI volume provisioner and flexvolume driver.
Additionally, Oracle Container Engine features support for Helm and Tiller, and the ability to run existing apps with Kubernetes.

Kublr announced that its version 1.9 provides easy configuration of Kubernetes clusters for enterprise users

Kublr unveiled an advanced configuration capability in its version 1.9. This feature is designed to provide customers with the flexibility to configure Kubernetes clusters for specific use cases, including:
GPU-enabled nodes for data science applications
Hybrid clusters spanning data centers and clouds
Custom Kubernetes tuning parameters
Other advanced requirements

New features in Kublr 1.9 include:
Kubernetes 1.9.6 and a new dashboard
Improved backups in AWS with full cluster restoration
Centralized monitoring, IAM, and custom cluster specifications

Read more about Kublr 1.9 on the Kublr blog.

Kubernetes announced the availability of Kubeflow 0.1

The Kubernetes community brought forward a power-packed tooling package known as Kubeflow 0.1, which provides a basic set of packages for developing, training, and deploying machine learning models. This package:
Supports Argo, for managing ML workflows
Offers JupyterHub to create interactive Jupyter notebooks for collaborative and interactive model training
Provides a number of TensorFlow tools, including a training controller for native distributed training; the training controller can be configured for CPUs or GPUs and adjusted to fit the size of a cluster with a single click

Additional features, such as a simplified setup via a bootstrap container, improved accelerator integration, and support for more ML frameworks like Spark ML, XGBoost, and sklearn, will be released in the 0.2 version of Kubeflow.

CNCF (Cloud Native Computing Foundation) announced a new Certified Kubernetes Application Developer program

The Cloud Native Computing Foundation has launched the Certified Kubernetes Application Developer (CKAD) exam and a corresponding Kubernetes for Developers course. The CKAD exam certifies that users can design, build, configure, and expose cloud native applications on top of Kubernetes. A Certified Kubernetes Application Developer can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes. Read more about this program on the Cloud Native Computing Foundation blog.

DigitalOcean launched a managed Kubernetes service

The DigitalOcean cloud computing platform launched DigitalOcean Kubernetes, a simple and cost-effective solution for deploying, orchestrating, and managing container workloads in the cloud. With the DigitalOcean Kubernetes service, developers can save time and deploy their container workloads without the need to configure things from scratch. The company is also providing early access to this Kubernetes service. Read more on the DigitalOcean blog.

Apart from these 7 vendors, many others such as DataDog, Humio, Weaveworks and so on have also announced features, frameworks, and services based on Kubernetes, serverless, and cloud computing. This is not the end of the announcements; see the KubeCon + CloudNativeCon 2018 website to learn about the other announcements rolled out at this event.

Top 7 DevOps tools in 2018
Apache Spark 2.3 now has native Kubernetes support!
Polycloud: a better alternative to cloud agnosticism

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services

Sugandha Lahoti
03 Jun 2019
4 min read
Update: The article has been updated to include Google's response to Sunday's service disruption.

Over the weekend, Google Cloud suffered a major outage, taking down a number of Google services, including YouTube, G Suite and Gmail. It also affected services dependent on Google such as Snapchat, Nest, Discord, Shopify and more. The problem was first reported by East Coast users in the U.S. around 3 PM ET / 12 PM PT, and the company resolved it after more than four hours. According to Downdetector, users in the UK, France, Austria, Spain, and Brazil also reported that they were suffering from the outage.

https://twitter.com/DrKMhana/status/1135291239388143617

In a statement posted to its Google Cloud Platform status page, the company said it was experiencing a multi-region issue with Google Compute Engine. “We are experiencing high levels of network congestion in the eastern USA, affecting multiple services in Google Cloud, GSuite, and YouTube. Users may see a slow performance or intermittent errors. We believe we have identified the root cause of the congestion and expect to return to normal service shortly,” the company said in a statement.

The issue was sorted four hours after Google acknowledged the downtime. “The network congestion issue in the eastern USA, affecting Google Cloud, G Suite, and YouTube has been resolved for all affected users as of 4:00 pm US/Pacific,” the company said in a statement. “We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a detailed report of this incident once we have completed our internal investigation. This detailed report will contain information regarding SLA credits.”

The outage caused some real suffering. Not only did it impact some of the most used apps on the internet (YouTube and Snapchat), people also reported that they were unable to use their Nest-controlled devices, for example to turn on their AC or open their "smart" locks to let people into the house.

https://twitter.com/davidiach/status/1135302533151436800

Even Shopify experienced problems because of the Google outage, which prevented some stores (both brick-and-mortar and online) from processing credit card payments for hours.

https://twitter.com/LarryWeru/status/1135322080512270337

The dependency of the world's most popular applications on just one backend, in the hands of one company, is a bit startling, and it is surprising how many services rely on a single hosting provider. At the very least, companies should think about setting up a contingency plan in case the services go down again.

https://twitter.com/zeynep/status/1135308911643451392
https://twitter.com/SeverinAlexB/status/1135286351962812416

Another issue which popped up was that Google Cloud randomly going down is proof that cloud-based gaming isn't ready for mass audiences yet. At this year’s Game Developers Conference (GDC), Google marked its entry into the game industry with Stadia, its new cloud-based platform for streaming games. It will be launching later this year in select countries including the U.S., Canada, U.K., and Europe.

https://twitter.com/BrokenGamezHDR/status/1135318797068488712
https://twitter.com/soul_societyy/status/1135294007515500549

On Monday, Google released an apologetic update on the outage, outlining the incident, its detection, and its response. In essence, the root cause of Sunday’s disruption was a configuration change that was intended for a small number of servers in a single region. The configuration was incorrectly applied to a larger number of servers across several neighboring regions, and it caused those regions to stop using more than half of their available network capacity. The network traffic to/from those regions then tried to fit into the remaining network capacity, but it did not. The network became congested, and Google's networking systems correctly triaged the traffic overload and dropped larger, less latency-sensitive traffic in order to preserve smaller latency-sensitive traffic flows, much as urgent packages may be couriered by bicycle through even the worst traffic jam. Google’s engineering teams are now conducting a thorough post-mortem to understand all the contributing factors to both the network capacity loss and the slow restoration.

Facebook family of apps hits 14 hours outage, longest in its history
Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users
YouTube went down, Twitter flooded with deep questions, YouTube back and everyone is back to watching cat videos.

Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure’s Intelligent Cloud

Natasha Mathur
09 Oct 2018
2 min read
Microsoft announced yesterday that it is collaborating with Grab, the leading on-demand transportation, mobile payments and online-to-offline services platform in Southeast Asia, as part of a strategic cloud partnership. The partnership aims to transform the delivery of digital services and mobility by using Microsoft’s state-of-the-art expertise in machine learning and other artificial intelligence (AI) capabilities.

“Our partnership with Grab opens up new opportunities to innovate in both a rapidly evolving industry and growth region. We’re excited to team up to transform the customer experience as well as enhance the delivery of digital services for the millions of users who rely on Grab for safe and affordable transport, food and package delivery, mobile payments, and financial services”, said Peggy Johnson, executive vice president at Microsoft.

Grab is a Singapore-based technology company delivering ride-hailing, ride-sharing, and logistics services via its app in Singapore and neighboring Southeast Asian nations. It currently operates in 235 cities across eight Southeast Asian countries, and its digital wallet, GrabPay, is the top player in Southeast Asia.

The partnership is expected to help both companies explore a wide range of innovative deep technology projects, such as mobile facial recognition with built-in AI for drivers and customers, and using Microsoft Azure’s fraud detection services to prevent fraudulent transactions on Grab’s platform. These projects aim to transform the experience for Grab’s users, driver-partners, merchants and agents. Grab will adopt Microsoft Azure as its preferred cloud platform, and Microsoft is set to make a strategic investment in Grab, the magnitude of which is currently undisclosed.

“As a global technology leader, Microsoft’s investment into Grab highlights our position as the leading homegrown technology player in the region. We look forward to collaborating with Microsoft in the pursuit of enhancing on-demand transportation and seamless online-to-offline experiences for users”, said Ming Maa, president of Grab.

There are a few other areas of collaboration between Grab and Microsoft, including Microsoft Outlook integration, Microsoft Kaizala, in-car solutions, and integration of Microsoft Rewards Gift Cards. For more information, check out the official Microsoft blog.

Microsoft open sources Infer.NET, its popular model-based machine learning framework
Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google’s Stream news last week
Microsoft’s new neural text-to-speech service lets machines speak like people

Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies

Amrata Joshi
22 May 2019
2 min read
Yesterday, the team at Microsoft launched the Service Mesh Interface (SMI), which defines a set of common and portable APIs. It is an open project started in partnership with Microsoft, HashiCorp, Linkerd, Solo.io, Kinvolk, and Weaveworks, with support from Aspen Mesh, Docker, Canonical, Pivotal, Rancher, Red Hat, and VMware. SMI provides developers with interoperability across different service mesh technologies including Linkerd, Istio, and Consul Connect.

The need for service mesh technology

Previously, not much attention was given to network architecture; organizations believed in making applications smarter instead. But now, while dealing with microservices, containers, and orchestration systems like Kubernetes, engineering teams face the problem of securing, managing and monitoring a large number of network endpoints. Service mesh technology offers a solution to this problem by making the network smarter. Instead of teaching all the services to encrypt sessions, authorize clients, and emit reasonable telemetry, it pushes this logic into the network, controlled by a separate set of management APIs.

Key features of the Service Mesh Interface (SMI):
It provides a standard interface for meshes on Kubernetes.
It comes with a basic feature set for common mesh use cases.
It provides flexibility to support new mesh capabilities.
It applies policies like identity and transport encryption across services.
It captures key metrics like error rate and latency between services.
It shifts and weighs traffic between different services.

William Morgan, Linkerd maintainer, said, “SMI is a big step forward for Linkerd’s goal of democratizing the service mesh, and we’re excited to make Linkerd’s simplicity and performance available to even more Kubernetes users.”

Idit Levine, Founder and CEO of Solo.io, said, “The standardization of interfaces are crucial to ensuring a great end user experience across technologies and for ecosystem collaboration. With that spirit, we are excited to work with Microsoft and others on the SMI specification and have already delivered the first reference implementations with the Service Mesh Hub and SuperGloo project.”

To know more about this news, check out Microsoft’s blog post.

Microsoft officially releases Microsoft Edge canary builds for macOS users
Game rivals, Microsoft and Sony, form a surprising cloud gaming and AI partnership
Microsoft releases security updates: a “wormable” threat similar to WannaCry ransomware discovered

VMware signs definitive agreement to acquire Pivotal Software and Carbon Black

Vincy Davis
23 Aug 2019
3 min read
Yesterday, VMware announced in a press release that it has entered a definitive agreement to acquire Carbon Black, a cloud-native endpoint security software developer. According to the agreement, “VMware will acquire Carbon Black in an all cash transaction for $26 per share, representing an enterprise value of $2.1 billion.” VMware intends to use Carbon Black’s big data and behavioral analytics to offer customers advanced threat detection and behavioral insight to defend against sophisticated attacks; in other words, it aspires to protect clients through big data, behavioral analytics, and AI.

Pat Gelsinger, the CEO of VMware, says, “By bringing Carbon Black into the VMware family, we are now taking a huge step forward in security and delivering an enterprise-grade platform to administer and protect workloads, applications, and networks.” He adds, “With this acquisition, we will also take a significant leadership position in security for the new age of modern applications delivered from any cloud to any device.”

Yesterday, after much speculation, VMware also announced that it has acquired Pivotal Software, a cloud-native platform provider, for an enterprise value of $2.7 billion. Dell Technologies is a major stakeholder in both companies.

Lately, VMware has been investing heavily in Kubernetes. Last year, it launched VMware Kubernetes Engine (VKE) to offer Kubernetes-as-a-Service. This year, Pivotal teamed up with the Heroku team to create Cloud Native Buildpacks for Kubernetes and recently launched Pivotal Spring Runtime for Kubernetes. With Pivotal, VMware plans to “deliver a comprehensive portfolio of products, tools and services necessary to build, run and manage modern applications on Kubernetes infrastructure with velocity and efficiency.”

Read More: VMware’s plan to acquire Pivotal Software reflects a rise in Pivotal’s shares

Gelsinger told ZDNet that both these “acquisitions address two critical technology priorities of all businesses today — building modern, enterprise-grade applications and protecting enterprise workloads and clients.” Gelsinger also pointed out that multi-cloud, digital transformation, and the increasing trend of moving “applications to the cloud and access it over distributed networks and from a diversity of endpoints” are significant reasons for placing high stakes on security. It is clear that by acquiring Carbon Black and Pivotal Software, the cloud computing and virtualization software company is seeking to expand its range of products and services with an ultimate focus on security in Kubernetes.

A user on Hacker News comments, “I'm not surprised at the Pivotal acquisition. VMware is determined to succeed at Kubernetes. There is already a lot of integration with Pivotal's Kubernetes distribution both at a technical as well as a business level.” Developers around the world are also excited to see what the future holds for VMware, Carbon Black, and Pivotal Software.

https://twitter.com/rkagal1/status/1164852719594680321
https://twitter.com/CyberFavourite/status/1164656928913596417
https://twitter.com/arashg_/status/1164785525120618498
https://twitter.com/jambay/status/1164683358128857088
https://twitter.com/AnnoyedMerican/status/1164646153389875200

Per the press release, both transactions are expected to close in the second half of VMware’s fiscal year, which ends January 31, 2020. Interested users can read the VMware press releases on acquiring Carbon Black and Pivotal Software for more information.

VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform
VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service

Debian project leader election goes without nominations. What now?

Fatema Patrawala
13 Mar 2019
5 min read
The Debian Project is an association of individuals who have made common cause to create a free operating system. One of the traditional rites of the northern-hemisphere spring is the election for the Debian project leader. Over a six-week period starting in March, interested candidates put their names forward, describe their vision for the project as a whole, answer questions from Debian developers, then wait and watch while the votes come in. But what would happen if Debian were to hold an election and no candidates stepped forward? The Debian project has just found itself in that situation this year and is trying to figure out what will happen next.

The Debian project scatters various types of authority widely among its members, leaving relatively little for the project leader. As long as they stay within the bounds of Debian policy, individual developers have nearly absolute control over the packages they maintain, for example. Difficult technical disagreements between developers are handled by the project's technical committee. The release managers and FTP masters make the final decisions on what the project will actually ship (and when). The project secretary ensures that the necessary procedures are followed. The policy team handles much of the overall design for the distribution. So, in a sense, there is relatively little leading left for the leader to do.

The roles that do fall to the leader fit into a couple of broad areas. The first is representing the project to the rest of the world: the leader gives talks at conferences and manages the project's relationships with other groups and companies. The second role is, to a great extent, administrative: the leader manages the project's money, appoints developers to other roles within the project, and takes care of details that nobody else in the project is responsible for. Leaders are elected to a one-year term; for the last two years, this position has been filled by Chris Lamb. The February "Bits from the DPL" by Chris gives a good overview of what sorts of tasks the leader is expected to carry out.

The Debian constitution describes the process for electing the leader. Six weeks prior to the end of the current leader's term, a call for candidates goes out. Only those recognized as Debian developers are eligible to run; they get one week to declare their intentions. There follows a three-week campaigning period, then two weeks for developers to cast their votes. This being Debian, there is always a "none of the above" option on the ballot; should this option win, the whole process restarts from the beginning.

This year, the call for nominations was duly sent out by project secretary Kurt Roeckx on March 3. But, as of March 10, no eligible candidates had put their names forward. Lamb has been conspicuous in his absence from the discussion, with the obvious implication that he does not wish to run for a third term. So, it would seem, the nomination period has come to a close and the campaigning period has begun, but there is nobody there to do any campaigning. This being Debian, the constitution naturally describes what is to happen in this situation: the nomination period is extended for another week. Any Debian developers who procrastinated past the deadline now have another seven days in which to get their nominations in; the new deadline is March 17. Should this deadline also pass without candidates, it will be extended for another week; this loop will repeat indefinitely until somebody gives in and submits their name.

Meanwhile, though, there is another interesting outcome from this lack of candidacy: the election of a new leader, whenever it actually happens, will come after the end of Lamb's term. There is no provision for locking the current leader in the office and requiring them to continue carrying out its duties; when the term is done, it's done. So the project is now certain to have a period of time where it has no leader at all. Some developers seem to relish this possibility; one even suggested that a machine-learning system could be placed into that role instead. But, as Joerg Jaspert pointed out: "There is a whole bunch of things going via the leader that is either hard to delegate or impossible to do so". Given enough time without a leader, various aspects of the project's operation could eventually grind to a halt.

The good news is that this possibility, too, has been foreseen in the constitution. In the absence of a project leader, the chair of the technical committee and the project secretary are empowered to make decisions, as long as they are able to agree on what those decisions should be. Since Debian developers are famously an agreeable and non-argumentative bunch, there should be no problem with that aspect of things. In other words, the project will manage to muddle along for a while without a leader, though various of its processes could slow down and become more awkward if the current candidate drought persists.

One might well wonder, though, why there seems to be nobody who wants to take the helm of this project for a year. Could the fact that it is an unpaid position requiring a lot of time and travel have something to do with it? If that were indeed part of the problem, Debian might eventually have to consider doing what a number of similar organizations have done and create a paid position to do this work. Such a change would not be easy to make, but if the project finds itself struggling to find a leader every year, it's a discussion that may need to happen.

Are Debian and Docker slowly losing popularity?
It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!
Debian 9.7 released with fix for RCE flaw

‘AWS Service Operator’ for Kubernetes now available, allowing the creation of AWS resources using kubectl

Melisha Dsouza
08 Oct 2018
3 min read
On the 5th of October, the Amazon team announced the general availability of the AWS Service Operator. This is an open source project, currently in an alpha state, which allows users to manage their AWS resources directly from Kubernetes using the standard Kubernetes CLI, kubectl.

What is an Operator?

Kubernetes is built on top of a 'controller pattern'. This allows applications and tools to listen to a central state manager (etcd) and take action when something happens. The controller pattern allows users to create decoupled experiences without having to worry about how other components are integrated. An operator is a purpose-built application that manages a specific type of component using this same pattern. You can check the entire list of operators at Awesome Operators.

All about the AWS Service Operator

Generally, users that need to integrate Amazon DynamoDB with an application running in Kubernetes, or deploy an S3 bucket for their application to use, would reach for tools such as AWS CloudFormation or HashiCorp Terraform and then have to create a way to deploy those resources. This requires the user to behave as an operator to manage and maintain the entire service lifecycle.

Users can now skip those steps and rely on Kubernetes' built-in control loop instead, which stores the desired state within the API server for both the Kubernetes components and the AWS services needed. The AWS Service Operator models AWS services as Custom Resource Definitions (CRDs) in Kubernetes and applies those definitions to a user's cluster. A developer can model their entire application architecture, from the container to ingress to AWS services, backing it with a single YAML manifest. This reduces the time it takes to create new applications and helps keep applications in the desired state. The AWS Service Operator exposes a way to manage DynamoDB Tables, S3 Buckets, Amazon Elastic Container Registry (Amazon ECR) Repositories, SNS Topics, SQS Queues, and SNS Subscriptions, with many more integrations coming soon.

Looks like users are pretty excited about this update. (Source: Hacker News)

You can learn more about this announcement on the AWS Service Operator project on GitHub. Head over to the official blog to explore how to use the AWS Service Operator to create a DynamoDB table and deploy an application that uses the table after it has been created.

Limited Availability of DigitalOcean Kubernetes announced!
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS