
Tech News - Cloud Computing

175 Articles

Dropbox walks back its own decision; brings back support for ZFS, XFS, Btrfs, and eCryptFS on Linux

Vincy Davis
23 Jul 2019
3 min read
Today, Dropbox notified users that it has brought back support for ZFS and XFS on 64-bit Linux systems, and for Btrfs and eCryptFS on all Linux systems, in its beta build 77.3.127. The support note in the Dropbox forum reads, "Add support for zfs (on 64-bit systems only), eCryptFS, xfs (on 64-bit systems only), and btrfs filesystems in Linux."

Last November, Dropbox had notified users that it was "ending support for Dropbox syncing to drives with certain uncommon file systems. The supported file systems are Ext4 filesystem on Linux, NTFS for Windows, and HFS+ or APFS for Mac." Dropbox explained that a supported file system is necessary because Dropbox uses extended attributes (X-attrs) to identify files in the Dropbox folder and keep them in sync. The post also said that Dropbox would support only the most common file systems that support X-attrs, to ensure stability and consistency for its users.

After Dropbox discontinued support for these Linux filesystems, many developers switched to other services such as Google Drive and Box, which is speculated to be one of the reasons why Dropbox has reversed its earlier decision. However, Dropbox has not yet made an official statement about bringing the support back.

Many users have expressed resentment over Dropbox's inconsistent actions. A user on Hacker News says, "Too late. I have left Dropbox because of their stance on Linux filesystems, price bump with unnecessary features, and the continuous badgering to upgrade to its business. It's a great change though for those who are still on Dropbox. Their sync is top-notch." A Redditor comments, "So after I stopped using Dropbox they do care about me as a user after all? Linux users screamed about how nonsensical the original decision was. Maybe ignoring your users is not such a good idea after all? I moved to Cozy Drive - it's not perfect, but has native Linux client, is Europe based (so I am protected by EU privacy laws) and is pretty good as almost drop-in replacement." Another Redditor said, "Too late for me, I was a big dropbox user for years, they dropped support for modern file systems and I dropped them. I started using Syncthing to replace the functionality I lost with them."

A few developers are still happy to see that Dropbox will again support these popular Linux filesystems. A user on Hacker News comments, "That's good news. Happy to see Dropbox thinking about the people who stuck with them from day 1. In the past few years they have been all over the place, trying to find their next big thing and in the process also neglecting their non-enterprise customers. Their core product is still the best in the market and an important alternative to Google."

Read next:
Fedora announces the first preview release of Fedora CoreOS as an automatically updating Linux OS for containerized workloads
Linux Mint 19.2 beta releases with Update Manager, improved menu and much more!
Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range
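For readers curious what the X-attrs requirement looks like in practice, here is a minimal probe that checks whether a directory's filesystem accepts user extended attributes, the mechanism Dropbox says it relies on. This is an illustrative sketch: the attribute name is made up, and `os.setxattr` is available only on Linux.

```python
import os
import tempfile

def supports_xattrs(directory):
    """Probe whether the filesystem holding `directory` supports user
    extended attributes. Returns True or False, or None if the platform
    exposes no xattr API at all (e.g. Windows or macOS via this API)."""
    if not hasattr(os, "setxattr"):  # os.setxattr exists only on Linux
        return None
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        # "user.dropbox_probe" is a hypothetical attribute name used
        # only for this test write/read round trip.
        os.setxattr(path, b"user.dropbox_probe", b"1")
        return os.getxattr(path, b"user.dropbox_probe") == b"1"
    except OSError:  # e.g. ENOTSUP on filesystems without xattr support
        return False
    finally:
        os.close(fd)
        os.unlink(path)

print(supports_xattrs(tempfile.gettempdir()))
```

On an Ext4 volume this prints True; on a filesystem without xattr support it prints False, which is roughly the check a sync client would need to make before relying on extended attributes.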

Oracle’s Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google’s big enterprise Cloud market move?

Melisha Dsouza
19 Nov 2018
4 min read
On 16th November, Diane Greene, CEO of Google Cloud, announced in a blog post that she will be stepping down after three years of running Google Cloud. The position will be taken up by Thomas Kurian, who worked at Oracle for the past 22 years. Kurian will join Google Cloud on November 26th and transition into the leadership role in early 2019, while Greene stays on as CEO until the end of January 2019. After that, she will continue as a director on the Alphabet board.

Google Cloud under Diane Greene

Diane Greene has led Google's cloud computing division since early 2016. She was considered Google's best bet for building its second-largest source of revenue while competing with Amazon and Microsoft in providing computing infrastructure for businesses. However, there is speculation that this decision indicates the project hasn't gone as well as planned. Although the cloud division has seen notable advances under Greene's leadership, Amazon and Microsoft have stayed a step ahead in their cloud businesses. According to Canalys, Amazon holds roughly a third of the global cloud market, which contributes more to its revenue than sales on Amazon.com. Microsoft has roughly half of Amazon's market share and currently owns 8 percent of the global market for cloud infrastructure services. Maribel Lopez of Lopez Research states, "When Diane Greene came in they had a really solid chance of being the No. 2 provider. Microsoft has really closed the gap and is the No. 2 provider for most enterprise customers by a significant margin."

Greene acquired customers such as Twitter, Target, and HSBC for Google Cloud, and major Fortune 1000 enterprises now depend on Google Cloud for their future. Under her leadership, Google established a training and professional services organization and Google partner organizations, and came up with ways to help enterprises adopt AI through its Advanced Solutions Lab. Google's industry verticals have achieved massive traction in health, financial services, retail, gaming and media, energy and manufacturing, and transportation. Along with the Cloud ML and Cloud IoT groups, Google acquired Apigee, Kaggle, Qwiklabs, and several promising small startups. Greene also oversaw projects like creating custom chips for machine learning, gaining traction for the artificial intelligence used on the platform.

While the AI-centric approach brought Google into the limelight, Meaghan McGrath, who tracks Google and other cloud providers at Technology Business Research, says, "They've been making the right moves and saying the right things, but it just hasn't shown through in performance financially." She further stresses that Google is still hamstrung by a perception that it doesn't really know how to work with corporate IT departments, an area where Microsoft has made its mark.

Kurian to join Google

Thomas Kurian worked at Oracle for 22 years and had been its president of product development since 2015. On September 5th, Kurian told employees in an email that he was taking "extended time off from Oracle". The company said in a statement at the time that "we expect him to return soon." Twenty-three days later, Oracle put out a filing saying that Kurian had resigned "to pursue other opportunities."

Google and Oracle do not have a pleasant history together. The two companies are involved in an eight-year legal battle over Google's use of the Java programming language, without a license, in developing its Android operating system for smartphones; Oracle owns the intellectual property behind Java. In March, the Federal Circuit reversed a district court's ruling that had favored Google, sending the case back to the lower court to determine the damages Google now must pay Oracle.

CNBC reports that one former Google employee, who asked not to be named because of the sensitivity of the matter, is not optimistic that Kurian will be well received, since Kurian still has to figure out how to work with Googlers. It will be interesting to see how the face of Google Cloud changes under Kurian's leadership. You can head over to Google's blog to read more about this announcement.

Read next:
Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
10 useful Google Cloud AI services for your next machine learning project [Tutorial]

Polaris GPS: Rubrik's new SaaS platform for data management applications

Savia Lobo
06 Apr 2018
2 min read
Rubrik, a cloud data management company, has launched Polaris GPS, a new SaaS platform for data management applications. The platform helps businesses and individuals manage information spread across clouds. Polaris GPS delivers a single control and policy management console across globally distributed business applications that are locally managed by Rubrik's Cloud Data Management instances.

The Polaris GPS SaaS platform

The new SaaS platform forms a unified system of record for business information across all enterprise applications running in data centers and clouds. The system of record includes native search, workflow orchestration, and a global content catalog, all exposed through an open API architecture. Developers can leverage these APIs to deliver high-value data management applications for data policy, control, security, and deep intelligence. These applications can further address challenges of risk mitigation, compliance, and governance within the enterprise.

Some key features of Polaris GPS:
- Connects all applications and data across data center and cloud with a uniform framework.
- Requires no infrastructure or upgrades; users can leverage the latest features immediately.
- Applies the same logic to any kind of data, letting users focus on business outcomes rather than technical processes.
- Provides faster on-demand broker services through API-driven connectivity.
- Helps mitigate risk with automated compliance: define policies once, and Polaris applies them globally to all your business applications.

Read more about Polaris GPS on Rubrik's official website.

Platform 13: OpenStack Queens, the first fully containerized version released

Gebin George
28 May 2018
2 min read
Red Hat has released version 13 of its OpenStack Platform (RHOP), based on OpenStack Queens. OpenStack follows a rapid six-month release cycle, and this release is majorly focused on using open-source OpenStack to bridge the gap between private and public cloud. RHOP 13 will be generally available in June through the Red Hat customer portal and as part of both the Red Hat infrastructure and cloud suites.

Red Hat's general manager of OpenStack said, "RHOP 13 is the first complete containerized OpenStack. Our customers have been asking us to make it easy to run Red Hat OpenShift Container Platform (RHOCP), Red Hat's Kubernetes offering. We want to make this as seamless as possible."

The release comes with very interesting cross-portfolio support to accelerate Red Hat's hybrid cloud offering. This includes:
- Red Hat CloudForms, which helps in managing day-to-day tasks in hybrid infrastructure.
- Red Hat Ceph Storage, a scalable storage solution that enables provisioning of hundreds of virtual machines from a single snapshot to build a massive storage solution.
- Red Hat OpenShift Container Platform, which enables running cloud-native workloads with ease. The OpenShift architecture supports running both Linux and Kubernetes containers on a single workload.

RHOP 13 also comes with a varied set of feature enhancements and upgrades:

Containerization capabilities: OpenStack Platform 13 builds on the containerization capabilities and services introduced with release 12, enabling containerization of all services, including networking and storage.

Security capabilities: With the inclusion of OpenStack Barbican, RHOP 13 adds tenant-level lifecycle management for sensitive data such as passwords, security certificates, and keys. With Barbican, encryption-based services are available for extensive data protection.

For official release notes, please refer to the official OpenStack blog.

Read next:
Introducing OpenStack Foundation's Kata Containers 1.0
About the Certified OpenStack Administrator Exam
OpenStack Networking in a Nutshell

HashiCorp announces Consul 1.2 to ease Service segmentation with the Connect feature

Savia Lobo
28 Jun 2018
3 min read
HashiCorp recently announced the release of a new version of its distributed service mesh, Consul 1.2. This release introduces a new feature known as Connect, which automatically turns any existing Consul cluster into a service mesh solution. It works on any platform: physical machines, cloud, containers, schedulers, and more.

HashiCorp is a San Francisco-based organization that helps businesses resolve development, operations, and security challenges in infrastructure so they can focus on other business-critical tasks. Consul is one of HashiCorp's products: a distributed service mesh for connecting, securing, and configuring services across any runtime platform and any public or private cloud. The Connect feature in Consul 1.2 enables secure service-to-service communication with automatic TLS encryption and identity-based authorization. HashiCorp further stated that Connect is free and open source.

New functionality in Consul 1.2:

Encrypted traffic in transit: All traffic established through Connect uses mutual TLS. This ensures traffic is encrypted in transit and allows services to be safely deployed in low-trust environments.

Connection authorization: Connect allows or denies service communication via a service access graph built from "intentions". Unlike a firewall, which uses IP addresses, Connect uses the logical name of the service, so rules are scale-independent: it doesn't matter if there is one web server or 100. Intentions can be configured using the UI, CLI, API, or HashiCorp Terraform.

Proxy sidecars: Applications can use a lightweight proxy sidecar process to automatically establish inbound and outbound TLS connections. With this, existing applications can work with Connect without any modification. Consul ships with a built-in proxy that doesn't require external dependencies, and also supports third-party proxies such as Envoy.

Native integration: Performance-sensitive applications can natively integrate with the Consul Connect APIs to establish and accept connections without a proxy, for optimal performance and security.

Certificate management: Consul creates and distributes certificates using a certificate authority (CA) provider. Consul has a built-in CA system that requires no external dependencies. This CA system integrates with HashiCorp Vault and can be extended to support any other PKI (public key infrastructure) system.

Network and cloud independence: Connect uses standard TLS over TCP/IP, which allows it to work on any network configuration, as long as the IP advertised by the destination service is reachable by the underlying operating system. Services can communicate across clouds without complex overlays.

Know more about these features in detail in HashiCorp's official Consul 1.2 blog post.

Read next:
SDLC puts process at the center of software engineering
Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
What is a multi layered software architecture?
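The "intentions" model described above can be sketched as a tiny authorization graph keyed by logical service names rather than IP addresses, which is what makes the rules scale-independent. This is an illustrative toy in Python, not Consul's actual API; the wildcard and default-deny behavior shown here are simplifications.

```python
# Toy model of name-based service authorization ("intentions"):
# rules are keyed by (source, destination) service names, so they hold
# no matter how many instances of each service are running.

class IntentionGraph:
    def __init__(self, default_allow=False):
        self.rules = {}               # (source, destination) -> bool
        self.default_allow = default_allow

    def set_intention(self, source, destination, allow):
        self.rules[(source, destination)] = allow

    def authorized(self, source, destination):
        # An exact rule wins; a "*" wildcard source is checked next;
        # otherwise the cluster-wide default applies.
        for key in ((source, destination), ("*", destination)):
            if key in self.rules:
                return self.rules[key]
        return self.default_allow

graph = IntentionGraph()
graph.set_intention("web", "db", True)      # web may talk to db
graph.set_intention("*", "billing", False)  # nothing may talk to billing

print(graph.authorized("web", "db"))        # -> True (explicit allow)
print(graph.authorized("cache", "db"))      # -> False (default deny)
print(graph.authorized("web", "billing"))   # -> False (wildcard deny)
```

Note that adding a hundredth web server changes nothing here: the rule names the service, not its addresses.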

Go 1.11 support announced for Google Cloud Functions!

Melisha Dsouza
17 Jan 2019
2 min read
Yesterday, Google Cloud announced support for Go 1.11 (in beta) on Cloud Functions. Developers can now write Go functions that scale dynamically and integrate seamlessly with Google Cloud events. Go follows suit after Node.js and Python were announced as supported languages for Google Cloud Functions.

Google Cloud Functions ensures that developers do not have to worry about server management and scaling: functions scale automatically, and developers pay only for the time a function runs. Using the familiar building blocks of Go functions, developers can build a variety of applications, such as:
- Serverless application backends
- Real-time data processing pipelines
- Chatbots
- Video or image analysis tools
- And much more!

The two types of Go functions that developers can use with Cloud Functions are HTTP functions and background functions. HTTP functions are invoked by HTTP requests, while background functions are triggered by events. The Google Cloud runtime provides support for multiple Go packages via Go modules; Go 1.11 modules allow the integration of third-party dependencies into an application's code.

Go developers and Google Cloud users have taken this news well: Reddit and YouTube saw a host of positive comments, with users remarking that Go is a good fit for cloud functions and that this makes adopting Cloud Functions much easier.
https://www.reddit.com/r/golang/comments/agne4o/get_going_with_cloud_functions_go_111_is_now_a/ee7sd35
https://www.reddit.com/r/golang/comments/agne4o/get_going_with_cloud_functions_go_111_is_now_a/ee84cej

It is easy and efficient to deploy a Go function in Google Cloud. Check out the examples on Google Cloud's official blog page, or watch this video to know more about the announcement.

Read next:
Google Cloud releases a beta version of SparkR job types in Cloud Dataproc
Oracle's Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google's big enterprise Cloud market move?
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
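The split between the two trigger types can be sketched in Python for illustration (the announcement's examples are Go; the handler names and payload shapes here are hypothetical, and the dict stands in for the platform's request/event objects):

```python
# HTTP function: invoked directly by an HTTP request; its return
# value becomes the response body.
def handle_http(request):
    name = request.get("name", "world")
    return f"Hello, {name}!"

# Background function: triggered by an event (e.g. a Pub/Sub message
# or a storage-bucket change); it receives the event payload plus
# context metadata, and its return value is ignored by the platform.
def handle_event(event, context):
    print(f"Received event {context['event_id']}: {event['data']}")

# Simulated invocations:
print(handle_http({"name": "Go 1.11"}))     # -> Hello, Go 1.11!
handle_event({"data": "object created"}, {"event_id": "42"})
```

The practical difference is the contract: an HTTP function answers a caller, while a background function reacts to something that already happened and has no caller to answer.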

OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name ‘Open Infrastructure Summit’

Melisha Dsouza
16 Nov 2018
3 min read
At the OpenStack Summit in Berlin this week, the OpenStack Foundation announced that all its biannual conferences will from now on be conducted under the name 'Open Infrastructure Summit'. According to TechCrunch, the Foundation itself won't be rebranded, but the nature of what it does will change: the board will now adopt new projects outside of the core OpenStack project, and there will be a process for adding "pilot projects" and fostering them for a minimum of 18 months. The focus for these projects will be on continuous integration and continuous delivery (CI/CD), container infrastructure, edge computing, data center, and artificial intelligence and machine learning. OpenStack currently has four such pilot projects in development: Airship, Kata Containers, StarlingX, and Zuul.

OpenStack says the idea is not to manage multiple projects for their own sake, or to increase the Foundation's revenue; the scope is focused on people who run or manage infrastructure, and there are no new boards of directors or foundations for each project. The team also assures its members that the actual OpenStack technology isn't going anywhere. OpenStack Foundation CTO Mark Collier said, "We said very clearly this week that open infrastructure starts with OpenStack, so it's not separate from it. OpenStack is the anchor tenant of the whole concept." Sell added, "All that we are doing is actually meant to make OpenStack better."

Adding his insights on the decision, Canonical founder Mark Shuttleworth worries that the focus on multiple projects will "confuse people about OpenStack." He further adds, "I would really like to see the Foundation employ the key contributors to OpenStack so that the heart of OpenStack had long-term stability that wasn't subject to a popularity contest every six months." Boris Renski, co-founder of OpenStack, stated that as of today a number of companies are back to doubling down on OpenStack as their core focus, which he attributes to the Foundation's focus on edge computing, with the highest interest in OpenStack being shown by China.

The OpenStack Foundation's decision to tackle open-source infrastructure problems while keeping the core of the OpenStack project intact is refreshing. The only real competition it faces is from the Linux Foundation-backed Cloud Native Computing Foundation.

Read Next
OpenStack Rocky released to meet AI, machine learning, NFV and edge computing demands for infrastructure
Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Introducing OpenStack Foundation's Kata Containers 1.0

You can now integrate chaos engineering into your CI and CD pipelines thanks to Gremlin and Spinnaker

Richard Gall
02 Apr 2019
3 min read
Chaos engineering is a trend that has been evolving quickly over the last 12 months. While for much of the past decade it was largely the preserve of Silicon Valley's biggest companies, that has been changing, thanks to platforms and tools like Gremlin and an increased focus on software resiliency. Today marks a particularly important step for chaos engineering: Gremlin has partnered with Spinnaker, the Netflix-built continuous delivery platform, to let engineering teams automate chaos engineering 'experiments' throughout their CI and CD pipelines.

Ultimately, this means DevOps teams can think differently about chaos engineering. Gradually, it could shift chaos engineering from localized experiments that require an in-depth understanding of one's infrastructure to something built into the development and deployment process. More importantly, it makes it easier for engineering teams to take complete ownership of the reliability of their software. At a time when distributed systems bring more unpredictability into infrastructure, and when downtime has never been more costly (a Gartner report suggested downtime costs the average U.S. company $5,600 a minute all the way back in 2014), this is a step that could have a significant impact on how engineers work in the future.

Read next: How Gremlin is making chaos engineering accessible [Interview]

Spinnaker and chaos engineering

Spinnaker is an open-source continuous delivery platform built by Netflix and supported by Google, Microsoft, and Oracle. It has been specifically developed for highly distributed and hybrid systems, which makes it a great fit for Gremlin and also highlights that the growth of chaos engineering is being driven by the move to cloud. Adam Jordens, a core contributor to Spinnaker and a member of the Spinnaker Technical Oversight Committee, said that "with the rise of microservices and distributed architectures, it's more important than ever to understand how your cloud infrastructure behaves under stress." Jordens continued: "by integrating with Gremlin, companies will be able to automate chaos engineering into their continuous delivery platform for the continual hardening and resilience of their internet systems."

Kolton Andrus, Gremlin CEO and co-founder, explained the importance of Spinnaker for chaos engineering, saying that "by integrating with Gremlin, users can now automate chaos experiments across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, Openstack, and more, enabling enterprises to build more resilient software."

In recent months, Gremlin has been working hard on products and features that make chaos engineering more accessible to companies and their engineering teams. In February, it released Gremlin Free, a free version of Gremlin designed to offer users a starting point for performing chaos experiments.
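The idea of automating chaos experiments inside a delivery pipeline can be sketched as a gate stage: inject faults into a fraction of calls, measure the success rate, and promote the release only if it stays above the SLO. This is a hedged toy, not Gremlin's or Spinnaker's APIs; all names, rates, and thresholds are hypothetical.

```python
import random

def run_experiment(service_call, fault_rate=0.3, requests=1000):
    """Inject a simulated fault into a fraction of calls and measure
    how many requests the service still answers successfully."""
    ok = 0
    for _ in range(requests):
        degraded = random.random() < fault_rate
        if service_call(degraded=degraded):
            ok += 1
    return ok / requests

def fragile_service(degraded=False):
    return not degraded        # fails whenever a fault is injected

def resilient_service(degraded=False):
    return True                # a fallback path absorbs the fault

def promote_if_resilient(slo=0.99):
    """Pipeline gate: promote only if the experiment meets the SLO."""
    return run_experiment(resilient_service) >= slo

print(promote_if_resilient())  # -> True
```

Run against the fragile service, the same gate would block promotion, which is the whole point of moving experiments from ad-hoc tests into the pipeline.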

Hello 'gg', a new OS framework to execute super-fast apps on "1000s of transient functional containers"

Bhagyashree R
15 Jul 2019
4 min read
Last week, at the USENIX Annual Technical Conference (ATC) 2019, a team of researchers introduced 'gg', an open-source framework that helps developers execute applications using thousands of parallel threads on a cloud-function service to achieve near-interactive completion times. "In the future, instead of running these tasks on a laptop, or keeping a warm cluster running in the cloud, users might push a button that spawns 10,000 parallel cloud functions to execute a large job in a few seconds from start. gg is designed to make this practical and easy," the paper reads. At USENIX ATC, leading systems researchers present their cutting-edge systems research and gain insight into topics like virtualization, network management and troubleshooting, cloud and edge computing, security, privacy, and more.

Why the gg framework was introduced

Cloud functions, better known as serverless computing, give developers finer granularity and lower latency. Though they were introduced for event handling and invoking web microservices, their granularity and scalability make them a good candidate for building a "burstable supercomputer-on-demand": a system that launches a burst-parallel swarm of thousands of cloud functions, all working on the same job. The goal is to deliver results to an interactive user much faster than their own computer or a cold-booted cluster could, at a lower cost than maintaining a warm cluster for occasional tasks.

However, building applications on swarms of cloud functions poses various challenges. The paper lists some of them:
- Workers are stateless and may need to download large amounts of code and data on startup
- Workers have limited runtime before they are killed
- On-worker storage is limited, but much faster than off-worker storage
- The number of available cloud workers depends on the provider's overall load and can't be known precisely upfront
- Worker failures occur when running at large scale
- Libraries and dependencies differ in a cloud function compared with a local machine
- Latency to the cloud makes round trips costly

How gg works

Researchers have previously addressed some of these challenges; the gg framework aims to address all of the principal challenges faced by burst-parallel cloud-function applications. With gg, developers and users can build applications that burst from zero to thousands of parallel threads to achieve low latency for everyday tasks. The following diagram shows its composition:

Source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers

The gg framework lets you build applications on an abstraction of transient functional containers, also known as thunks. Applications express their jobs in terms of interrelated thunks, or Linux containers, and then schedule, instantiate, and execute those thunks on a cloud-functions service. The framework can containerize and execute existing programs such as software compilation, unit tests, and video encoding with the help of short-lived cloud functions. In some cases this gives substantial performance gains, and, depending on the frequency of the task, it can also be less expensive than keeping a comparable cluster running continuously. The functional approach and fine-grained dependency management of gg give significant performance benefits when compiling large programs from a cold start.

Here's a table summarizing the results for compiling Inkscape, an open-source software package:

Source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers

When running "cold" on AWS Lambda, gg was nearly 5x faster than an existing icecc system running on a 48-core or 384-core cluster of running VMs. To know more, read the paper: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers. You can also check out gg's code on GitHub, or watch the talk in which Keith Winstein, an assistant professor of computer science at Stanford University, explains the purpose of gg and demonstrates how it works:
https://www.youtube.com/watch?v=O9qqSZAny3I&t=55m15s

Read next:
Cloud computing trends in 2019
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Serverless Computing 101
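The thunk abstraction described above can be sketched as a tiny dependency graph: each thunk is a pure step that names its inputs, and a scheduler fans independent steps out to parallel workers. This is purely illustrative Python; gg itself containerizes real programs and runs each step as a short-lived cloud function rather than a local thread.

```python
# Toy model of gg-style thunks: a compile-and-link job expressed as
# interrelated pure steps with explicit dependencies.
from concurrent.futures import ThreadPoolExecutor

thunks = {
    # name: (dependencies, function of the dependencies' outputs)
    "a.o":  ([],             lambda: "obj-a"),
    "b.o":  ([],             lambda: "obj-b"),
    "link": (["a.o", "b.o"], lambda a, b: f"binary({a},{b})"),
}

def execute(name, cache={}):
    """Resolve a thunk: run its dependencies (in parallel, since they
    are independent), then apply its function to their outputs. On gg,
    each dependency would be a separate transient cloud function."""
    if name in cache:
        return cache[name]
    deps, fn = thunks[name]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(execute, deps))
    cache[name] = fn(*results)
    return cache[name]

print(execute("link"))   # -> binary(obj-a,obj-b)
```

Because each step declares exactly what it depends on, the scheduler knows which steps can run at the same time, which is where the paper's cold-start compilation speedups come from.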

Amazon EventBridge: An event bus with higher security and speed to boost AWS serverless ecosystem

Vincy Davis
15 Jul 2019
4 min read
Last week, Amazon delivered huge news for its AWS serverless ecosystem, one which is being considered the biggest thing since AWS Lambda itself. With the aim of helping customers integrate their own AWS applications with Software as a Service (SaaS) applications, Amazon launched Amazon EventBridge. EventBridge is an asynchronous, fast, clean, and easy-to-use event bus that can be used to publish events specific to each AWS customer. The SaaS application and the code running on AWS are now independent of any shared communication protocol, runtime environment, or programming language. This allows Lambda functions to handle events from a SaaS application as well as route events to other AWS targets.

Similar to CloudWatch Events, EventBridge has a default event bus that accepts events from AWS services and calls to PutEvents. One distinction between them is that in EventBridge, each partner application that a user subscribes to also creates an event source, which can then be associated with an event bus in the user's AWS account. AWS users can select any of their event buses, create EventBridge rules, and select targets to invoke when an incoming event matches a rule.

Important terms to understand the use of Amazon EventBridge

Partner: An organization that has integrated its SaaS application with EventBridge.
Customer: An organization that uses AWS and has subscribed to a partner's SaaS application.
Partner Name: A unique name that identifies an Amazon EventBridge partner.
Partner Event Bus: An event bus that is used to deliver events from a partner to AWS.

How EventBridge works for partners and customers

A partner can allow their customers to enter an AWS account number and then select an AWS region. Next, the partner calls CreatePartnerEventSource in the desired region, and the customer is informed of the event source name. After accepting the invitation to connect, the customer waits for the status of the event source to change to Active. Each time an event of interest to the customer occurs, the partner calls PutPartnerEvents and references the event source.

Image Source: Amazon

It works much the same way on the customer side. The customer accepts the invitation to connect by calling CreateEventBus to create an event bus associated with the event source, and can then add rules and targets to prepare Lambda functions to process the events. Associating the event source with an event bus also activates the source and starts the flow of events; customers can use DeactivateEventSource and ActivateEventSource to control the flow.

Amazon EventBridge launches with ten partner event sources, including Datadog, Zendesk, PagerDuty, Whispir, Segment, Symantec, and more. This is pretty big news for anyone building serverless applications. With built-in partner integrations, these partners can trigger an event in EventBridge directly, without the need for a webhook. Thus "AWS is the mediator rather than HTTP", notes Paul Johnston, the ServerlessDays cofounder. He also adds, "The security implications of partner integrations are the first thing that springs to mind. The speed implications will almost certainly be improved as well, with those partners almost certainly using AWS events at the other end as well."

https://twitter.com/PaulDJohnston/status/1149629728065650693
https://twitter.com/PaulDJohnston/status/1149629729571397632

Users are excited about the kind of creative freedom Amazon EventBridge will bring to their products.

https://twitter.com/allPowerde/status/1149792437738622976
https://twitter.com/ShortJared/status/1149314506067255304
https://twitter.com/petrabarus/status/1149329981975040000
https://twitter.com/TobiM/status/1149911798256152576

Users with a SaaS application can integrate via EventBridge Partner Integration.
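The partner and customer API calls described above can be sketched with boto3, whose EventBridge operations live in the "events" client. This is a minimal sketch, not Amazon's reference implementation: the partner source name, account number, and Lambda ARN below are hypothetical placeholders.

```python
import json

# Hypothetical partner event source name (partners follow the
# "aws.partner/<partner-domain>/<account>/<suffix>" convention).
PARTNER_SOURCE = "aws.partner/example-saas.com/123456789012/app-events"

def rule_pattern(source):
    """Customer side: an event pattern matching every event
    published by one partner event source."""
    return {"source": [source]}

def partner_publish(events, detail):
    """Partner side: push an event to the subscribed customer's
    event source via PutPartnerEvents."""
    events.put_partner_events(Entries=[{
        "Source": PARTNER_SOURCE,
        "DetailType": "ticket.created",  # illustrative event type
        "Detail": json.dumps(detail),
    }])

def customer_setup(events, lambda_arn):
    """Customer side: associate the partner source with a new event
    bus, then route matching events to a Lambda function."""
    events.create_event_bus(Name=PARTNER_SOURCE,
                            EventSourceName=PARTNER_SOURCE)
    events.put_rule(Name="all-partner-events",
                    EventBusName=PARTNER_SOURCE,
                    EventPattern=json.dumps(rule_pattern(PARTNER_SOURCE)))
    events.put_targets(Rule="all-partner-events",
                       EventBusName=PARTNER_SOURCE,
                       Targets=[{"Id": "handler", "Arn": lambda_arn}])

# Usage (requires boto3 and valid AWS credentials):
#   import boto3
#   customer_setup(boto3.client("events"),
#                  "arn:aws:lambda:us-east-1:123456789012:function:handle")
```

Note that the rule pattern here matches every event from the partner source; in practice a customer would likely narrow it with detail-type or detail fields.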
Visit the Amazon blog to learn more about implementing EventBridge.

Amazon's partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
Amazon Aurora makes PostgreSQL Serverless generally available
Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic

Zabbix 4.2 release packed with modern monitoring system for data collection, processing and visualization

Fatema Patrawala
03 Apr 2019
7 min read
Zabbix Team announced the release of Zabbix 4.2. The latest release of Zabbix is packed with a modern monitoring system for data collection and processing, distributed monitoring, real-time problem and anomaly detection, alerting and escalations, visualization, and more. Let us check out what Zabbix 4.2 has actually brought to the table. Here is a list of the most important functionality included in the new release.

Official support of new platforms

In addition to existing official packages and appliances, Zabbix 4.2 will now cater to the following platforms:

Zabbix package for Raspberry Pi
Zabbix package for SUSE Linux Enterprise Server
Zabbix agent for Mac OS X
Zabbix agent MSI installer for Windows
Zabbix Docker images

Built-in support of Prometheus data collection

Zabbix is able to collect data in many different ways (push/pull) from various data sources, including JMX, SNMP, WMI, HTTP/HTTPS, REST API, XML SOAP, SSH, Telnet, agents, and scripts, with Prometheus being the latest addition to the bunch. The 4.2 release offers integration with Prometheus exporters using native support of the PromQL language. Moreover, the use of dependent metrics gives Zabbix the ability to collect massive amounts of Prometheus metrics in a highly efficient way: all the data is retrieved with a single HTTP call and then reused for the corresponding dependent metrics. Zabbix can also transform Prometheus data into JSON format, which can be used directly for low-level discovery.

Efficient high-frequency monitoring

We all want to discover problems as fast as possible. With 4.2, data can be collected at high frequency and problems discovered instantly, without keeping an excessive amount of history data in the Zabbix database.

Validation of collected data and error handling

No one wants to collect incorrect data. Zabbix 4.2 addresses that via built-in preprocessing rules that validate data by matching (or not matching) a regular expression, or by using JSONPath or XMLPath. It is now also possible to extract error messages from collected data, which can be especially handy when an external API returns an error.

Preprocessing data with JavaScript

In Zabbix 4.2 you can fully harness the power of user-defined scripts written in JavaScript. Support of JavaScript gives absolute freedom of data preprocessing; in fact, you can now replace all external scripts with JavaScript. This enables all sorts of data transformation, aggregation, filtering, arithmetical and logical operations, and much more.

Test preprocessing rules from the UI

As preprocessing becomes much more powerful, it is important to have a tool to verify complex scenarios. Zabbix 4.2 allows testing preprocessing rules straight from the web UI.

Processing millions of metrics per second

Prior to 4.2, all preprocessing was handled solely by the Zabbix server. A combination of proxy-based preprocessing and throttling gives the ability to perform high-frequency monitoring, collecting millions of values per second, without overloading the Zabbix server: proxies perform massive preprocessing of collected data while the server receives only a small fraction of it.

Easy low-level discovery

Low-level discovery (LLD) is a very effective tool for automatic discovery of all sorts of resources (filesystems, processes, applications, services, etc.) and automatic creation of the metrics, triggers, and graphs related to them. It tremendously helps to save time and effort by allowing a single template to monitor devices with different resources. Zabbix 4.2 supports discovery based on arbitrary JSON input, which in turn allows communicating directly with external APIs and using the received data for automatic creation of hosts, metrics, and triggers. Combined with JavaScript preprocessing, this opens up fantastic opportunities for templates that work with various external data sources such as cloud APIs, application APIs, and data in XML, JSON, or any other format.

Support of TimescaleDB

TimescaleDB promises better performance due to more efficient algorithms and performance-oriented data structures. Another significant advantage of TimescaleDB is automatic table partitioning, which improves performance and (combined with Zabbix) delivers fully automatic management of historical data. However, the Zabbix team hasn't performed any serious benchmarking yet, so it is hard to comment on real-life experience of running TimescaleDB in production. At this moment TimescaleDB is an actively developed and rather young project.

Simplified tag management

Prior to Zabbix 4.2, tags could only be set for individual triggers. Now tag management is much more efficient thanks to template and host tag support: all detected problems get tag information not only from the trigger, but also from the host and the corresponding templates.

More flexible auto-registration

The Zabbix 4.2 auto-registration options give the ability to filter host names based on a regular expression. This is really useful for creating different auto-registration scenarios for various sets of hosts, and matching by regular expression is especially beneficial with complex naming conventions for devices.

Control host names for auto-discovery

Another improvement is related to naming hosts during auto-discovery. Zabbix 4.2 allows assigning received metric data to a host name and a visible name. It is an extremely useful feature that enables a great level of automation for network discovery, especially when using Zabbix or SNMP agents.

Test media type from the web UI

Zabbix 4.2 allows sending a test message, or checking that a chosen alerting method works as expected, straight from the Zabbix frontend. This is quite useful for checking the scripts used for integration with external alerting and helpdesk systems.

Remote monitoring of Zabbix components

Zabbix 4.2 introduces remote monitoring of internal performance and availability metrics of the Zabbix server and proxy. Not only that, it also allows discovering Zabbix-related issues and alerting even if the components are overloaded or, for example, have a large amount of data stored in the local buffer (in the case of proxies).

Nicely formatted email messages

Zabbix 4.2 comes with support for HTML format in email messages. Messages are no longer limited to plain text and can use the full power of HTML and CSS for much nicer and easier-to-read alerts.

Accessing remote services from network maps

A new set of macros is now supported in network maps for creating user-defined URLs pointing to external systems. This allows opening external tickets in helpdesk or configuration management systems, or performing any other action, using just one or two mouse clicks.

LLD rule as a dependent metric

This functionality allows using received values of a master metric for data collection and LLD rules simultaneously. When collecting data from Prometheus exporters, Zabbix executes the HTTP query only once, and the result is used immediately for all dependent metrics (LLD rules and metric values).

Animations for maps

Zabbix 4.2 comes with support for animated GIFs, making problems on maps more noticeable.

Extracting data from HTTP headers

Web monitoring brings the ability to extract data from HTTP headers. With this, multi-step scenarios can be created for web monitoring and for external APIs, using an authentication token received in one of the steps.

Zabbix Sender pushes data to all IP addresses

Zabbix Sender will now send metric data to all IP addresses defined in the "ServerActive" parameter of the Zabbix agent configuration file.

Filter for configuration of triggers

The trigger configuration page got a nice extended filter for quick and easy selection of triggers by specified criteria.

Showing exact time in graph tooltip

A minor yet very useful improvement: Zabbix now shows the timestamp in the graph tooltip.

Other improvements

Non-destructive resizing and reordering of dashboard widgets
Mass update for item prototypes
Support of IPv6 for DNS-related checks ("net.dns" and "net.dns.record")
"skip" parameter for the VMware event log check "vmware.eventlog"
Extended preprocessing error messages to include intermediate step results

Expanded information and the complete list of Zabbix 4.2 developments, improvements, and new functionality is available in the Zabbix Manual.

Encrypting Zabbix Traffic
Deploying a Zabbix proxy
Zabbix and I – Almost Heroes
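The Prometheus-to-JSON transformation feeding low-level discovery, described above, can be illustrated with a short sketch: one fetch of an exporter's text output is parsed once and turned into JSON that a JSON-based LLD rule could consume. The exposition snippet and metric name are illustrative, and the parser deliberately ignores edge cases (escaped quotes, commas inside label values) that Zabbix's real implementation handles.

```python
import json
import re

# Illustrative Prometheus exposition output (not from a real exporter).
EXPOSITION = """\
# HELP node_filesystem_avail_bytes Available bytes.
node_filesystem_avail_bytes{device="/dev/sda1",mountpoint="/"} 4.2e+09
node_filesystem_avail_bytes{device="/dev/sdb1",mountpoint="/data"} 9e+10
"""

# metric_name{label="value",...} sample_value
LINE = re.compile(r'^(\w+)\{([^}]*)\}\s+(\S+)$')

def to_lld_json(text, metric):
    """Convert one Prometheus metric family into JSON suitable for
    JSON-based low-level discovery: one object per label set."""
    rows = []
    for line in text.splitlines():
        m = LINE.match(line)
        if not m or m.group(1) != metric:
            continue  # skip comments and other metric families
        labels = dict(kv.split("=", 1) for kv in m.group(2).split(","))
        labels = {k: v.strip('"') for k, v in labels.items()}
        labels["value"] = float(m.group(3))
        rows.append(labels)
    return json.dumps(rows)

# One HTTP fetch of EXPOSITION could feed both this LLD input and
# many dependent metric values, which is the efficiency win above.
```

The same parsed payload can serve as the master metric for dependent items, so the exporter is queried only once per collection cycle.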

Joyent Public Cloud to reach End-of-Life in November

Amrata Joshi
07 Jun 2019
4 min read
Yesterday Joyent announced its departure from the public cloud space. The Joyent Public Cloud, including Triton Compute and Triton Object Storage (Manta), stopped accepting new customers as of June 6, 2019, and will discontinue serving existing customers when the service reaches end of life (EOL) on November 9, 2019.

In 2016, Joyent was acquired by Samsung, which had evaluated Manta, Joyent's object storage system, for implementation, liked the product, and bought the company. In 2014, Joyent was even praised by Gartner in its IaaS Magic Quadrant for having a "unique vision." The company also developed a single-tenant cloud offering for cloud-mature, hyperscale users such as Samsung, who demand vastly improved cloud costs. Since expanding the single-tenant cloud business requires more resources, the company had to make this call. The team will continue to build functionality for its open source Triton offering, complemented by commercial support options for running Triton-equivalent private clouds in a single-tenant model. The official blog post reads, "As that single-tenant cloud business has expanded, the resources required to support it have grown as well, which has led us to a difficult decision."

Current customers now have five months to switch and find a new home: they need to migrate, back up, or retrieve any data running or stored in the Joyent Cloud before November 9th. The company will be removing compute and data from the current public cloud after that date and will not be capturing backups of any customer data. Joyent is working to assist its customers through the transition with the help of its partners. Some of the primary partners involved include OVH, Microsoft Azure, and Redapt Attunix, while additional partners are being finalized.

Users may deploy the same open source software that powers the Joyent Public Cloud in their own datacenter, or on a bare-metal-as-a-service (BMaaS) provider like SoftLayer, with the company's ongoing support. For those who don't have the level of scale for their own datacenter or for running BMaaS, Joyent is evaluating different options to support this transition and make it as smooth as possible.

Steve Tuck, Joyent president and chief operating officer (COO), wrote in the blog post, "To all of our public cloud customers, we will work closely with you over the coming five months to help you transition your applications and infrastructure as seamlessly as possible to their new home." He further added, "We are truly grateful for your business and the commitment that you have shown us over the years; thank you."

All publicly available data centers, including US-West, US-Southwest, US-East 1/2/3/3b, EU-West, and Manta, will be affected by the EOL. However, the company said there will be no impact on its Node.js Enterprise Support offering; it will invest heavily in the software support business for both Triton and Node.js, and will shortly release a new Node.js support portal for customers.

Some think Joyent's value proposition suffered because of its public interface. A user commented on Hacker News, "Joyent's value proposition was killed (for the most part) by the experience of using their public interface. It would've taken a great deal of bravery to try that and decide a local install would be better. The node thing also did a lot of damage - Joyent wrote a lot of the SmartOS/Triton command line tools in node so they were slow as hell. Triton itself is a very non-trivial install although quite probably less so than a complete k8s rig." Others have expressed remorse over the Joyent Public Cloud EOL.

https://twitter.com/mcavage/status/1136657172836708352
https://twitter.com/jamesaduncan/status/1136656364057612288
https://twitter.com/pborenstein/status/1136661813070827520

To know more about this news, check out the EOL of Joyent Public Cloud announcement.

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Bryan Cantrill on the changing ethical dilemmas in Software Engineering
Yuri Shkuro on Observability challenges in microservices and cloud-native applications

Introducing DataStax Constellation: A cloud platform for rapid development of Apache Cassandra-based apps

Bhagyashree R
21 May 2019
3 min read
On the first day of Accelerate 2019, DataStax unveiled DataStax Constellation, a modern cloud platform designed specifically for Apache Cassandra. DataStax is a leading provider of the always-on, active-everywhere distributed hybrid cloud database built on Apache Cassandra.

https://twitter.com/DataStax/status/1130803273647230976

DataStax Accelerate 2019 is a three-day event (21-23 May) in Maryland, US. On the agenda are 70+ technical sessions, networking with experts and people from leading companies like IBM, Walgreens, and T-Mobile, and new product announcements.

Sharing the vision behind DataStax Constellation, Billy Bosworth, CEO of DataStax, said, "With Constellation, we are making a major commitment to being the leading cloud database company and putting cloud development at the top of our priority list. From edge to hybrid to multi-cloud, we are providing developers with a cloud platform that includes the complete set of tools they need to build game-changing applications that spark transformational business change and let them do what they do best."

What is DataStax Constellation?

DataStax Constellation is a modern cloud platform that provides smart services for easy and rapid development and deployment of Cassandra-based applications. It comes with an integrated web console that simplifies the use and management of Cassandra. Constellation also provides DataStax Studio, an interactive developer tool for CQL (Cassandra Query Language). This tool makes it easy for developers to collaborate by keeping track of code, query results, and visualizations in self-documenting notebooks.

The Constellation platform initially launches with two cloud services: DataStax Apache Cassandra as a Service and DataStax Insights.

DataStax Apache Cassandra as a Service

DataStax Apache Cassandra as a Service enables you to easily develop and deploy Apache Cassandra applications in the cloud. Here are some of the advantages and features it comes with:

Ensures high availability of applications: It assures uptime and integrity with multiple data replicas. Users are only charged when the database is in use, which significantly reduces operational overhead.
Reduces administrative overhead: It makes applications capable of self-healing with its advanced optimization and remediation mechanisms.
Better performance than open-source Cassandra: It provides up to three times better performance than open source Apache Cassandra at any scale.

DataStax Insights

DataStax Insights is a performance management and monitoring tool for DataStax Constellation and DataStax Enterprise. Here are some of the features it comes with:

Centralized and scalable monitoring: It provides centralized and scalable monitoring across all cloud and on-premise deployments.
Simplified administration: It provides an at-a-glance health index that simplifies administration via a single view of all clusters.
Automated performance tuning: Its AI-powered analysis and recommendations enable automated performance tuning.

Sharing his future plans regarding Constellation, Bosworth said, "Constellation is for all developers seeking easy and obvious application deployment in any cloud. And the great thing is that we are planning for it to be available on all three of the major cloud providers: AWS, Google, and Microsoft." DataStax plans to make Constellation, Insights, and Cassandra as a Service available on all three cloud providers in Q4 2019.

To know more about DataStax Constellation, visit its official website.

Instaclustr releases three open source projects for Apache Cassandra database users
ScyllaDB announces Scylla 3.0, a NoSQL database surpassing Apache Cassandra in features
cstar: Spotify's Cassandra orchestration tool is now open source!

Couchbase mobile 2.0 is released

Richard Gall
13 Apr 2018
2 min read
Couchbase has just released Couchbase Mobile 2.0, and the organization is pretty excited; it claims the release will revolutionize the way businesses process and handle edge analytics. In many ways, Couchbase Mobile 2.0 extends many of the features of the main Couchbase Server to its mobile version. Ultimately, it demonstrates Couchbase responding to some of the core demands of business: minimizing the friction between cloud solutions and mobile devices at the edge of networks.

The challenges Couchbase Mobile 2.0 is trying to solve

According to the Couchbase website, Couchbase Mobile 2.0 is being marketed as solving three key challenges:

Deployment flexibility
Performance at scale
Security

The combination of these three is really the holy grail for many software solutions companies. It's an attempt to resolve the tension between the need for security and stability and the need to remain adaptable and responsive to change. Learn more about Couchbase Mobile 2.0 here.

Ravi Mayuram, Senior VP of Engineering and CTO of Couchbase, said: "With Couchbase Mobile 2.0, we are bringing some very exciting new capabilities to the edge that parallels what we have on Couchbase Server. For the first time, SQL queries and Full-Text Search are available on a NoSQL database running on the edge. Additionally, we've made programming much easier through thread and type safe database APIs, as well as automatic conflict resolution."

Key features of Couchbase Mobile 2.0

Here are some of the key features of Couchbase Mobile 2.0:

Full-text query and SQL search.
Data change events allow developers to build applications that respond more quickly. That's only going to be good for user experience.
Using WebSocket for replication makes replication more efficient, because "it eliminates continuously polling servers".
Data conflicts can now be resolved much more quickly.

This new release will help to cement Couchbase's position as a data platform. And with an impressive list of customers, including Wells Fargo, Tommy Hilfiger, eBay, and DreamWorks, it will be interesting to see to what extent it can grow that list.

Source: Globe Newswire

Cockroach Labs 2018 Cloud Report: AWS outperforms GCP hands down

Melisha Dsouza
14 Dec 2018
5 min read
While testing the features for CockroachDB 2.1, the team discovered that AWS offered 40% greater throughput than GCP. To understand the reason for this result, the team compared GCP and AWS on TPC-C performance (e.g., throughput and latency), CPU, network, I/O, and cost. The result is Cockroach Labs' 2018 Cloud Report, intended to help customers decide which cloud solution to go with, based on the most commonly faced questions: should they use Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure? How should they tune their workload for the different offerings? Which of the platforms is more reliable?

Note: They did not test Microsoft Azure due to bandwidth constraints, but will do so in the near future.

The tests conducted

For GCP, the team chose the n1-standard-16 machine with the Intel Xeon Scalable Processor (Skylake) in the us-east region. For AWS, they chose the latest compute-optimized instance type, c5d.4xlarge, to match n1-standard-16, because both have 16 CPUs and SSDs.

#1 TPC-C benchmarking test

The team tested workload performance using TPC-C. The results were surprising: CockroachDB 2.1 achieves 40% more throughput (tpmC) on TPC-C when tested on AWS using c5d.4xlarge than on GCP via n1-standard-16. They then ran TPC-C against some of the most popular AWS instance types, focusing on the higher-performing c5 series with SSDs, EBS-gp2, and EBS-io1 volume types. The AWS Nitro System present in the c5 and m5 series offers approximately similar or superior performance compared to a similar GCP instance. The results were clear: AWS wins on the TPC-C benchmark.

#2 CPU experiment

The team chose stress-ng because, according to them, it offered more benchmarks and provided more flexible configurations than the sysbench benchmarking test. Running the command stress-ng --metrics-brief --cpu 16 -t 1m five times on both AWS and GCP, they found that AWS offered 28% more throughput (~2,900) on stress-ng than GCP.

#3 Network throughput and latency test

The team measured network throughput using a tool called iPerf, and latency via PING; a detailed setup of the iPerf tool is given in their blog post. The tests were run four times each for AWS and GCP, and once again the results favored AWS. GCP showed a fairly normal distribution of network throughput centered at ~5.6 GB/sec, ranging from 4.01 GB/sec to 6.67 GB/sec, which the team calls "a somewhat unpredictable spread of network performance", reinforced by GCP's observed average variance of 0.487 GB/sec. AWS offers significantly higher throughput, centered on 9.6 GB/sec, with a much tighter spread between 9.60 GB/sec and 9.63 GB/sec. The AWS throughput variance is only 0.006 GB/sec, which means GCP's network throughput is 81x more variable than AWS's. The network latency test showed that AWS also has tighter network latency than GCP, with AWS's values centered on an average latency of 0.057 ms. In short, AWS offers significantly better network throughput and latency, with none of the variability present in GCP.

#4 I/O experiment

The team tested I/O using a configuration of sysbench that simulates small writes with frequent syncs, for both write and read performance. This test measures throughput for a fixed set of threads, i.e., the number of items concurrently writing to disk. The write tests showed that AWS consistently offers more write throughput across all thread counts from 1 up to 64; in fact, the difference can be as high as 67x. AWS also offers better average and 95th-percentile write latency across all thread tests. For reads, GCP provides marginally more throughput at 32 and 64 threads. For read latency, AWS tops the charts up to 32 threads; at 32 and 64 threads, GCP and AWS split the results. Overall, GCP offers marginally better read performance, with similar latency to AWS, at 32 threads and up.

The team also tested the "nobarrier" method of writing directly to disk without waiting for the write cache to be flushed. Here the results were the reverse of the experiments above: on GCP, nobarrier speeds things up by 6x, while on AWS it is only a 25% speed-up.

#5 Cost

Considering that AWS outperformed GCP on the TPC-C benchmarks, the team also compared the cost involved on both platforms. For both clouds they assumed the following discounts were available: on GCP, a three-year committed-use price discount with local SSD in the central region; on AWS, a three-year standard contract paid up front. They found that GCP is more expensive than AWS given the performance it showed in the tests: GCP costs 2.5 times more than AWS per tpmC.

In response to the report, Google Cloud developer advocate Seth Vargo posted a comment on Hacker News assuring users that Google's team would look into the tests and conduct their own benchmarking to provide customers with the much-needed answers to the questions the report raises. It will be interesting to see the results GCP comes up with in response.

Head over to cockroachlabs.com for more insights on the tests conducted.

CockroachDB 2.0 is out!
Cockroach Labs announced managed CockroachDB-as-a-Service
Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently
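The two normalizations the report leans on, throughput variance (section #3) and cost per tpmC (section #5), are straightforward to reproduce. The sample values below are illustrative stand-ins shaped like the report's figures, not Cockroach Labs' raw measurements.

```python
import statistics

# Illustrative iPerf-style throughput samples in GB/sec (not the
# report's raw data).
gcp_throughput = [4.01, 5.10, 5.60, 6.20, 6.67]
aws_throughput = [9.60, 9.61, 9.62, 9.62, 9.63]

def variance_ratio(a, b):
    """How many times more variable sample a is than sample b,
    using sample variance as the spread measure."""
    return statistics.variance(a) / statistics.variance(b)

def cost_per_tpmc(total_cost, tpmc):
    """Normalize a cloud bill by TPC-C throughput (tpmC), as in the
    report's cost comparison; lower is better."""
    return total_cost / tpmc

print(f"GCP throughput is {variance_ratio(gcp_throughput, aws_throughput):,.0f}x "
      "more variable than AWS in this sample")
```

The same ratio computed on the report's real samples is where the "81x more variable" and "2.5 times more per tpmC" figures come from.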