
Tech News - Cloud & Networking

376 Articles

Amazon EKS Windows Container Support is now generally available

Savia Lobo
10 Oct 2019
2 min read
A few days ago, Amazon announced the general availability of Windows container support on Amazon Elastic Kubernetes Service (EKS). The company announced a preview of Windows container support in March this year and invited customers to try it out and provide feedback. With Windows container support, development teams can now deploy applications designed to run on Windows Server on Kubernetes, alongside Linux applications. It will also bring more consistency to system logging, performance monitoring, and code deployment pipelines. "We are proud to be the first Cloud provider to have General Availability of Windows Containers on Kubernetes and look forward to customers unlocking the business benefits of Kubernetes for both their Windows and Linux workloads," the official post mentions.

A few considerations before deploying the worker nodes include:

- Windows workloads are supported with Amazon EKS clusters running Kubernetes version 1.14 or later.
- Amazon EC2 instance types C3, C4, D2, I2, M4 (excluding m4.16xlarge), and R3 are not supported for Windows workloads.
- Host networking mode is not supported for Windows workloads.
- Amazon EKS clusters must contain one or more Linux worker nodes to run core system pods that only run on Linux, such as coredns and the VPC resource controller.
- The kubelet and kube-proxy event logs are redirected to the Amazon EKS Windows Event Log and are set to a 200 MB limit.

In a demonstration, Martin Beeby, a principal evangelist for Amazon Web Services, created a new Amazon Elastic Kubernetes Service cluster (the walkthrough applies to any cluster running Kubernetes version 1.14 and above), added some new Windows nodes, and deployed a Windows application. For a complete demonstration and to know more about Amazon EKS Windows container support, read AWS' official blog post.

Amazon EBS snapshots exposed publicly leaking sensitive data in hundreds of thousands, security analyst reveals at DefCon 27
Amazon is being sued for recording children's voices through Alexa without consent
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available
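As a minimal pre-flight sketch (not from the article; the cluster and region names are placeholders), the Kubernetes version requirement listed above can be checked with boto3 before adding Windows node groups; Windows pods are then typically targeted with the standard kubernetes.io/os: windows node selector.

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")  # placeholder region

def supports_windows_nodes(cluster_name: str) -> bool:
    """Return True if the EKS cluster runs Kubernetes 1.14 or later."""
    version = eks.describe_cluster(name=cluster_name)["cluster"]["version"]
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (1, 14)

print(supports_windows_nodes("demo-cluster"))  # placeholder cluster name
```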


Fastly, edge cloud platform, files for IPO

Bhagyashree R
22 Apr 2019
3 min read
Last week, Fastly Inc., a provider of an edge cloud platform, announced that it has filed its proposed initial public offering (IPO) with the US Securities and Exchange Commission. Last year in July, in its last round of financing before a public offering, the company raised a $40 million investment. The book-running managers for the proposed offering are BofA Merrill Lynch, Citigroup, and Credit Suisse. William Blair, Raymond James, Baird, Oppenheimer & Co., Stifel, Craig-Hallum Capital Group, and D.A. Davidson & Co. are co-managers for the proposed offering.

Founded by Artur Bergman in 2011, Fastly is an American cloud computing services provider. Its edge cloud platform provides a content delivery network, internet security services, load balancing, and video and streaming services. The edge cloud platform is designed from the ground up to be programmable and to support agile software development. This programmable edge cloud platform gives developers real-time visibility and control by streaming logging data, so developers are able to instantly see the impact of new code in production, troubleshoot issues as they occur, and rapidly identify suspicious traffic. Fastly boasts of catering to customers like The New York Times, Reddit, GitHub, Stripe, Ticketmaster, and Pinterest.

In its preliminary prospectus, the company shared how it has grown over the years, the risks of investing in the company, its plans for the future, and more. The company shows steady growth in its revenue: it increased from $104.9 million in December 2017 to $144.6 million by the end of 2018. Its loss has also shown some decline, from $32.5 million in December 2017 to $30.9 million in December 2018. Predicting its future market value, the prospectus says, "When incorporating these additional offerings, we estimate a total market opportunity of approximately $18.0 billion in 2019, based on expected growth from 2017, to $35.8 billion in 2022, growing with an expected CAGR of 25.6%."

Fastly has not yet determined the number of shares to be offered or the price range for the proposed offering. Currently, the company's public filing has a placeholder amount of $100 million. However, looking at the amount of funding the company has received, TechCrunch predicts that it is more likely to get closer to $1 billion when it finally prices its shares.

Fastly has two classes of authorized common stock: Class A and Class B. The rights of both classes of common stockholders are identical, except with respect to voting and conversion. Each Class A share is entitled to one vote per share and each Class B share is entitled to 10 votes per share. Each Class B share is convertible into one share of Class A common stock. The Class A common stock will be listed on The New York Stock Exchange under the symbol "FSLY." To read more in detail, check out the IPO filing by Fastly.

Fastly open sources Lucet, a native WebAssembly compiler and runtime
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
Dark Web Phishing Kits: Cheap, plentiful and ready to trick you
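As a quick sanity check on the market-sizing figures quoted above, the implied compound annual growth rate can be reproduced in a couple of lines of Python; the small gap from the stated 25.6% is presumably down to rounding in the filing.

```python
# Implied CAGR from roughly $18.0B (2019) to $35.8B (2022), i.e. three years of growth.
start, end, years = 18.0, 35.8, 3
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints roughly 25.8%
```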


Bash 5.0 is here with new features and improvements

Natasha Mathur
08 Jan 2019
2 min read
The GNU project made version 5.0 of its popular POSIX shell Bash (Bourne Again Shell) available yesterday. Bash 5.0 introduces new improvements and features such as BASH_ARGV0, EPOCHSECONDS, and EPOCHREALTIME, among others. Bash was first released in 1989 and was created for the GNU project as a replacement for the Bourne shell. It is capable of performing functions such as interactive command line editing and job control on architectures that support it. It is a complete implementation of the IEEE POSIX shell and tools specification.

Key updates

New features

- Bash 5.0 comes with a newly added EPOCHSECONDS variable, which expands to the time in seconds since the Unix epoch.
- There is another newly added EPOCHREALTIME variable, which is similar to EPOCHSECONDS; the only difference is that this variable is a floating-point value with microsecond granularity.
- BASH_ARGV0 is also a newly added variable in Bash 5.0 that expands to $0 and sets $0 on assignment.
- There is a newly defined config-top.h in Bash 5.0. This allows the shell to use a static value for $PATH.
- Bash 5.0 has a new shell option that can enable and disable sending history to syslog at runtime.

Other changes

- The `globasciiranges' option is now enabled by default in Bash 5.0 and can be set to off by default at configuration time.
- POSIX mode is now capable of enabling the `shift_verbose' option.
- The `history' builtin in Bash 5.0 can now delete ranges of history entries using `-d start-end'.
- A change that caused strings containing backslashes to be flagged as glob patterns has been reverted in Bash 5.0.

For complete information on Bash 5.0, check out its official release notes.

GNU ed 1.15 released!
GNU Bison 3.2 got rolled out
GNU Guile 2.9.1 beta released JIT native code generation to speed up all Guile programs
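For a quick look at what the new timing variables expose, here is a small sketch that shells out from Python to a Bash binary. It assumes Bash 5.0 or later is installed and on PATH; on older versions the variables are unset and simply expand to empty strings.

```python
# Print Bash's version alongside the new EPOCHSECONDS / EPOCHREALTIME variables.
import subprocess

script = 'echo "bash $BASH_VERSION  EPOCHSECONDS=$EPOCHSECONDS  EPOCHREALTIME=$EPOCHREALTIME"'
result = subprocess.run(["bash", "-c", script], capture_output=True, text=True, check=True)
print(result.stdout.strip())
```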


Google introduces E2, flexible, performance-driven and cost-effective VMs for Google Compute Engine

Vincy Davis
12 Dec 2019
3 min read
Yesterday, June Yang, the director of product management at Google, announced a new beta version of the E2 VMs for Google Compute Engine. E2 features dynamic resource management that delivers reliable performance with flexible configurations and the best total cost of ownership (TCO) of any VM in Google Cloud. According to Yang, "E2 VMs are a great fit for a broad range of workloads including web servers, business-critical applications, small-to-medium sized databases, and development environments." Yang further adds, "For all but the most demanding workloads, we expect E2 to deliver similar performance to N1, at a significantly lower cost."

What are the key features offered by E2 VMs?

E2 VMs are built to offer 31% savings compared to N1, giving them the lowest total cost of ownership of any VM in Google Cloud. Thus, the VMs deliver sustainable performance at a consistently low price point. Unlike comparable options from other cloud providers, E2 VMs can support a high CPU load without complex pricing.

E2 VMs can be tailored with up to 16 vCPUs and 128 GB of memory and will only distribute the resources that the user needs, with the ability to use custom machine types. Custom machine types are ideal for scenarios where workloads require more processing power or more memory but don't need all of the upgrades provided by the next machine type level.

How E2 VMs achieve optimal efficiency

Large, efficient physical servers: E2 VMs automatically take advantage of continual improvements in machines by flexibly scheduling across the zone's available CPU platforms. With new hardware upgrades, E2 VMs are live migrated to newer and faster hardware, which allows them to automatically take advantage of these new resources.

Intelligent VM placement: In E2 VMs, Borg, Google's cluster management system, predicts how a newly added VM will perform on a physical server by observing the CPU, RAM, memory bandwidth, and other resource demands of the VMs. Borg then searches across thousands of servers to find the best location to add a VM. These observations by Borg ensure that a newly placed VM will be compatible with its neighbors and will not experience any interference from them.

Performance-aware live migration: After a VM is placed on a host, its performance is continuously monitored so that if there is an increase in demand for VMs, live migration can be used to transparently shift E2 load to other hosts in the data center.

A new hypervisor CPU scheduler: In order to meet E2 VMs' performance goals, Google has built a custom CPU scheduler with better latency and co-scheduling behavior than Linux's default scheduler. The new scheduler yields sub-microsecond average wake-up latencies with fast context switching, which helps keep the overhead of dynamic resource management negligible for nearly all workloads.

https://twitter.com/uhoelzle/status/1204972503921131521

Read the official announcement to know the custom VM shapes and predefined configurations offered by E2 VMs. You can also read part 2 of the announcement to know more about dynamic resource management in E2 VMs.

Why use JVM (Java Virtual Machine) for deep learning
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally
EU antitrust regulators are investigating Google's data collection practices, reports Reuters
Google will not support Cloud Print, its cloud-based printing solution starting 2021
Google Chrome 'secret' experiment crashes browsers of thousands of IT admins worldwide
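For a sense of how a tailored shape is expressed in practice, here is a minimal sketch that composes a custom E2 machine-type name of the form e2-custom-<vCPUs>-<memory in MB>, bounded by the limits quoted above (up to 16 vCPUs and 128 GB of memory). The naming pattern and the simple bounds check are stated here as assumptions; Compute Engine's documentation defines further per-vCPU memory constraints.

```python
def e2_custom_machine_type(vcpus: int, memory_gb: int) -> str:
    """Build a custom E2 machine-type name, e.g. 'e2-custom-4-16384'."""
    if not 1 <= vcpus <= 16:
        raise ValueError("E2 custom shapes support up to 16 vCPUs")
    if not 1 <= memory_gb <= 128:
        raise ValueError("E2 custom shapes support up to 128 GB of memory")
    return f"e2-custom-{vcpus}-{memory_gb * 1024}"

print(e2_custom_machine_type(4, 16))  # -> e2-custom-4-16384
```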


Microsoft partners expand the range of mission-critical applications you can run on Azure

Matthew Emerick
06 Oct 2020
14 min read
How the depth and breadth of the Microsoft Azure partner ecosystem enables thousands of organizations to bring their mission-critical applications to Azure.

In the past few years, IT organizations have been realizing compelling benefits when they transitioned their business-critical applications to the cloud, enabling them to address the top challenges they face with running the same applications on-premises. As even more companies embark on their digital transformation journey, the range of mission- and business-critical applications has continued to expand, even more so because technology drives innovation and growth. This has further accelerated in the past months, spurred in part by our rapidly changing global economy.

As a result, the definition of mission-critical applications is evolving and goes well beyond systems of record for many businesses. It's part of why we never stopped investing across the platform to enable you to increase the availability, security, scalability, and performance of your core applications running on Azure. The expansion of mission-critical apps will only accelerate as AI, IoT, analytics, and new capabilities become more pervasive.

We're seeing the broadening scope of mission-critical scenarios both within Microsoft and in many of our customers' industry sectors. For example, Eric Boyd, in his blog, outlined how companies in healthcare, insurance, sustainable farming, and other fields have chosen Microsoft Azure AI to transform their businesses. Applications like Microsoft Teams have now become mission-critical, especially this year, as many organizations had to enable remote workforces. This is also reflected by the sheer number of meetings happening in Teams.

Going beyond Azure services and capabilities

Many organizations we work with are eager to realize myriad benefits for their own business-critical applications, but first need to address questions around their cloud journey, such as:

- Are the core applications I use on-premises certified and supported on Azure?
- As I move to Azure, can I retain the same level of application customization that I have built over the years on-premises?
- Will my users experience any impact in the performance of my applications?

In essence, they want to make sure that they can continue to capitalize on the strategic collaboration they've forged with their partners and ISVs as they transition their core business processes to the cloud. They want to continue to use the very same applications that they spent years customizing and optimizing on-premises. Microsoft understands that running your business on Azure goes beyond the services and capabilities that any platform can provide. You need a comprehensive ecosystem. Azure has always been partner-oriented, and we continue to strengthen our collaboration with a large number of ISVs and technology partners, so you can run the applications that are critical to the success of your business operations on Azure.

A deeper look at the growing spectrum of mission-critical applications

Today, you can run thousands of third-party ISV applications on Azure. Many of these ISVs in turn depend on Azure to deliver their software solutions and services. Azure has become a mission-critical platform for our partner community as well as our customers.

When most people think of mission-critical applications, enterprise resource planning (ERP), supply chain management (SCM), product lifecycle management (PLM), and customer relationship management (CRM) applications are often the first examples that come to mind. However, to illustrate the depth and breadth of our mission-critical ecosystem, consider these distinct and very different categories of applications that are critical for thousands of businesses around the world:

- Enterprise resource planning (ERP) systems.
- Data management and analytics applications.
- Backup and business continuity solutions.
- High-performance computing (HPC) scenarios that exemplify the broadening of business-critical applications that rely on public cloud infrastructure.

Azure's deep ecosystem addresses the needs of customers in all of these categories and more.

ERP systems

When most people think of mission-critical applications, ERP, SCM, PLM, and CRM applications are often the first examples that come to mind. Some examples on Azure include:

- SAP—We have been empowering our enterprise customers to run their most mission-critical SAP workloads on Azure, bringing the intelligence, security, and reliability of Azure to their SAP applications and data.
- Viewpoint, a Trimble company—Viewpoint has been helping the construction industry transform through integrated construction management software and solutions for more than 40 years. To meet the scalability and flexibility needs of both Viewpoint and their customers, a significant portion of their clients are now running their software suite on Azure and experiencing tangible benefits.

Data management and analytics

Data is the lifeblood of the enterprise. Our customers are experiencing an explosion of mission-critical data sources, from the cloud to the edge, and analytics are key to unlocking the value of data in the cloud. AI is a key ingredient, and yet another compelling reason to modernize your core apps on Azure.

- DataStax—DataStax Enterprise, a scale-out, hybrid, cloud-native NoSQL database built on Apache Cassandra™, in conjunction with Azure, can provide a foundation for personalized, real-time scalable applications. Learn how this combination can enable enterprises to run mission-critical workloads to increase business agility, without compromising compliance and data governance.
- Informatica—Informatica has been working with Microsoft to help businesses ensure that the data driving your customer and business decisions is trusted, authenticated, and secure. Specifically, Informatica is focused on the quality of the data that is powering your mission-critical applications and can help you derive the maximum value from your existing investments.
- SAS®—Microsoft and SAS are enabling customers to easily run their SAS workloads in the cloud, helping them unlock critical value from their digital transformation initiatives. As part of our collaboration, SAS is migrating its analytical products and industry solutions onto Azure as the preferred cloud provider for the SAS Cloud. Discover how mission-critical analytics is finding a home in the cloud.

Backup and disaster recovery solutions

Uptime and disaster recovery plans that minimize recovery time objective (RTO) and recovery point objective (RPO) are the top metrics senior IT decision-makers pay close attention to when it comes to mission-critical environments. Backing up critical data is a key element of putting in place robust business continuity plans.

Azure provides built-in backup and disaster recovery features, and we also partner with industry leaders like Commvault, Rubrik, Veeam, Veritas, Zerto, and others so you can keep using your existing applications no matter where your data resides.

- Commvault—We continue to work with Commvault to deliver data management solutions that enable higher resiliency, visibility, and agility for business-critical workloads and data in our customers' hybrid environments. Learn about Commvault's latest offerings, including support for Azure VMware Solution and why their Metallic SaaS suite relies exclusively on Azure.
- Rubrik—Learn how Rubrik helps enterprises achieve low RTOs, self-service automation at scale, and accelerated cloud adoption.
- Veeam—Read how you can use Veeam's solution portfolio to back up, recover, and migrate mission-critical workloads to Azure.
- Veritas—Find out how Veritas InfoScale has advanced integration with Azure that simplifies the deployment and management of your mission-critical applications in the cloud.
- Zerto—Discover how the extensive capabilities of Zerto's platform help you protect mission-critical applications on Azure.
- Teradici—Finally, Teradici underscores how the lines between mission-critical and business-critical are blurring. Read how business continuity plans are being adjusted to include longer-term scenarios.

HPC scenarios

HPC applications are often the most intensive and highest-value workloads in a company, and are business-critical in many industries, including financial services, life sciences, energy, manufacturing, and more. The biggest and most audacious innovations, from supporting the fight against COVID-19 to 5G semiconductor design, from aerospace engineering design processes to the development of autonomous vehicles, and so much more, are being driven by HPC.

- Ansys—Explore how Ansys Cloud on Azure has proven to be vital for business continuity during unprecedented times.
- Rescale—Read how Rescale can provide a turnkey platform for engineers and researchers to quickly access Azure HPC resources, easing the transition of business-critical applications to the cloud.

You can rely on the expertise of our partner community

Many organizations continue to accelerate the migration of their core applications to the cloud, realizing tangible and measurable value in collaboration with our broad partner community, which includes global system integrators like Accenture, Avanade, Capgemini, Wipro, and many others. For example, UnifyCloud recently helped a large organization in the financial sector modernize their data estate on Azure while achieving a 69 percent reduction in IT costs.

We are excited about the opportunities ahead of us, fueled by the power of our collective imagination. Learn more about how you can run business-critical applications on Azure and increase business resiliency. Watch our Microsoft Ignite session for a deeper dive and demo.

"The construction industry relies on Viewpoint to build and host the mission-critical technology used to run their businesses, so we have the highest possible standards when it comes to the solutions we provide. Working with Microsoft has allowed us to meet those standards in the Azure cloud by increasing scalability, flexibility and reliability – all of which enable our customers to accelerate their own digital transformations and run their businesses with greater confidence."—Dan Farner, Senior Vice President of Product Development, Viewpoint (a Trimble Company)
Read the Gaining Reliability, Scalability, and Customer Satisfaction with Viewpoint on Microsoft Azure blog.

"Business critical applications require a transformational data architecture built on scale-out data and microservices to enable dramatically improved operations, developer productivity, and time-to-market. With Azure and DataStax, enterprises can now run mission critical workloads with zero downtime at global scale to achieve business agility, compliance, data sovereignty, and data governance."—Ed Anuff, Chief Product Officer, DataStax
Read the Application Modernization for Data-Driven Transformation with DataStax Enterprise on Microsoft Azure blog.

"As Microsoft's 2020 Data Analytics Partner of Year, Informatica works hand-in-hand with Azure to solve mission critical challenges for our joint customers around the world and across every sector. The combination of Azure's scale, resilience and flexibility, along with Informatica's industry-leading Cloud-Native Data Management platform on Azure, provides customers with a platform they can trust with their most complex, sensitive and valuable business critical workloads."—Rik Tamm-Daniels, Vice President of strategic ecosystems and technology, Informatica
Read the Ensuring Business-Critical Data Is Trusted, Available, and Secure with Informatica on Microsoft Azure blog.

"SAS and Microsoft share a vision of helping organizations make better decisions as they strive to serve customers, manage risks and improve operations. Organizations are moving to the cloud at an accelerated pace. Digital transformation projects that were scheduled for the future now have a 2020 delivery date. Customers realize analytics and cloud are critical to drive their digital growth strategies. This partnership helps them quickly move to Microsoft Azure, so they can build, deploy, and manage analytic workloads in a reliable, high-performant and cost-effective manner."—Oliver Schabenberger, Executive Vice President, Chief Operating Officer and Chief Technology Officer, SAS
Read the Mission-critical analytics finds a home in the cloud blog.

"Microsoft is our Foundation partner and selecting Microsoft Azure as our platform to host and deliver Metallic was an easy decision. This decision sparks customer confidence due to Azure's performance, scale, reliability, security and offers unique Best Practice guidance for customers and partners. Our customers rely on Microsoft and Azure-centric Commvault solutions every day to manage, migrate and protect critical applications and the data required to support their digital transformation strategies."—Randy De Meno, Vice President/Chief Technology Officer, Microsoft Practice & Solutions
Read the Commvault extends collaboration with Microsoft to enhance support for mission-critical workloads blog.

"Enterprises depend on Rubrik and Azure to protect mission-critical applications in SAP, Oracle, SQL and VMware environments. Rubrik helps enterprises move to Azure securely, faster, and with a low TCO using Rubrik's automated tiering to Azure Archive Storage. Security minded customers appreciate that with Rubrik and Microsoft, business critical data is immutable, preventing ransomware threats from accessing backups, so businesses can quickly search and restore their information on-premises and in Azure."—Arvind Nithrakashyap, Chief Technology Officer and Co-Founder, Rubrik
Learn how enterprises use Rubrik on Azure.

"Veeam continues to see increased adoption of Microsoft Azure for business-critical applications and data across our 375,000 plus global customers. While migration of applications and data remains the primary barrier to the public cloud, we are committed to helping eliminate these challenges through a unified Cloud Data Management platform that delivers simplicity, flexibility and reliability at its core, while providing unrivaled data portability for greater cost controls and savings. Backed by the unique Veeam Universal License – a portable license that moves with workloads to ensure they're always protected – our customers are able to take control of their data by easily migrating workloads to Azure, and then continue protecting and managing them in the cloud."—Danny Allan, Chief Technology Officer and Senior Vice President for Product Strategy, Veeam
Read the Backup, recovery, and migration of mission-critical workloads on Azure blog.

"Thousands of customers rely on Veritas to protect their data both on-premises and in Azure. Our partnership with Microsoft helps us drive the data protection solutions that our enterprise customers rely on to keep their business-critical applications optimized and immediately available."—Phil Brace, Chief Revenue Officer, Veritas
Read the Migrate and optimize your mission-critical applications in Microsoft Azure with Veritas InfoScale blog.

"Microsoft has always leveraged the expertise of its partners to deliver the most innovative technology to customers. Because of Zerto's long-standing collaboration with Microsoft, Zerto's IT Resilience platform is fully integrated with Azure and provides a robust, fully orchestrated solution that reduces data loss to seconds and downtime to minutes. Utilizing Zerto's end-to-end, converged backup, DR, and cloud mobility platform, customers have proven time and time again they can protect mission-critical applications during planned or unplanned disruptions that include ransomware, hardware failure, and numerous other scenarios using the Azure cloud – the best cloud platform for IT resilience in the hybrid cloud environment."—Gil Levonai, CMO and SVP of Product, Zerto
Read the Protecting Critical Applications in the Cloud with the Zerto Platform blog.

"The longer business continues to be disrupted, the more the lines blur and business critical functions begin to shift to mission critical, making virtual desktops and workstations on Microsoft Azure an attractive option for IT managers supporting remote workforces in any function or industry. Teradici Cloud Access Software offers a flexible and secure solution that supports demanding business critical and mission critical workloads on Microsoft Azure and Azure Stack with exceptional performance and fidelity, helping businesses gain efficiency and resilience within their business continuity strategy."—John McVay, Director of Strategic Alliances, Teradici
Read the Longer IT timelines shift business critical priorities to mission critical blog.

"It is imperative for Ansys to support our customers' accelerating needs for on-demand high performance computing to drive their increasingly complex engineering requirements. Microsoft Azure, with its purpose-built HPC and robust go-to-market capabilities, was a natural choice for us, and together we are enabling our joint customers to keep designing innovative products even as they work from home."—Navin Budhiraja, Vice President and General Manager, Cloud and Platform, Ansys
Read the Ansys Cloud on Microsoft Azure: A vital resource for business continuity during the pandemic blog.

"Robust and stable business critical systems are paramount for success. Rescale customers leveraging Azure HPC resources are taking advantage of the scalability, flexibility and intelligence to improve R&D, accelerate development and reduce costs not possible with a fixed infrastructure."—Edward Hsu, Vice President of Product, Rescale
Read the Business Critical Systems that Drive Innovation blog.

"Customers are transitioning business-critical workloads to Azure and realizing significant cost benefits while modernizing their applications. Our solutions help customers develop cloud strategy, modernize quickly, and optimize cloud environments while minimizing risk and downtime."—Vivek Bhatnagar, Co-Founder and Chief Technology Officer, UnifyCloud
Read the Moving mission-critical applications to the cloud: More important than ever blog.


CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries

Fatema Patrawala
14 Nov 2019
3 min read
The Cloud Native Computing Foundation (CNCF), which builds sustainable ecosystems for cloud native software, yesterday announced the stable release of Helm 3. Helm is a package manager for Kubernetes and a tool for managing charts of pre-configured Kubernetes resources.

"Helm is one of our fastest-growing projects in contributors and users contributing back to the project," said Chris Aniszczyk, CTO, CNCF. "Helm is a powerful tool for all Kubernetes users to streamline deployments, and we're impressed by the progress the community has made with this release in growing their community."

According to the team, the internal implementation of Helm 3 has changed considerably from Helm 2. The most important change is the removal of Tiller, a service that communicated with the Kubernetes API to manage Helm packages. There are also improvements to chart repositories, release management, security, and library charts.

Helm uses a packaging format called charts, which are collections of files describing a related set of Kubernetes resources. These charts can then be packaged into versioned archives to be deployed. Helm 2 defined a workflow for creating, installing, and managing these charts. Helm 3 builds upon that workflow, changing the underlying infrastructure to reflect the needs of the community as they change and evolve. In this release, the Helm maintainers incorporated feedback and requests from the community to better address the needs of Kubernetes users and the broad cloud native ecosystem.

Helm 3 is ready for public deployment

Last week, third-party security firm Cure53 completed their open source security audit of Helm 3, mentioning Helm's mature focus on security, and concluded that Helm 3 is "recommended for public deployment." According to the report, "in light of the findings stemming from this CNCF-funded project, Cure53 can only state that the Helm projects the impression of being highly mature. This verdict is driven by a number of different factors… and essentially means that Helm can be recommended for public deployment, particularly when properly configured and secured in accordance to recommendations specified by the development team."

"When we built Helm, we set out to create a tool to serve as an 'on-ramp' to Kubernetes. With Helm 3, we have really accomplished that," said Matt Fisher, the Helm 3 release manager. "Our goal has always been to make it easier for the Kubernetes user to create, share, and run production-grade workloads. The core maintainers are really excited to hit this major milestone, and we look forward to hearing how the community is using Helm 3."

Helm 3 is a joint community effort, with core maintainers from organizations including Microsoft, Samsung SDS, IBM, and Blood Orange. According to the team, the next phase of Helm's development will see new features targeted toward stability and enhancements to existing features. Features on the roadmap include enhanced functionality for helm test, improvements to Helm's OCI integration, and enhanced functionality for the Go client libraries.

To know more about this news, read the official announcement from the Cloud Native Computing Foundation.
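To see the Tiller-free model in practice, here is a minimal sketch (not from the announcement) that drives the Helm 3 CLI from Python; Helm 3 talks to the Kubernetes API directly through your kubeconfig, and release state lives in the cluster itself rather than in a Tiller deployment.

```python
# Assumes the helm 3 binary and a valid kubeconfig are available locally.
import subprocess

def helm(*args: str) -> str:
    """Run a helm subcommand and return its stdout."""
    return subprocess.run(["helm", *args], capture_output=True, text=True, check=True).stdout

print(helm("version", "--short"))        # e.g. v3.x.y, with no Tiller version listed
print(helm("list", "--all-namespaces"))  # release metadata is read from the cluster itself
```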
StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
An unpatched security issue in the Kubernetes API is vulnerable to a "billion laughs" attack
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security

Microsoft Ignite 2018: New Azure announcements you need to know

Melisha Dsouza
25 Sep 2018
4 min read
If you missed the Azure announcements made at Microsoft Ignite 2018, don't worry, we've got you covered. Here are some of the biggest changes and improvements the Microsoft Azure team has made to its cloud offering.

Infrastructure improvements

Azure's new capabilities to deliver the best infrastructure for every workload include:

1. GPU-enabled and high-performance VMs
To deliver the best infrastructure for every workload, Azure has announced the preview of GPU-enabled and high-performance computing virtual machines. The two new N-series virtual machines, NVv2 and NDv2, have NVIDIA GPU capabilities. The two new H-series VMs, HB and HC, are optimized for performance and cost and are aimed at HPC workloads like fluid dynamics, structural mechanics, energy exploration, weather forecasting, risk analysis, and more.

2. Networking
Azure has announced the general availability of Azure Firewall and Virtual WAN. It has also announced the preview of Azure Front Door Service, ExpressRoute Global Reach, and ExpressRoute Direct. Azure Firewall has built-in high availability and cloud scalability. Virtual WAN provides a simple, unified, global connectivity and security platform to deploy large-scale branch connectivity.

3. Improved disk storage
Microsoft has expanded the portfolio of Azure Disk offerings to deploy any app in Azure, including those that are the most IO intensive. The new previews include Ultra SSDs, Standard SSDs, and larger managed disk sizes to help deal with data-intensive workloads. These will also ensure better availability, reliability, and latency compared to standard SSDs.

4. Hybrid
Microsoft has announced new hybrid capabilities to manage data, create even more consistency, and secure hybrid environments. It has introduced Azure Data Box Edge, Windows Server 2019, and Azure Stack. With AI-enabled edge computing capabilities and an OS that supports hybrid management and flexible application deployment, Azure is causing waves in the developer community.

Built-in security and management

For improved security, Azure has announced new services for preview, like the Confidential Computing DC VM series, Secure Score, improved threat protection, and network map (preview). These will expand Azure security controls and services to protect networks, applications, data, and identities. These services are enhanced by the unique intelligence that comes from the trillions of signals Microsoft collects in running first-party services like Office 365 and Xbox.

For better management, Azure has announced the preview of Azure Blueprints. These blueprints make it easy to deploy and update Azure environments in a repeatable manner using composable artifacts such as policies, role-based access controls, and resource templates. Azure Cost Management in the Azure portal (preview) will help users access cost management from Power BI or directly from their own custom applications.

Migration

To make migration to the cloud less challenging, Azure has announced support for Hyper-V assessments in Azure Migrate and Azure SQL Database Managed Instance, which enables users to migrate SQL Server to a fully managed Azure service. To improve the migration experience, Microsoft also announced that if you migrate Windows Server or SQL Server 2008/R2 to Azure, you will get three years of free extended security updates on those systems. This could save you some money when Windows Server and SQL Server 2008/R2 reach end of support (EOS).

Automated ML capability in Azure Machine Learning

The problem of finding the best machine learning pipeline for a given dataset scales faster than the time available for data science projects. Azure's automated machine learning enables developers to access an automated service that identifies the best machine learning pipelines for their labelled data. Data scientists are empowered with a powerful productivity tool that also takes uncertainty into account, incorporating a probabilistic model to determine the best pipeline to try next.

To follow more of the Azure buzz, head to Microsoft's official blog.

Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Azure Functions 2.0 launches with better workload support for serverless
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
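The automated ML capability described above is typically driven from the Azure Machine Learning Python SDK. The sketch below is an illustrative outline only: the workspace, dataset, and column names are placeholders, parameter names can differ across SDK versions, and this is not Microsoft's sample code.

```python
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                            # loads config.json for an existing workspace
data = Dataset.get_by_name(ws, "labelled-sales-data")   # placeholder registered dataset

automl_config = AutoMLConfig(
    task="classification",            # AutoML searches candidate pipelines for this task type
    training_data=data,
    label_column_name="churned",      # placeholder label column
    primary_metric="AUC_weighted",    # metric used to rank candidate pipelines
)

run = Experiment(ws, "automl-demo").submit(automl_config)
run.wait_for_completion(show_output=True)
```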


Amazon announces improved VPC networking for AWS Lambda functions

Amrata Joshi
04 Sep 2019
3 min read
Yesterday, the team at Amazon announced improved VPC (Virtual Private Cloud) networking for AWS Lambda functions. It is a major improvement in how AWS Lambda functions work with Amazon VPC networks.

If a Lambda function is not configured to connect to your VPCs, the function can access anything available on the public internet, including other AWS services, HTTPS endpoints for APIs, or endpoints and services outside AWS. In that case, the function has no way to connect to private resources inside your VPC. When a Lambda function is configured to connect to your own VPC, it creates an elastic network interface within the VPC and does a cross-account attachment.

Image Source: Amazon

These Lambda functions run inside the Lambda service's VPC, but they can only access resources over the network through your VPC. Even so, the user still won't have direct network access to the execution environment where the functions run.

What has changed in the new model?

AWS Hyperplane for providing NAT (Network Address Translation) capabilities
The team is using AWS Hyperplane, the Network Function Virtualization platform that is used for Network Load Balancer and NAT Gateway. It has also supported inter-VPC connectivity for AWS PrivateLink. With the help of Hyperplane, the team will provide NAT capabilities from the Lambda VPC to customer VPCs.

Network interfaces within the VPC are mapped to the Hyperplane ENI
The Hyperplane ENI (Elastic Network Interface), a network resource controlled by the Lambda service, allows multiple execution environments to securely access resources within the VPCs in your account. In the previous model, the network interfaces in your VPC were directly mapped to Lambda execution environments; now, the network interfaces within your VPC are mapped to the Hyperplane ENI.

Image Source: Amazon

How is Hyperplane useful?

To reduce latency
When a function is invoked, the execution environment now uses the pre-created network interface and establishes a network tunnel to it, which reduces latency.

To reuse network interfaces across functions
Each unique security group and subnet combination across functions in your account needs a distinct network interface. If such a combination is shared across multiple functions in your account, it is now possible to reuse the same network interface across functions.

What remains unchanged?

- AWS Lambda functions will still need IAM permissions for creating and deleting network interfaces in your VPC.
- Users can still control the subnet and security group configurations of the network interfaces.
- Users still need a NAT device (for example, a VPC NAT Gateway) to give a function internet access, or VPC endpoints to connect to services outside of their VPC.
- The types of resources that your functions can access within the VPCs remain the same.

The official post reads, "These changes in how we connect with your VPCs improve the performance and scale for your Lambda functions. They enable you to harness the full power of serverless architectures." To know more about this news, check out the official post.

What's new in cloud & networking this week?
Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
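For context on what "configuring a function to connect to your VPC" looks like from the API side, here is a small boto3 sketch; the function name, subnet, and security group IDs are placeholders, and the Hyperplane-backed ENI behaviour described above is handled entirely by the Lambda service once this configuration is in place.

```python
# Attach an existing Lambda function to a VPC by setting its VpcConfig.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")  # placeholder region

lambda_client.update_function_configuration(
    FunctionName="my-example-function",                # placeholder function name
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],     # placeholder subnet
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder security group
    },
)
```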


Why did Slack suffer an outage on Friday?

Fatema Patrawala
01 Jul 2019
4 min read
On Friday, Slack, an instant messaging platform for workspaces, confirmed news of a global outage. Millions of users reported disruption in services due to the outage, which occurred early Friday afternoon. Slack experienced a performance degradation issue impacting users from all over the world, with multiple services being down. Yesterday the Slack team posted a detailed incident summary report of the service restoration.

The Slack status page read: "On June 28, 2019 at 4:30 a.m. PDT some of our servers became unavailable, causing degraded performance in our job processing system. This resulted in delays or errors with features such as notifications, unfurls, and message posting. At 1:05 p.m. PDT, a separate issue increased server load and dropped a large number of user connections. Reconnection attempts further increased the server load, slowing down customer reconnection. Server capacity was freed up eventually, enabling all customers to reconnect by 1:36 p.m. PDT. Full service restoration was completed by 7:20 p.m. PDT. During this period, customers faced delays or failure with a number of features including file uploads, notifications, search indexing, link unfurls, and reminders. Now that service has been restored, the response team is continuing their investigation and working to calculate service interruption time as soon as possible. We're also working on preventive measures to ensure that this doesn't happen again in the future. If you're still running into any issues, please reach out to us at feedback@slack.com."

https://twitter.com/SlackStatus/status/1145541218044121089

These were the various services affected by the outage:

- Notifications
- Calls
- Connections
- Search
- Messaging
- Apps/Integrations/APIs
- Link Previews
- Workspace/Org Administration
- Posts/Files

Timeline of Friday's Slack outage

According to user reports, some Slack messages were not delivered, with users receiving an error message. On Friday, at 2:54 PM GMT+3, the Slack status page gave the initial signs of the issue: "Some people may be having an issue with Slack. We're currently investigating and will have more information shortly. Thank you for your patience."

https://twitter.com/SlackStatus/status/1144577107759996928

According to Down Detector, Slack users noted that message editing also appeared to be impacted by the latest bug. Comments indicated it was down around the world, including Sweden, Russia, Argentina, Italy, the Czech Republic, Ukraine, and Croatia. The Slack team continued to give updates on the issue, and on Friday evening they reported that services were getting back to normal.

https://twitter.com/SlackStatus/status/1144806594435117056

The news gained much attention on Twitter, with many commenting that Slack was already prepping for the weekend.

https://twitter.com/RobertCastley/status/1144575285980999682
https://twitter.com/Octane/status/1144575950815932422
https://twitter.com/woutlaban/status/1144577117788790785

Users on Hacker News compared Slack with other messaging platforms like Mattermost, Zulip, and Rocket.Chat. One of the user comments read, "Just yesterday I was musing that if I were King of the (World|Company) I'd want an open-source Slack-alike that I could just drop into the Cloud of my choice and operate entirely within my private network, subject to my own access control just like other internal services, and with full access to all message histories in whatever database-like thing it uses in its Cloud. Sure, I'd still have a SPOF but it's game over anyway if my Cloud goes dark. Is there such a project, and if so does it have any traction in the real world?"

To this another user responded, "We use this at my company - perfectly reasonable UI, don't know about the APIs/integrations, which I assume are way behind Slack…"

Another user also responded, "Zulip, Rocket.Chat, and Mattermost are probably the best options."

Slack stock surges 49% on the first trading day on the NYSE after direct public offering
Dropbox gets a major overhaul with updated desktop app, new Slack and Zoom integration
Slack launches Enterprise Key Management (EKM) to provide complete control over encryption keys


Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions

Savia Lobo
05 Nov 2019
3 min read
Yesterday, at Microsoft Ignite 2019 in Orlando, the company announced the preview of its first full-stack, scalable, general open cloud ecosystem, 'Azure Quantum'. For developers, Microsoft has specifically created the open-source Quantum Development Kit, which includes all of the tools and resources you need to start learning and building quantum solutions.

Azure Quantum is a set of quantum services, including pre-built solutions, software, and quantum hardware, providing developers and customers access to some of the most competitive quantum offerings in the market. For this offering, Microsoft has partnered with 1QBit, Honeywell, IonQ, and QCI.

With the Azure Quantum service, anyone can gain deeper insights about quantum computing through a series of tools and learning tutorials, such as the quantum katas. It also allows developers to write programs with Q# and the QDK and experiment with running the code against simulators and a variety of quantum hardware. Customers can also solve complex business challenges with pre-built solutions and algorithms running in Azure.

According to Wired, "Azure Quantum has similarities to a service from IBM, which has offered free and paid access to prototype quantum computers since 2016. Google, which said last week that one of its quantum processors had achieved a milestone known as 'quantum supremacy' by outperforming a top supercomputer, has said it will soon offer remote access to quantum hardware to select companies."

Microsoft's Azure Quantum model is more like the existing computing industry, where cloud providers allow customers to choose processors from companies such as Intel and AMD, says William Hurley, CEO of startup Strangeworks. This startup offers services for programmers to build and collaborate with quantum computing tools from IBM, Google, and others.

With just a single program, users will be able to target a variety of hardware through Azure Quantum: Azure classical computing, quantum simulators and resource estimators, and quantum hardware from Microsoft's partners, as well as its future quantum system being built on a revolutionary topological qubit.

Microsoft, on its official website, announced that Azure Quantum will be launched in private preview in the coming months. Many users are excited to try the quantum service from Azure.

https://twitter.com/Daniel_Rubino/status/1191364279339036673

To know more about Azure Quantum in detail, visit Microsoft's official page.

Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
Using Qiskit with IBM QX to generate quantum circuits [Tutorial]
How to translate OpenQASM programs in IBM QX into quantum scores [Tutorial]

StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security

Savia Lobo
12 Sep 2019
3 min read
Today, StackRox, a company providing threat protection for containers and Kubernetes, announced the availability of the StackRox App for the Sumo Logic Continuous Intelligence Platform.

The StackRox App for Sumo Logic provides customers with critical insights into misconfigurations and security events for their container and Kubernetes environments directly within their Sumo Logic Dashboard. Using this app, security teams can view StackRox data regarding vulnerabilities, misconfigurations, runtime threats, and other policy violations within Sumo Logic and streamline their remediation efforts.

John Coyle, vice president of business development for Sumo Logic, said, "We're excited to launch our Kubernetes security integration with StackRox since it will enable customers to gain unparalleled insights and operational metrics in a single dashboard to ensure their cloud-native environments are continuously protected." "The StackRox Kubernetes-native container security platform provides unique context on misconfigurations, risk profiling, and runtime incidents that will enable our joint customers to more quickly identify and address security issues," Coyle further added.

The StackRox App for Sumo Logic provides several key metrics, such as vulnerabilities, runtime threats, and compliance violations across container and Kubernetes environments, through the following dashboards:

- StackRox Overview: offers a snapshot of key metrics about an organization's overall Kubernetes and container security posture
- StackRox Image Violations: displays information from StackRox's image scanning and vulnerability management capabilities and prioritizes security issues in container images based on rich context derived from Kubernetes
- StackRox Kubernetes Violations: highlights a prioritized list of misconfigurations of Kubernetes components based on more than 70 DevOps and security best practices
- StackRox Runtime Violations: provides insights into threats and other suspicious activity at runtime based on continuous monitoring of every single container within Kubernetes environments

Richard Reinders, manager of security operations for Looker, a joint StackRox and Sumo Logic customer, said, "StackRox gives us a Kubernetes-centric single pane of glass view into the security posture of our multi-cloud infrastructure. Having StackRox's unique Kubernetes security insights available directly on our Sumo Logic Dashboard provides us with a single place to view security and compliance details alongside our operational analytics for our cloud-native infrastructure. This integration also allows us to use a single, consistent, security event detection and response pipeline."

To know more about the StackRox App for Sumo Logic, head over to its official website.

Other interesting news in security

CNCF-led open-source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
Over 47K Supermicro servers' BMCs are prone to USBAnywhere, a remote virtual media vulnerability
Espressif IoT devices susceptible to WiFi vulnerabilities can allow hijackers to crash devices connected to enterprise networks


GitLab retracts its privacy invasion policy after backlash from community

Vincy Davis
25 Oct 2019
3 min read
Yesterday, GitLab retracted its earlier decision to implement user level product usage tracking on their websites after receiving negative feedback from its users. https://twitter.com/gitlab/status/1187408628531322886 Two days ago, GitLab informed its users that starting from its next yet to be released version (version 12.4), there would be an addition of Javascript snippets in GitLab.com (GitLab’s SaaS offering) and GitLab's proprietary Self-Managed packages (Starter, Premium, and Ultimate) websites. These Java snippets will be used to interact with GitLab and other third-party SaaS telemetry services. Read More: GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more GitLab.com users were specifically notified that until they accept the new service terms condition, their access to the web interface and API will be blocked. This meant that users with integration to the API will experience a brief pause of service, until the new terms are accepted by signing in to the web interface. The self-managed users, on the other hand, were apprised that they can continue to use the free software GitLab Core without any changes. The DevOps coding platform says that SaaS telemetry products are important tools to understand the analytics on user behaviour inside web-based applications. According to the company, these additional user information will help in increasing their website speed and also enrich user experience. “GitLab has a lot of features, and a lot of users, and it is time that we use telemetry to get the data we need for our product managers to improve the experience,” stated the official blog. The telemetry tools will use JavaScript snippets that will be executed in the user’s browser and will send the user information back to the telemetry service. Read More: GitLab faces backlash from users over performance degradation issues tied to redis latency The company had also assured users that they will disclose all the whereabouts of the user information in the privacy policy. They also ensured that the third-party telemetry service will have data protection standards equivalent to their own standard and will also aim for their SOC2 compliance. If any user does not wish to be tracked, they can turn on the Do Not Track (DNT) mechanism in their GitLab.com or GitLab Self-Managed web browser. The DNT mechanism will not load the  the JavaScript snippet. “The only downside to this is that users may also not get the benefit of in-app messaging or guides that some third-party telemetry tools have that would require the JavaScript snippet,” added the official blog. Following this announcement, GitLab received loads of negative feedback from users. https://twitter.com/PragmaticAndy/status/1187420028653723649 https://twitter.com/Cr0ydon/status/1187380142995320834 https://twitter.com/BlindMyStare/status/1187400169303789568 https://twitter.com/TheChanceSays/status/1187095735558238208 Although, GitLab has rolled backed the Telemetry service changes for now, and are re-considering their decision, many users are warning them to drop the idea completely. 
https://twitter.com/atom0s/status/1187438090991751168
https://twitter.com/ry60003333/status/1187601207046524928
https://twitter.com/tresronours/status/1187543188703186949

DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020

GitLab goes multicloud using Crossplane with kubectl

Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim

PostGIS 3.0.0 releases with raster support as a separate extension

Electron 7.0 releases in beta with Windows on Arm 64 bit, faster IPC methods, nativetheme API and more
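As referenced above, here is a minimal, hypothetical TypeScript sketch of a Do Not Track gate (our illustration, not GitLab's actual code). The snippet URL and function names are placeholders; it simply checks the browser's DNT signal before injecting a third-party telemetry script.

```typescript
// Hypothetical illustration only -- not GitLab's implementation.
// Checks the browser's Do Not Track preference before injecting a
// third-party telemetry snippet into the page.
function dntEnabled(): boolean {
  // Browsers have historically exposed the DNT preference under
  // slightly different property names.
  const dnt =
    navigator.doNotTrack ||
    (window as any).doNotTrack ||
    (navigator as any).msDoNotTrack;
  return dnt === "1" || dnt === "yes";
}

function loadTelemetrySnippet(src: string): void {
  if (dntEnabled()) {
    // Respect the user's preference: skip the tracker entirely.
    return;
  }
  const script = document.createElement("script");
  script.async = true;
  script.src = src; // a third-party SaaS telemetry endpoint
  document.head.appendChild(script);
}

// Placeholder URL -- a real telemetry vendor would supply its own.
loadTelemetrySnippet("https://telemetry.example.com/snippet.js");
```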


Amazon S3 is retiring support for path-style API requests; sparks censorship fears

Fatema Patrawala
06 May 2019
5 min read
Last Tuesday, Amazon announced that Amazon S3 will no longer support path-style API requests.

Currently, Amazon S3 supports two request URI styles in all regions: path-style (also known as V1), which includes the bucket name in the path of the URI (example: //s3.amazonaws.com/<bucketname>/key), and virtual-hosted style (also known as V2), which uses the bucket name as part of the domain name (example: //<bucketname>.s3.amazonaws.com/key).

The Amazon team mentions in the announcement, "In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format." It has also asked customers to update their applications to use the virtual-hosted style request format when making S3 API requests, and to do so before September 30th, 2020 to avoid any service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications are using the virtual-hosted style request format.

The team further mentions, "Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail."

Users on Hacker News see this as a poor move by Amazon and have noted its implication: collateral freedom techniques using Amazon S3 will no longer work. One of them commented strongly:

"One important implication is that collateral freedom techniques [1] using Amazon S3 will no longer work. To put it simply, right now I could put some stuff not liked by Russian or Chinese government (maybe entire website) and give a direct s3 link to https:// s3 .amazonaws.com/mywebsite/index.html. Because it's https — there is no way man in the middle knows what people read on s3.amazonaws.com. With this change — dictators see my domain name and block requests to it right away. I don't know if they did it on purpose or just forgot about those who are less fortunate in regards to access to information, but this is a sad development. This censorship circumvention technique is actively used in the wild and loosing Amazon is no good."

The Amazon team suggests that if your application is not able to utilize the virtual-hosted style request format, or if you have any questions or concerns, you may reach out to AWS Support. To know more about this news, check out the official announcement page from Amazon.

Update from the Amazon team on 8th May

Jeff Barr, Amazon's Chief Evangelist for AWS, sat down with the S3 team to understand this change in detail. After getting a better understanding, he posted an update on why the team plans to deprecate the path-style model. Here is his comparison of the old and the new:

S3 currently supports two different addressing models: path-style and virtual-hosted style. Take a quick look at each one.
The path-style model looks either like this (the global S3 endpoint):

https://s3.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Or this (one of the regional S3 endpoints):

https://s3-us-east-2.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3-us-east-2.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

For example, jbarr-public and jeffbarr-public are bucket names; /images/ritchie_and_thompson_pdp11.jpeg and /jeffbarr-public/classic_amazon_door_desk.png are object keys. Even though the objects are owned by distinct AWS accounts and are in different S3 buckets, and possibly in distinct AWS regions, both of them are in the DNS subdomain s3.amazonaws.com. Hold that thought while we look at the equivalent virtual-hosted style references:

https://jbarr-public.s3.amazonaws.com/images/ritchie_and_thompson_pdp11.jpeg
https://jeffbarr-public.s3.amazonaws.com/classic_amazon_door_desk.png

These URLs reference the same objects, but the objects are now in distinct DNS subdomains (jbarr-public.s3.amazonaws.com and jeffbarr-public.s3.amazonaws.com, respectively). The difference is subtle, but very important. When you use a URL to reference an object, DNS resolution is used to map the subdomain name to an IP address. With the path-style model, the subdomain is always s3.amazonaws.com or one of the regional endpoints; with the virtual-hosted style, the subdomain is specific to the bucket. This additional degree of endpoint specificity is the key that opens the door to many important improvements to S3. (A small sketch contrasting the two URL forms follows at the end of this article.)

A select few in the community are in favor of this, as per one user comment on Hacker News: "Thank you for listening! The original plan was insane. The new one is sane. As I pointed out here https://twitter.com/dvassallo/status/1125549694778691584 thousands of printed books had references to V1 S3 URLs. Breaking them would have been a huge loss. Thank you!"

But for others, the Amazon team has failed to address the domain censorship issue, as another user notes: "Still doesn't help with domain censorship. This was discussed in-depth in the other thread from yesterday, but TLDR, it's a lot harder to block https://s3.amazonaws.com/tiananmen-square-facts than https://tiananmen-square-facts.s3.amazonaws.com because DNS lookups are made before HTTPS kicks in."

Read about this update in detail here.

Amazon S3 Security access and policies

3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations

Amazon introduces S3 batch operations to process millions of S3 objects
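As referenced above, the following TypeScript sketch (ours, not part of any AWS SDK) builds both URL forms for the same bucket and key. It is illustrative only: it ignores region-specific endpoints, URL-encoding of keys, and buckets whose names are not valid DNS labels. Most AWS SDKs also expose a configuration option to select the addressing style; check your SDK's documentation for the exact flag.

```typescript
// Illustrative sketch of the two S3 addressing styles -- not an AWS SDK API.
interface S3ObjectRef {
  bucket: string;
  key: string;
}

// Path-style (V1): the bucket name lives in the URL path,
// so every request shares the s3.amazonaws.com subdomain.
function pathStyleUrl({ bucket, key }: S3ObjectRef): string {
  return `https://s3.amazonaws.com/${bucket}/${key}`;
}

// Virtual-hosted style (V2): the bucket name becomes a DNS subdomain,
// which is what lets S3 route and optimize per bucket.
function virtualHostedUrl({ bucket, key }: S3ObjectRef): string {
  return `https://${bucket}.s3.amazonaws.com/${key}`;
}

const ref: S3ObjectRef = {
  bucket: "jbarr-public",
  key: "images/ritchie_and_thompson_pdp11.jpeg",
};

console.log(pathStyleUrl(ref));     // https://s3.amazonaws.com/jbarr-public/images/...
console.log(virtualHostedUrl(ref)); // https://jbarr-public.s3.amazonaws.com/images/...
```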

workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Melisha Dsouza
20 Feb 2019
2 min read
Cloudflare users will very soon be able to deploy Workers without having a Cloudflare domain. They will be able to deploy their Cloudflare Workers to a subdomain of their choice, with an extension of .workers.dev. According to the Cloudflare blog, this is a step towards making it easy for users to get started with Workers and build a new serverless project from scratch.

Cloudflare Workers' serverless execution environment allows users to create new applications or improve existing ones without configuring or maintaining infrastructure. Cloudflare Workers run on Cloudflare's servers, not in the user's browser, meaning that a user's code runs in a trusted environment where it cannot be bypassed by malicious clients. (A minimal Worker sketch appears at the end of this article.)

The workers.dev domain was obtained through Google's TLD launch program. Customers can head over to workers.dev, where they will be able to claim a subdomain (one per user). workers.dev is itself fully served using Cloudflare Workers.

Zack Bloom, Director of Product Strategy at Cloudflare, says that workers.dev will be especially useful for serverless apps. Without cold starts, users get instant scaling to almost any volume of traffic, making this type of serverless faster and cheaper.

Cloudflare Workers have received an amazing response from users all over the internet:

[Screenshot of community reactions. Source: Hacker News]

This news has also been received with much enthusiasm:

https://twitter.com/MrAhmadAwais/status/1097919710249783297

You can head over to the Cloudflare blog for more information on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android

Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly

Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers
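As referenced above, here is a minimal sketch of a Worker in TypeScript, using the service-worker-style fetch handler that Workers scripts use; the response text is a placeholder, and the types assume the Cloudflare Workers TypeScript definitions. Once deployed, such a script would be reachable at the *.workers.dev subdomain the user claims.

```typescript
// Minimal Worker sketch: respond to every request at the edge.
// Deployed, this would be reachable at a claimed *.workers.dev subdomain.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  // Runs on Cloudflare's edge servers, never in the visitor's browser.
  const url = new URL(request.url);
  return new Response(`Hello from ${url.hostname}!`, {
    headers: { "content-type": "text/plain" },
  });
}
```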


Google AI introduces Snap, a microkernel approach to ‘Host Networking’

Savia Lobo
29 Oct 2019
4 min read
A few days ago, the Google AI team introduced Snap, a microkernel-inspired approach to host networking, at the 27th ACM Symposium on Operating Systems Principles. Snap is a userspace networking system with flexible modules that implement a range of network functions, including edge packet switching, virtualization for Google's cloud platform, traffic shaping policy enforcement, and a high-performance reliable messaging and RDMA-like service. The Google AI team says, "Snap has been running in production for over three years, supporting the extensible communication needs of several large and critical systems."

Why Snap?

Prior to Snap, the team says, it was limited in its ability to develop and deploy new network functionality and performance optimizations in several ways. First, developing kernel code was slow and drew on a smaller pool of software engineers. Second, feature releases through kernel module reloads covered only a subset of functionality and often required disconnecting applications, while the more common case of requiring a machine reboot necessitated draining the machine of running applications.

Unlike prior microkernel systems, Snap benefits from multi-core hardware for fast IPC and does not require the entire system to adopt the approach wholesale, as it runs as a userspace process alongside Google's standard Linux distribution and kernel.

[Snap architecture diagram. Source: Snap research paper]

Using Snap, the Google researchers also created a new communication stack called Pony Express that implements a custom reliable transport and communications API. Pony Express provides significant communication efficiency and latency advantages to Google applications, supporting use cases ranging from web search to storage.

Features of the Snap userspace networking system

Snap's architecture draws on recent ideas in userspace networking, in-service upgrades, centralized resource accounting, programmable packet processing, kernel-bypass RDMA functionality, and optimized co-design of transport, congestion control, and routing. With these, Snap:

Enables a high rate of feature development with a microkernel-inspired approach of developing in userspace with transparent software upgrades. It also retains the centralized resource allocation and management capabilities of monolithic kernels and improves upon accounting gaps in existing Linux-based systems.

Implements a custom kernel packet injection driver and a custom CPU scheduler that enable interoperability without requiring the adoption of new application runtimes, while maintaining high performance across use cases that simultaneously require packet processing through both Snap and the Linux kernel networking stack.

Encapsulates packet processing functions into composable units called "engines", which enables both modular CPU scheduling and incremental, minimally disruptive state transfer during upgrades.

Through Pony Express, provides support for OSI layer 4 and 5 functionality through an interface similar to an RDMA-capable "smart" NIC. This enables transparently leveraging offload capabilities in emerging hardware NICs as a means to further improve server efficiency and throughput.

Delivers 3x better transport processing efficiency than the baseline Linux kernel and supports RDMA-like functionality at speeds of 5M ops/sec/core.
MicroQuanta: Snap's new lightweight kernel scheduling class

To dynamically scale CPU resources, Snap works in conjunction with a new lightweight kernel scheduling class called MicroQuanta. It provides a flexible way to share cores between latency-sensitive Snap engine tasks and other tasks, limiting the CPU share of latency-sensitive tasks while maintaining low scheduling latency.

A MicroQuanta thread runs for a configurable runtime out of every period time units, with the remaining CPU time available to other CFS-scheduled tasks using a variation of a fair queuing algorithm for high- and low-priority tasks (rather than more traditional fixed time slots). MicroQuanta is a robust way for Snap to get priority on cores runnable by CFS tasks while avoiding starvation of critical per-core kernel threads. (A toy sketch of this runtime-per-period budgeting appears at the end of this article.)

While other Linux real-time scheduling classes use both per-CPU tick-based and global high-resolution timers for bandwidth control, MicroQuanta uses only per-CPU high-resolution timers. This allows scalable time-slicing at microsecond granularity.

Snap is being received positively by many in the community.

https://twitter.com/copyconstruct/status/1188514635940421632

To know more about Snap in detail, you can read its complete research paper.

Amazon announces improved VPC networking for AWS Lambda functions

Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels

ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more
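As referenced above, here is a purely conceptual TypeScript sketch of the runtime-out-of-every-period budgeting idea. It is our illustration, not Snap or Linux kernel code, and the parameter names are invented; it only shows how such a budget translates into a guaranteed CPU share for a latency-sensitive engine and the remainder left for CFS tasks.

```typescript
// Conceptual illustration only -- not Snap's MicroQuanta implementation.
// A MicroQuanta-style task is granted `runtimeUs` of CPU out of every
// `periodUs` microseconds; the rest of each period goes to CFS tasks.
interface MicroQuantaBudget {
  runtimeUs: number; // guaranteed running time per period
  periodUs: number;  // length of one scheduling period
}

function cpuShare({ runtimeUs, periodUs }: MicroQuantaBudget): number {
  if (runtimeUs <= 0 || periodUs <= 0 || runtimeUs > periodUs) {
    throw new Error("runtime must be positive and no larger than the period");
  }
  return runtimeUs / periodUs;
}

// Hypothetical example: reserve 800us of every 1000us period for an engine.
const engineBudget: MicroQuantaBudget = { runtimeUs: 800, periodUs: 1000 };
const share = cpuShare(engineBudget);

console.log(`engine share: ${(share * 100).toFixed(0)}%`);             // 80%
console.log(`left for CFS tasks: ${((1 - share) * 100).toFixed(0)}%`); // 20%
```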