
Tech News - Cloud & Networking

376 Articles

OpenSSH 8.0 released; addresses SCP vulnerability and new SSH additions

Fatema Patrawala
19 Apr 2019
2 min read
Theo de Raadt and the OpenBSD developers who maintain OpenSSH today released OpenSSH 8.0. The release carries an important security fix for a weakness in the scp(1) tool used to copy files to and from remote systems. Until now, when copying files from a remote system to a local directory, scp did not verify that the filenames the server sent matched what the client had actually requested. This allowed a hostile server to create or clobber unexpected local files with attacker-controlled data, regardless of which files were requested. OpenSSH 8.0 adds client-side checking that the filenames sent from the server match the command-line request.

Although this client-side checking has been added to scp, the OpenSSH developers recommend against using scp at all, suggesting sftp, rsync, or other alternatives instead. "The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead," the OpenSSH developers note.

New in OpenSSH 8.0, meanwhile, is support for ECDSA keys in PKCS#11 tokens and an experimental quantum-computing-resistant key exchange method. The default RSA key size produced by ssh-keygen has been increased to 3072 bits, and more SSH utilities now support a "-v" flag for greater verbosity. The release also comes with a wide range of fixes throughout, including a number of portability fixes. More details on OpenSSH 8.0 are available on OpenSSH.com.

Read next
- OpenSSH, now a part of the Windows Server 2019
- OpenSSH 7.8 released!
- OpenSSH 7.9 released
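To put the article's advice into practice, here is a minimal, hedged sketch of the recommended alternatives; the hostname, user, and paths are placeholders, not taken from the article.

```bash
# Generate an RSA key; 3072 bits is now the ssh-keygen default in
# OpenSSH 8.0, so -b 3072 is shown only for explicitness.
ssh-keygen -t rsa -b 3072 -f ~/.ssh/id_rsa_example

# Prefer sftp over scp for ad-hoc transfers.
sftp user@example.com <<'EOF'
put local-file.txt /remote/dir/
EOF

# Or use rsync over SSH for recursive, resumable copies.
rsync -av -e ssh ./local-dir/ user@example.com:/remote/dir/
```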


Lower prices and more flexible purchase options for Azure Red Hat OpenShift from Microsoft Azure Blog > Announcements

Matthew Emerick
07 Oct 2020
4 min read
For the past several years, Microsoft and Red Hat have worked together to co-develop hybrid cloud solutions intended to enable greater customer innovation. In 2019, we launched Azure Red Hat OpenShift as a fully managed, jointly engineered implementation of Red Hat OpenShift, based on OpenShift 3.11, that is deeply integrated into the Azure control plane. With the release of Red Hat OpenShift 4, we announced the general availability of Azure Red Hat OpenShift on OpenShift 4 in April 2020.

Today we're sharing that, in collaboration with Red Hat, we are dropping the price of Red Hat OpenShift licenses on Azure Red Hat OpenShift worker nodes by up to 77 percent. We're also adding the choice of a three-year term for Reserved Instances (RIs) on top of the existing one-year RI and pay-as-you-go options, with a reduction in the minimum number of virtual machines required. The new pricing is effective immediately. Finally, as part of the ongoing improvements, we are increasing the Service Level Agreement (SLA) to 99.95 percent.

With these price reductions, Azure Red Hat OpenShift provides even more value with a fully managed, highly available enterprise Kubernetes offering that manages the upgrades, patches, and integration for the components required to make a platform. This allows your teams to focus on building business value, not operating technology platforms.

How can Red Hat OpenShift help you?

As a developer
Kubernetes was built for the needs of IT operations, not developers. Red Hat OpenShift is designed so developers can deploy apps on Kubernetes without needing to learn Kubernetes. With built-in Continuous Integration (CI) and Continuous Delivery (CD) pipelines, you can code, push to a repository, and have your application up and running in minutes. Azure Red Hat OpenShift includes everything you need to manage your development lifecycle: standardized workflows, support for multiple environments, continuous integration, release management, and more. You can also provision self-service, on-demand application stacks and deploy solutions from the Developer Catalog, such as OpenShift Service Mesh, OpenShift Serverless, Knative, and more. Red Hat OpenShift provides commercial support for the languages, databases, and tooling you already use, while providing easy access to Azure services such as Azure Database for PostgreSQL and Azure Cosmos DB, enabling you to create resilient and scalable cloud-native applications.

As an IT operator
Adopting a container platform lets you keep up with application scale and complexity requirements. Azure Red Hat OpenShift is designed to make deploying and managing the container platform easier, with automated maintenance operations and upgrades built right in, integrated platform monitoring (including Azure Monitor for containers), and a support experience delivered directly from the Azure support portal. With Azure Red Hat OpenShift, your developers can be up and running in minutes. You can scale on your terms, from ten containers to thousands, and only pay for what you need. With one-click updates for the platform, services, and applications, Azure Red Hat OpenShift monitors security throughout the software supply chain to make applications more stable without reducing developer productivity. You can also leverage the built-in vulnerability assessment and management tools in Azure Security Center to scan images that are pushed to, imported into, or pulled from an Azure Container Registry.

Discover Operators from the Kubernetes community and Red Hat partners, curated by Red Hat. You can install Operators on your clusters to provide optional add-ons and shared services to your developers, such as AI and machine learning, application runtimes, data, document stores, monitoring, logging and insights, security, and messaging services.

Regional availability
Azure Red Hat OpenShift is available in 27 regions worldwide, and we're continuing to expand that list. Over the past few months, we have added support for Azure Red Hat OpenShift in a number of regions, including West US, Central US, North Central US, Canada Central, Canada East, Brazil South, UK West, Norway East, France Central, Germany West Central, Central India, UAE North, Korea Central, East Asia, and Japan East.

Industry compliance certifications
To help you meet your compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is PCI DSS, FedRAMP High, SOC 1/2/3, ISO 27001, and HITRUST certified. Azure maintains the largest compliance portfolio in the industry, both in terms of the total number of offerings and the number of customer-facing services in assessment scope. For more details, check the Microsoft Azure Compliance Offerings.

Next steps
Try Azure Red Hat OpenShift now. We are excited about these new lower prices and how they help our customers build their business on a platform that enables IT operations and developers to collaborate effectively and to develop and deploy containerized applications rapidly with strong security capabilities.
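The article does not include deployment commands, so here is a hedged sketch of creating an Azure Red Hat OpenShift cluster with the Azure CLI; the resource names, region, and address ranges are illustrative assumptions.

```bash
# Resource group and virtual network for the cluster (names are placeholders).
az group create --name aro-rg --location eastus

az network vnet create --resource-group aro-rg --name aro-vnet \
  --address-prefixes 10.0.0.0/22

# Separate subnets for control-plane and worker nodes.
az network vnet subnet create --resource-group aro-rg --vnet-name aro-vnet \
  --name master-subnet --address-prefixes 10.0.0.0/23
az network vnet subnet create --resource-group aro-rg --vnet-name aro-vnet \
  --name worker-subnet --address-prefixes 10.0.2.0/23

# Create the managed OpenShift cluster (takes ~30 minutes to provision).
az aro create --resource-group aro-rg --name my-aro-cluster \
  --vnet aro-vnet --master-subnet master-subnet --worker-subnet worker-subnet
```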


Serverless Computing 101

Guest Contributor
09 Feb 2019
5 min read
Serverless applications began gaining popularity when Amazon launched AWS Lambda back in 2014. Since then, serverless computing has grown exponentially in use and in the number of vendors entering the market with their own solutions. The reason behind the hype is that serverless computing requires no infrastructure management, a modern approach that lets the enterprise lighten its operational workload.

What is Serverless Computing?
Serverless computing is a kind of software architecture in which application logic is executed in an environment without visible processes, operating systems, servers, or virtual machines. Provisioning and managing the infrastructure is handled entirely by the service provider. "Serverless" describes a cloud service that abstracts the details of the cloud-based processor from its user; this does not mean servers are no longer needed, but that they are not user-specified or controlled. Serverless architecture covers applications that depend on third-party backend services (BaaS) and on managed containers that run functions (FaaS). (Image source: Tatvasoft)

Top serverless computing providers like Amazon, Microsoft, Google, and IBM provide serverless offerings such as FaaS to companies like Netflix, Coca-Cola, CodePen, and many more.

FaaS
Function as a Service is a cloud computing model in which developers write business logic functions that are executed by the cloud provider. Developers can upload units of functionality to the cloud to be executed independently, while the cloud service provider manages everything from execution to automatic scaling. A minimal deployment sketch follows at the end of this article. Key components of FaaS:
- Events: whatever triggers the execution of the function, for instance uploading a file or publishing a message.
- Functions: independent units of deployment, for instance processing a file or performing a scheduled task.
- Resources: components used by the function, for instance file system services or database services.

BaaS
Backend as a Service allows developers to write and maintain only the frontend of the application, relying on backend services without building and maintaining them. BaaS providers offer pre-built capabilities such as user authentication, database management, remote updating, cloud storage, and much more. Developers do not have to manage servers or virtual machines to keep their applications running, which helps them build and launch applications more quickly. (Image courtesy: Gallantra)

Use cases of serverless computing
- Batch jobs and scheduled tasks: jobs that require intense parallel computation, IO, or network access.
- Business logic: orchestration of microservice workloads that execute a series of steps.
- Chatbots: scale automatically at peak demand times.
- Continuous Integration pipelines: remove the need for pre-provisioned build hosts.
- Capturing database changes: auditing, or ensuring modifications meet quality standards.
- HTTP REST APIs and web apps: traditional request/response workloads.
- Mobile backends: build on a REST API backend workload on top of BaaS APIs.
- Multimedia processing: execute a transformation in response to a file upload.
- IoT sensor input messages: receive signals and scale in response.
- Stream processing at scale: process data within a potentially infinite stream of messages.

Should you use Serverless Computing?

Merits:
- Fully managed services: you do not have to worry about the execution process.
- Event-triggered approach: set priorities as per your requirements.
- Scalability: load balancing is handled automatically.
- Pay only for execution time: you pay just for what you use.
- Quick development and deployment: run endless test cases without worrying about other components.
- Shorter time-to-market: you can look at your refined product within hours of creating it.

Demerits:
- Third-party dependency: developers have to depend completely on cloud service providers.
- Lacking operational tools: you depend on providers for debugging and monitoring.
- High complexity: managing many functions takes more time and is difficult.
- Functions cannot run for long periods: only suitable for applications with short-lived processes.
- Limited mapping to database indexes: configuring nodes and indexes is challenging.
- Stateless functions: resources cannot persist within a function after the function exits.

Serverless computing can be seen as the future of the next generation of cloud-native development, a new approach to writing and deploying applications that allows developers to focus only on the code. It helps reduce time to market along with operational costs and system complexity. Third-party services like AWS Lambda have eliminated the requirement to set up and configure physical servers or virtual machines. It is always best to take the advice of an expert who has years of experience in software development with modern technologies.

Author Bio: Working as a manager in the software outsourcing company Tatvasoft.com, Vikash Kumar has a keen interest in blogging and likes to share useful articles on computing. Vikash has also published bylines in major publications like KDnuggets, Entrepreneur, SAP, and more.

Read next
- Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
- Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon
- Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
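To make the FaaS model concrete, here is a minimal, hedged sketch of deploying a function with the AWS CLI; the function name, handler, and role ARN are illustrative assumptions you would replace with your own.

```bash
# Package a trivial Python handler (hello.py with a lambda_handler function).
cat > hello.py <<'EOF'
def lambda_handler(event, context):
    # Echo the triggering event back to the caller.
    return {"statusCode": 200, "body": str(event)}
EOF
zip function.zip hello.py

# Create the function; the execution role ARN is a placeholder you must supply.
aws lambda create-function \
  --function-name hello-serverless \
  --runtime python3.9 \
  --handler hello.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-exec-role

# Invoke it once to verify (the binary-format flag is needed on AWS CLI v2).
aws lambda invoke --function-name hello-serverless \
  --cli-binary-format raw-in-base64-out \
  --payload '{"name": "world"}' response.json && cat response.json
```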


New Dataproc optional components support Apache Flink and Docker from Cloud Blog

Matthew Emerick
15 Oct 2020
5 min read
Google Cloud's Dataproc lets you run native Apache Spark and Hadoop clusters on Google Cloud in a simpler, more cost-effective way. In this blog, we will talk about our newest optional components available in Dataproc's Component Exchange: Docker and Apache Flink.

Docker containers on Dataproc
Docker is a widely used container technology. Since it's now a Dataproc optional component, Docker daemons can be installed on every node of the Dataproc cluster, giving you the ability to install containerized applications and interact with Hadoop clusters easily. In addition, Docker is also critical to supporting these features:
- Running containers with YARN, which allows you to manage dependencies of your YARN application separately and to create containerized services on YARN. Get more details here.
- Portable Apache Beam jobs, which are packaged into Docker containers and submitted to the Flink cluster. Find more detail about Beam portability.

The Docker optional component is also configured to use Google Container Registry in addition to the default Docker registry, which lets you use container images managed by your organization. Here is how to create a Dataproc cluster with the Docker optional component:

```bash
gcloud beta dataproc clusters create <cluster-name> \
  --optional-components=DOCKER \
  --image-version=1.5
```

When you run a Docker application, its log is streamed to Cloud Logging using the gcplogs driver. If your application does not depend on any Hadoop services, check out Kubernetes and Google Kubernetes Engine to run containers natively. For more on using Dataproc, check out our documentation.

Apache Flink on Dataproc
Among streaming analytics technologies, Apache Beam and Apache Flink stand out. Apache Flink is a distributed processing engine using stateful computation; Apache Beam is a unified model for defining batch and streaming processing pipelines. Using Apache Flink as an execution engine, you can run Apache Beam jobs on Dataproc in addition to Google's Cloud Dataflow service. Flink, and Beam running on Flink, are suitable for large-scale, continuous jobs, and provide:
- A streaming-first runtime that supports both batch processing and data streaming programs
- A runtime that supports very high throughput and low event latency at the same time
- Fault tolerance with exactly-once processing guarantees
- Natural back-pressure in streaming programs
- Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
- Integration with YARN and other components of the Apache Hadoop ecosystem

Our Dataproc team here at Google Cloud recently announced that the Flink Operator on Kubernetes is now available. It allows you to run Apache Flink jobs in Kubernetes, bringing the benefits of reduced platform dependency and better hardware efficiency.

Basic Flink concepts
A Flink cluster consists of a Flink JobManager and a set of Flink TaskManagers. Like similar roles in other distributed systems such as YARN, the JobManager has responsibilities such as accepting jobs, managing resources, and supervising jobs. TaskManagers are responsible for running the actual tasks. When running Flink on Dataproc, we use YARN as the resource manager for Flink. You can run Flink jobs in two ways: as a job cluster or a session cluster. For a job cluster, YARN creates a JobManager and TaskManagers for the job and destroys the cluster once the job is finished. For a session cluster, YARN creates a JobManager and a few TaskManagers, and the cluster can serve multiple jobs until it is shut down by the user.

How to create a cluster with Flink
Use this command to get started:

```bash
gcloud beta dataproc clusters create <cluster-name> \
  --optional-components=FLINK \
  --image-version=1.5
```

How to run a Flink job
After a Dataproc cluster with Flink starts, you can submit Flink jobs to YARN directly using a Flink job cluster. After accepting the job, Flink starts a JobManager and slots for the job in YARN. The Flink job runs in the YARN cluster until it finishes; the JobManager created for it is then shut down, and job logs are available in the regular YARN logs. A word-counting example is a good way to verify the setup (a hedged sketch of the commands follows this article).

The Dataproc cluster does not start a Flink session cluster by default. Instead, Dataproc creates the script "/usr/bin/flink-yarn-daemon," which starts a Flink session. If you want a Flink session started when the cluster is created, use the corresponding cluster metadata key; if you want to start the session after the cluster is created, run the daemon script on the master node. To submit jobs to a session cluster, you will need the Flink JobManager URL.

How to run a Java Beam job
It is very easy to run an Apache Beam job written in Java; there is no extra configuration needed. As long as you package your Beam job into a JAR file, you do not need to configure anything to run Beam on Flink.

How to run a Python Beam job
Beam jobs written in Python use a different execution model. To run them in Flink on Dataproc, you also need to enable the Docker optional component when creating the cluster, and install the Python libraries Beam needs, such as apache_beam and apache_beam[gcp]. You can pass in a Flink master URL to run the job in a session cluster; if you leave the URL out, you need to use job cluster mode. After you've written your Python job, simply run it to submit it. Learn more about Dataproc.
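The example commands were lost in this page's formatting, so the following is a hedged reconstruction based on standard Flink-on-YARN usage rather than a verbatim copy of the original post; the example JAR path is the usual location on a Dataproc Flink image, and your paths may differ.

```bash
# Run Flink's bundled word-count example as a per-job YARN cluster.
HADOOP_CLASSPATH=$(hadoop classpath) \
  flink run -m yarn-cluster \
  /usr/lib/flink/examples/batch/WordCount.jar

# Start a Flink session cluster manually on the master node.
/usr/bin/flink-yarn-daemon

# Submit a job to an existing session cluster; replace <jobmanager-url>
# with the JobManager address reported by YARN.
flink run -m <jobmanager-url> /path/to/your-job.jar
```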


Fastly announces the next-gen edge computing services available in private beta

Fatema Patrawala
08 Nov 2019
4 min read
Fastly, a San Francisco-based startup providing an edge cloud platform, yesterday announced the private beta launch of Compute@Edge, its new edge computing service. Compute@Edge is a powerful language-agnostic compute environment. This major milestone marks an evolution of Fastly's edge computing capabilities and of the company's innovation in the serverless space.

https://twitter.com/fastly/status/1192080450069643264

Fastly's Compute@Edge is designed to empower developers to build far more advanced edge applications with greater security, more robust logic, and new levels of performance. Developers can also create new and improved digital experiences with their own technology choices around cloud platforms, services, and programming languages. Rather than have them spend time on operational overhead, the company's goal is to continue reinventing the way end users live, work, and play on the web; Compute@Edge gives developers the freedom to push complex logic closer to end users.

"When we started Fastly, we sought to build a platform with the power to realize the future of edge computing — from our software-defined modern network to our point of presence design, everything has led us to this point," explained Tyler McMullen, CTO of Fastly. "With this launch, we're excited to double down on that vision and work with enterprises to help them build truly complete applications in an environment that offers new levels of stability, security, and global scale."

We had the opportunity to interview Fastly's CTO Tyler McMullen a few months back, discussing Fastly's Lucet and the future of WebAssembly and Rust, among other things. You can read the full interview here.

Fastly Compute@Edge leverages speed for global scale and security
Fastly's Compute@Edge environment promises a startup time of 35.4 microseconds, 100x faster than any other solution in the market. Compute@Edge is powered by Lucet, Fastly's open-source WebAssembly compiler and runtime, and supports Rust as a second language in addition to Varnish Configuration Language (VCL). Other benefits of Compute@Edge include:
- Code can be computed around the world instead of in a single region, allowing developers to reduce code execution latency and further optimize the performance of their code without worrying about managing the underlying infrastructure.
- The unmatched speed at which the environment operates, combined with Fastly's isolated sandboxing technology, reduces the risk of accidental data leakage. With a "burn-after-reading" approach to request memory, entire classes of vulnerabilities are eliminated.
- Developers can serve GraphQL from the network edge and deliver more personalized experiences.
- Developers can build their own customized API protection logic.
- With manifest manipulation, developers can deliver content with a "best-performance-wins" approach, like multi-CDN live streams that run smoothly for users around the world.

Fastly has operated in the serverless market since its founding in 2011 through its Edge Cloud Platform, including products like Full Site Delivery, Load Balancer, DDoS, and Web Application Firewall (WAF). To date, Fastly's serverless computing offering has focused on delivery-centric use cases via its VCL-powered programmable edge. With the introduction of Compute@Edge, Fastly unlocks even more powerful and widely applicable computing capabilities. To learn more about Fastly's edge computing and cloud services, you can visit its official blog. Developers who are interested in being part of the private beta can sign up on this page.

Read next
- Fastly SVP, Adam Denenberg on Fastly's new edge resources, edge computing, fog computing, and more
- Fastly, edge cloud platform, files for IPO
- Fastly open sources Lucet, a native WebAssembly compiler and runtime
- "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
- Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module


Amazon adds UDP load balancing support for Network Load Balancer

Vincy Davis
25 Jun 2019
3 min read
Yesterday, Amazon announced support for load balancing UDP traffic on Network Load Balancers, enabling customers to deploy connectionless services for online gaming, IoT, streaming, media transfer, and native UDP applications. This has been a long-requested feature among Amazon customers.

The Network Load Balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency, with no effort on the user's part. UDP load balancing frees users from maintaining a fleet of proxy servers to ingest UDP traffic; instead, the same load balancer handles both TCP and UDP traffic, simplifying the network architecture, reducing cost, and improving scalability.

Supported targets
UDP on Network Load Balancers is supported for Instance target types only. It does not support IP target types or PrivateLink.

Health checks
Health checks must be done using TCP, HTTP, or HTTPS. Users can check on the health of a service by clicking override and specifying a health check on the selected port. Users can then run a custom implementation of Syslog that stores the log messages centrally and in a highly durable form.

Multiple protocols
A single Network Load Balancer can handle both TCP and UDP traffic. In situations like DNS, where support for both TCP and UDP is needed on the same port, users can set up a multi-protocol target group and a multi-protocol listener.

New CloudWatch metrics
The existing CloudWatch metrics (ProcessedBytes, ActiveFlowCount, and NewFlowCount) now represent the aggregate traffic processed by the TCP, UDP, and TLS listeners on a given Network Load Balancer.

Users who host DNS, SIP, SNMP, Syslog, RADIUS, and other UDP services in their own data centers can now move those services to AWS. It is also possible to deploy services to handle Authentication, Authorization, and Accounting, often known as AAA. Earlier this year, Amazon launched TLS Termination support for Network Load Balancer, which simplifies the process of building secure web applications by allowing users to make use of TLS connections that terminate at a Network Load Balancer.

Users are delighted with Amazon's support for load balancing UDP traffic.

https://twitter.com/cgswong/status/1143312489360183296

A user on Hacker News comments, "This is a Big Deal because it enables support for QUIC, which is now being standardized as HTTP/3. To work around the TCP head-of-line blocking problem (among others) QUIC uses UDP. QUIC does some incredible patching over legacy decisions in the TCP and IP stack to make things faster, more reliable, especially on mobile networks, and more secure." Another comment reads, "This is great news, and something I've been requesting for years. I manage an IoT backend based on CoAP, which is typically UDP-based. I've looked at Nginx support for UDP, but a managed load balancer is much more appealing."

Some users see this as Amazon's way of preparing HTTP/3 support for the future.

https://twitter.com/atechiethought/status/1143240391870832640

Another user on Hacker News wrote, "Nice! I wonder if this is a preparatory step for future quic/http3 support?" For details on how to create a UDP Network Load Balancer, head over to Amazon's official blog.

Read next
- Amazon patents AI-powered drones to provide 'surveillance as a service'
- Amazon is being sued for recording children's voices through Alexa without consent
- Amazon announces general availability of Amazon Personalize, an AI-based recommendation service
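As a hedged illustration of the new capability (not taken from Amazon's announcement; the names, IDs, and ARNs are placeholders), creating a UDP listener with the AWS CLI looks roughly like this:

```bash
# Create a target group that forwards UDP traffic (e.g., DNS on port 53).
# Health checks must use TCP, HTTP, or HTTPS, so TCP is specified here.
aws elbv2 create-target-group \
  --name udp-dns-targets \
  --protocol UDP --port 53 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type instance \
  --health-check-protocol TCP --health-check-port 53

# Attach a UDP listener to an existing Network Load Balancer.
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> \
  --protocol UDP --port 53 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```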

Azure Functions 2.0 launches with better workload support for serverless

Melisha Dsouza
25 Sep 2018
2 min read
Microsoft has announced the general availability of Azure Functions 2.0. The new release aims to handle demanding workloads, which should make managing the scale of serverless applications easier than ever before. With an improved user experience and new developer capabilities, the release is evidence of Microsoft looking to take full advantage of interest in serverless computing.

New features in Azure Functions 2.0

Azure Functions can now run on more platforms
Azure Functions is now supported in more environments, including local Mac or Linux machines. Integration with VS Code helps developers get a best-in-class serverless development experience on any platform.

Code optimizations
Functions 2.0 adds general host improvements, support for more modern language runtimes, and the ability to run code from a package file. .NET developers can now author functions using .NET Core 2.1, which provides a significant performance gain and helps develop and run .NET functions in more places. Assembly resolution has been improved to reduce the number of conflicts. Functions 2.0 supports both Node 8 and Node 10, with improved performance in general.

A powerful new programming model
The bindings and integrations of Functions 1.0 have been improved in Functions 2.0. All bindings are brought in as extensions, and the change to decoupled extension packages allows bindings (and their dependencies) to be versioned without depending on the core runtime. The recent launch of Azure SignalR Service, a fully managed service, lets you focus on building real-time web experiences without worrying about setting up, hosting, scaling, or load balancing the SignalR server. You can find an extension for this service in its GitHub repo, and check out the SignalR Service binding reference to start building real-time serverless applications.

Easier development
To improve productivity, Microsoft has introduced powerful native tooling inside Visual Studio, VS Code, and VS for Mac, plus a CLI that can run alongside any code editing experience. In Functions 2.0, more visibility is given to distributed tracing: dependencies are automatically tracked, and cross-resource connections are automatically correlated across a variety of services.

To know more about the updates in Azure Functions 2.0, head to Microsoft's official blog.

Read next
- Microsoft's Immutable storage for Azure Storage Blobs, now generally available
- Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary.
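For a concrete starting point, here is a minimal, hedged sketch using the Azure Functions Core Tools CLI; the project and function names are illustrative.

```bash
# Scaffold a new Functions project with the JavaScript (Node) worker.
func init MyFunctionsApp --worker-runtime node
cd MyFunctionsApp

# Add an HTTP-triggered function from the built-in template.
func new --template "HTTP trigger" --name HelloHttp

# Run the Functions host locally; the function is served on localhost:7071.
func start
```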


NeuVector releases "Security Policy as Code" to help DevOps teams automate container security by using CRDs

Sugandha Lahoti
19 Nov 2019
2 min read
NeuVector has released a new "Security Policy as Code" capability for Kubernetes workloads. The release automates container security for DevOps teams by using Kubernetes Custom Resource Definitions (CRDs). Because security policies can be defined, managed, and automated during the DevOps process, teams can quickly deliver secure cloud-native apps. The policies are implemented using CRDs to deploy customized resource configurations via YAML files, and because they are defined as code, they are version-tracked and built for easy automation. Teams can easily migrate security policies across Kubernetes clusters (or from staging to production environments) and manage versions of security policies tied to specific application versions.

"By introducing our industry-first Security Policy as Code for Kubernetes workloads, we're excited to provide DevOps and DevSecOps teams with even more control to automate safe behaviors and ensure their applications remain secure from ever-increasing threat vectors," explains Gary Duan, CTO, NeuVector. "We continue to build out new capabilities sought by customers – such as DLP, multi-cluster management, and, with today's release, CRD support. Our mission is acutely focused on raising the bar for container security by offering a complete cloud-native solution for the entire application lifecycle."

Features of NeuVector's Security Policy as Code:
- Captures the network rules, protocols, processes, and file activities that are allowed for the application.
- Permits allowed network connections between services, enforced by application protocol (layer 7) inspection.
- Allows or prevents external or ingress connections as warranted.
- Sets the "protection mode" of the application to either Monitor mode (alerting only) or Protect mode (blocking all suspicious activity).
- Supports integration with Open Policy Agent (OPA) and other security policy management tools.
- Allows DevOps and security teams to define application policies at different hierarchies, such as per-service rules defined by DevOps and global rules defined by centralized security teams.
- Is extensible, to support future expansion of security policy as code to admission control rules, DLP rules, response rules, and other NeuVector enforcement policies.

Head over to NeuVector's blog for more details on the Security Policy as Code feature. Further details about this release will be shared at KubeCon + CloudNativeCon North America 2019.

Read next
- Chaos engineering comes to Kubernetes thanks to Gremlin
- CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries
- StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities
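The announcement does not include a sample manifest, so the following is a hedged sketch of what a CRD-driven security policy could look like; the API group, kind, and field names are illustrative assumptions, not NeuVector's published schema.

```bash
# Apply a hypothetical security-policy custom resource with kubectl.
kubectl apply -f - <<'EOF'
apiVersion: example.neuvector.com/v1   # illustrative API group
kind: SecurityRule                     # illustrative kind
metadata:
  name: payments-policy
  namespace: payments
spec:
  mode: Protect             # block suspicious activity (vs. Monitor)
  ingress:
    - from: frontend        # only the frontend service may connect
      protocol: HTTP        # enforced by layer-7 inspection
  processes:
    - name: java            # allowed-process whitelist
      action: allow
EOF
```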


StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities

Bhagyashree R
13 Nov 2019
3 min read
Today, StackRox, a Kubernetes-native container security platform provider, announced StackRox Kubernetes Security Platform 3.0. The release includes industry-first features for configuration and vulnerability management that enable businesses to achieve stronger protection of cloud-native, containerized applications.

In a press release, Wei Lien Dang, StackRox's vice president of product and co-founder, said, "When it comes to Kubernetes security, new challenges related to vulnerabilities and misconfigurations continue to emerge. DevOps and Security teams need solutions that quickly and easily solve these issues. StackRox 3.0 is the first container security platform with the capabilities orgs need to effectively deal with Kubernetes configurations and vulnerabilities, so they can reduce risk to what matters most – their applications and their customer's data."

What's new in StackRox Kubernetes Security Platform 3.0

Features for configuration management:
- Interactive dashboards: view risk-prioritized misconfigurations, drill down to critical information about each misconfiguration, and determine the context required for effective remediation.
- Kubernetes role-based access control (RBAC) assessment: StackRox continuously monitors permissions for users and service accounts to help mitigate excessive privileges being granted.
- Kubernetes secrets access monitoring: the platform discovers secrets in Kubernetes and monitors which deployments can use them, to limit unnecessary access.
- Kubernetes-specific policy enforcement: StackRox identifies configurations in Kubernetes related to network exposure, privileged containers, root processes, and other factors to determine policy violations.

Advanced vulnerability management capabilities:
- Interactive dashboards: StackRox Kubernetes Security Platform 3.0 has interactive views that provide risk-prioritized snapshots across your environment, highlighting vulnerabilities in both images and Kubernetes.
- Discovery of Kubernetes vulnerabilities: the platform gives you visibility into critical vulnerabilities in the Kubernetes platform itself, including those related to the Kubernetes API server disclosed by the Kubernetes product security team.
- Language-specific vulnerabilities: StackRox scans container images for additional vulnerabilities that are language-dependent, providing greater coverage across containerized applications.

Along with the aforementioned features, StackRox Kubernetes Security Platform 3.0 adds support for various ecosystem platforms. These include CRI-O, the Open Container Initiative (OCI)-compliant implementation of the Kubernetes Container Runtime Interface (CRI), as well as Google Anthos, Microsoft Teams integration, and more.

These were a few of the latest capabilities shipped in StackRox Kubernetes Security Platform 3.0. To know more, you can check out live demos and Q&A by the StackRox team at KubeCon 2019, happening November 18-21 in San Diego, California, which brings together adopters and technologists from leading open source and cloud-native communities.

Read next
- Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
- StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
- Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices


Is cloud mining profitable?

Richard Gall
24 May 2018
5 min read
Cloud mining has become one of the biggest trends in Bitcoin and cryptocurrency. The reason is simple: it makes mining Bitcoin incredibly easy. By using the cloud, rather than your own hardware, to mine Bitcoin, you can avoid the stress and inconvenience of managing hardware. Instead of using the processing power of hardware you own, you share the processing power of the cloud space (or, more specifically, of a remote data center).

In theory, cloud mining should be much more profitable than mining with your own hardware. However, it's easy to be caught out. At best, some schemes are useless; at worst, they could be seen as a bit of a pyramid scheme. For this reason, it's essential you do your homework. Although there are some risks associated with cloud mining, it does have benefits. Arguably it makes Bitcoin, and cryptocurrency in general, more accessible to ordinary people. Provided people get to know the area, what works and what definitely doesn't, it could be a positive opportunity for many.

How to start cloud mining
Let's first take a look at the different methods of cloud mining. If you're going to do it properly, it's worth taking some time to consider your options. At a top level, there are three types of cloud mining.

Renting out your hashing power
This is the most common form of cloud mining. To do this, you simply 'rent out' a certain amount of your computer's hashing power. In case you don't know, hashing power is essentially your hardware's processing power; it's what allows your computer to run algorithms.

Hosted mining
As the name suggests, this is where you use an external machine to mine Bitcoin. To do this, you'll have to sign up with a cloud mining provider. If you do, be clear on their terms and conditions, and take care when calculating profitability.

Virtual hosted mining
Virtual hosted mining is a hybrid approach to cloud mining: you use a personal virtual server and install the required software. This approach can be a little more fun, especially if you want to build your own Bitcoin mining setup, but of course it poses challenges too. Depending on what you want to achieve, any of these options may be right for you.

Which cloud mining provider should you choose?
As you'd expect from a trend that's growing rapidly, there's a huge number of cloud mining providers out there. The downside is that there are plenty of dubious providers that aren't going to be profitable for you. For this reason, it's best to do your research and read what others have to say.

One of the most popular cloud mining providers is Hashflare. With Hashflare, you can mine a number of different cryptocurrencies, including Bitcoin, Ethereum, and Litecoin. You can also select your 'mining pool', which is something many providers won't let you do. Controlling the profitability of cloud mining can be difficult, so having control over your mining pool could be important. A mining pool is a bit like a hedge fund: a group of people pool their processing resources, and the payout is split according to the amount of work each put in toward creating a 'block', which is essentially a record or ledger of transactions.

Hashflare isn't the only cloud mining solution available. Genesis Mining is another very high-profile provider. It's incredibly accessible: you can begin a Bitcoin mining contract for just $15.99. Of course, the more you invest, the better the deal you'll get. For a detailed exploration and comparison of cloud mining solutions, this TechRadar article is very useful. Take a look before you make any decisions!

How can I ensure cloud mining is profitable?
It's impossible to ensure profitability. Remember: cloud mining providers are out to make a profit, and although you might well make a profit too, it's not necessarily in their interests to be paying money out to you. Calculating cloud mining profitability can be immensely complex. To do it properly, you need to be clear on all the elements that will impact profitability, including:
- The cryptocurrency you are mining
- How much mining will cost per unit of hashing power
- The growth rate of block difficulty
- How the network hashrate might increase over the length of your mining contract

There are lots of mining calculators out there that you can use to estimate how profitable cloud mining is likely to be (a toy back-of-the-envelope sketch follows at the end of this article). This article is particularly good at outlining how to go about calculating cloud mining profitability. Its conclusion is an interesting take that's worth considering if you are interested in starting cloud mining: is "it profitable because the underlying cryptocurrency went up, or because the mining itself was profitable?" As the writer points out, if it is the cryptocurrency's value, then you might just be better off buying the cryptocurrency.

Read next
- A brief history of Blockchain
- Write your first Blockchain: Learning Solidity Programming in 15 minutes
- "The Blockchain to Fix All Blockchains" – Overledger, the meta blockchain, will connect all existing blockchains
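As a rough, hedged illustration of the arithmetic only (every number below is a made-up assumption, not market data, and it deliberately ignores block-difficulty growth, which the article stresses matters), a back-of-the-envelope profitability check might look like this:

```bash
#!/usr/bin/env bash
# Toy cloud-mining profitability estimate; all figures are assumptions.
HASHRATE_THS=10        # purchased hashing power, TH/s
CONTRACT_COST=1500     # upfront contract price, USD
DAILY_FEE=1.20         # maintenance fee, USD/day
USD_PER_THS_DAY=0.45   # assumed revenue per TH/s per day, USD
DAYS=365               # contract length, days

awk -v h="$HASHRATE_THS" -v c="$CONTRACT_COST" -v f="$DAILY_FEE" \
    -v r="$USD_PER_THS_DAY" -v d="$DAYS" 'BEGIN {
  revenue = h * r * d        # naive: assumes constant difficulty and price
  cost    = c + f * d
  printf "revenue: $%.2f  cost: $%.2f  profit: $%.2f\n",
         revenue, cost, revenue - cost
}'
```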

Optimize your Azure workloads with Azure Advisor Score from Microsoft Azure Blog > Announcements

Matthew Emerick
07 Oct 2020
3 min read
Modern engineering practices, like Agile and DevOps, are redirecting the ownership of security, operations, and cost management from centralized teams to workload owners, catalyzing innovation at a higher velocity than in traditional data centers. In this new world, workload owners are expected to build, deploy, and manage cloud workloads that are secure, reliable, performant, and cost-effective.

If you're a workload owner, you want well-architected deployments, so you might be wondering: how well are you doing today? Of all the actions you can take, which ones will make the biggest difference for your Azure workloads? And how will you know if you're making progress? That's why we created Azure Advisor Score: to help you understand how well your Azure workloads are following best practices, assess how much you stand to gain by remediating issues, and prioritize the most impactful recommendations you can take to optimize your deployments.

Introducing Advisor Score
Advisor Score enables you to get the most out of your Azure investment using a centralized dashboard to monitor and work toward optimizing the cost, security, reliability, operational excellence, and performance of your Azure resources. Advisor Score will help you:
- Assess how well you're following the best practices defined by Azure Advisor and the Microsoft Azure Well-Architected Framework.
- Optimize your deployments by taking the most impactful actions first.
- Report on your well-architected progress over time.

Baselining is one great use case we've already seen with customers. You can use Advisor Score to baseline yourself and track your progress over time toward your goals by reviewing your score's daily, weekly, or monthly trends. Then, to reach your goals, you can act first on the individual recommendations and resources with the most impact.

How Advisor Score works
Advisor Score measures how well you're adopting Azure best practices, comparing and quantifying the impact of the Advisor recommendations you're already following and the ones you haven't implemented yet. Think of it as a gap analysis for your deployed Azure workloads. The overall score is calculated on a scale from 0 percent to 100 percent, both in aggregate and separately for cost, security (coming soon), reliability, operational excellence, and performance. A score of 100 percent means all your resources follow all the best practices recommended in Advisor; at the other end of the spectrum, a score of zero percent means that none of your resources do.

Advisor Score weighs all resources, both those with and without active recommendations, by their individual cost relative to your total spend. This builds on the assumption that the resources which consume a greater share of your total investment in Azure are more critical to your workloads. Advisor Score also adds weight to resources with longstanding recommendations, on the idea that the accumulated impact of these recommendations grows the longer they go unaddressed.

Review your Advisor Score today
Check your Advisor Score today by visiting Azure Advisor in the Azure portal. To learn more about the model behind Advisor Score and see examples of how the score is calculated, review the Advisor Score documentation and this behind-the-scenes blog from our data science team about the development of Advisor Score.
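The article describes the weighting only at a high level; as a hedged illustration of the cost-weighting idea (not Microsoft's actual formula), a cost-weighted "share of spend following best practices" could be computed like this:

```bash
# Columns: resource, monthly cost (USD), follows-best-practices flag (1/0).
# The resources and figures are invented for illustration.
cat > resources.csv <<'EOF'
vm-frontend,120,1
vm-backend,300,0
sql-db,500,1
storage,80,1
EOF

# Cost-weighted share of spend whose resources follow recommendations.
awk -F, '{ total += $2; ok += $2 * $3 }
         END { printf "advisor-style score: %.1f%%\n", 100 * ok / total }' \
  resources.csv
```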


CNCF releases 9 security best practices for Kubernetes to protect a customer's infrastructure

Melisha Dsouza
15 Jan 2019
3 min read
According to the CNCF's bi-annual survey conducted in August 2018, 83% of respondents prefer Kubernetes for its container management tools, 58% of respondents use Kubernetes in production, 42% are evaluating it for future use, and 40% of enterprise companies (5000+ employees) run Kubernetes in production. These statistics give a clear picture of Kubernetes' popularity among developers as a container orchestrator. However, the recent security flaw discovered in Kubernetes (now patched), which enabled attackers to compromise clusters and perform illicit activities, did raise concerns among developers. A container environment like Kubernetes, consisting of multiple layers, needs to be secured on all fronts. Taking this into consideration, the CNCF has released '9 Kubernetes Security Best Practices Everyone Must Follow':

#1 Upgrade to the latest version
Kubernetes has quarterly updates that include various bug and security fixes. Customers are advised to always upgrade to the latest release, with updated security patches, to harden their systems.

#2 Role-Based Access Control (RBAC)
Users can control who can access the Kubernetes API, and with what permissions, by enabling RBAC. The blog advises against giving anyone cluster-admin privileges and recommends granting access only as needed, on a case-by-case basis (a minimal RBAC sketch follows this article).

#3 Namespaces for security boundaries
Namespaces provide an important level of isolation between components. The CNCF also notes that it is easier to apply security controls and policies when workloads are deployed in separate namespaces.

#4 Keeping sensitive workloads separate
Sensitive workloads should be run on a dedicated set of machines. This means that if a less secure application connected to a sensitive workload is compromised, the sensitive workload remains unaffected.

#5 Securing cloud metadata access
Sensitive metadata storing confidential information, such as credentials, can be stolen and misused. The blog advises users to use Google Kubernetes Engine's metadata concealment feature to avoid this mishap.

#6 Cluster network policies
Network policies let developers control network access to and from their containerized applications.

#7 Implementing a cluster-wide Pod Security Policy
A Pod Security Policy defines how workloads are allowed to run in a cluster.

#8 Improve node security
Users should ensure that each host is configured correctly and securely by checking the node's configuration against CIS Benchmarks. Ensure your network blocks access to ports that can be exploited by malicious actors, and minimize the administrative access given to Kubernetes nodes.

#9 Audit logging
Audit logs should be enabled and monitored for anomalous API calls and authorization failures, which can indicate that a malicious actor is trying to get into your system. The blog advises users to further look for tools to assist them in continuous monitoring and protection of their containers.

You can head over to the Cloud Native Computing Foundation's official blog to read more about these best practices.

Read next
- CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
- Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
- Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
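To illustrate practice #2, here is a minimal, hedged RBAC sketch; the namespace, names, and permission set are illustrative, not from the CNCF blog.

```bash
# Grant read-only access to pods in one namespace instead of cluster-admin.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: User
    name: jane@example.com   # placeholder identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```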


Amazon S3 Update – Three New Security & Access Control Features from AWS News Blog

Matthew Emerick
02 Oct 2020
5 min read
A year or so after we launched Amazon S3, I was in an elevator at a tech conference and heard a couple of developers use "just throw it into S3" as the answer to their data storage challenge. I remember that moment well because the comment was made so casually, and it was one of the first times that I fully grasped just how quickly S3 had caught on. Since that launch, we have added hundreds of features and multiple storage classes to S3, while also reducing the cost to store a gigabyte of data for a month by almost 85% (from $0.15 to $0.023 for S3 Standard, and as low as $0.00099 for S3 Glacier Deep Archive). Today, our customers use S3 to support many different use cases, including data lakes, backup and restore, disaster recovery, archiving, and cloud-native applications.

Security & Access Control
As the set of use cases for S3 has expanded, our customers have asked us for new ways to regulate access to their mission-critical buckets and objects. We added IAM policies many years ago and Block Public Access in 2018. Last year we added S3 Access Points (Easily Manage Shared Data Sets with Amazon S3 Access Points) to help you manage access in large-scale environments that might encompass hundreds of applications and petabytes of storage. Today we are launching S3 Object Ownership as a follow-on to two other S3 security and access control features that we launched earlier this month. All three features are designed to give you even more control and flexibility:
- Object Ownership – You can now ensure that newly created objects within a bucket have the same owner as the bucket.
- Bucket Owner Condition – You can now confirm the ownership of a bucket when you create a new object or perform other S3 operations.
- Copy API via Access Points – You can now access S3's Copy API through an Access Point.

You can use all of these new features in all AWS regions at no additional charge. Let's take a look at each one!

Object Ownership
With the proper permissions in place, S3 already allows multiple AWS accounts to upload objects to the same bucket, with each account retaining ownership and control over the objects. This many-to-one upload model can be handy when using a bucket as a data lake or another type of data repository: internal teams or external partners can all contribute to the creation of large-scale centralized resources. With this model, however, the bucket owner does not have full control over the objects in the bucket and cannot use bucket policies to share objects, which can lead to confusion.

You can now use a new per-bucket setting to enforce uniform object ownership within a bucket. This will simplify many applications and obviates the need for the Lambda-powered self-COPY that has been a popular workaround until now. Because this setting changes the behavior seen by the account that is uploading, the PUT request must include the bucket-owner-full-control ACL; you can also choose to use a bucket policy that requires the inclusion of this ACL. To get started, open the S3 Console, locate the bucket, view its Permissions, click Object Ownership, click Edit, select Bucket owner preferred, and click Save. As mentioned earlier, you can use a bucket policy to enforce object ownership (read About Object Ownership and this Knowledge Center article to learn more).

Many AWS services deliver data to the bucket of your choice and are now equipped to take advantage of this feature. S3 Server Access Logging, S3 Inventory, S3 Storage Class Analysis, AWS CloudTrail, and AWS Config now deliver data that you own. You can also configure Amazon EMR to use this feature by setting fs.s3.canned.acl to BucketOwnerFullControl in the cluster configuration (learn more). Keep in mind that this feature does not change the ownership of existing objects. Also, note that you will now own more S3 objects than before, which may cause changes to the numbers you see in your reports and other metrics. AWS CloudFormation support for Object Ownership is under development and is expected to be ready before AWS re:Invent.

Bucket Owner Condition
This feature lets you confirm that you are writing to a bucket that you own. You simply pass a numeric AWS account ID to any of the S3 Bucket or Object APIs using the expectedBucketOwner parameter or the x-amz-expected-bucket-owner HTTP header. The ID indicates the AWS account that you believe owns the subject bucket. If there's a match, the request will proceed as normal; if not, it will fail with a 403 status code. To learn more, read Bucket Owner Condition.

Copy API via Access Points
S3 Access Points give you fine-grained control over access to your shared data sets. Instead of managing a single, possibly complex policy on a bucket, you can create an access point for each application and then use an IAM policy to regulate the S3 operations that are made via that access point (read Easily Manage Shared Data Sets with Amazon S3 Access Points to see how they work). You can now use S3 Access Points in conjunction with the S3 CopyObject API by using the ARN of the access point instead of the bucket name (read Using Access Points to learn more).

Use Them Today
As mentioned earlier, you can use all of these new features in all AWS regions at no additional charge.

— Jeff
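Here is a minimal, hedged sketch of the two request-level mechanisms with the AWS CLI; the bucket name, key, and account ID are placeholders.

```bash
# Upload with the bucket-owner-full-control ACL so that, with the new
# per-bucket setting, the bucket owner ends up owning the object.
aws s3api put-object \
  --bucket shared-data-lake \
  --key incoming/report.csv \
  --body report.csv \
  --acl bucket-owner-full-control

# Bucket Owner Condition: the request fails with a 403 unless the bucket
# is actually owned by the expected account.
aws s3api put-object \
  --bucket shared-data-lake \
  --key incoming/report.csv \
  --body report.csv \
  --expected-bucket-owner 123456789012
```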

Windows Server 2019 comes with security, storage and other changes

Prasad Ramesh
21 Dec 2018
5 min read
Today, Microsoft unveiled new features of Windows Server 2019. The new features are based on four themes—hybrid, security, application platform, and Hyper-Converged Infrastructure (HCI). General changes Windows Server 2019, being a Long-Term Servicing Channel (LTSC) release, includes Desktop Experience. During setup, there are two options to choose from: Server Core installations or Server with Desktop Experience installations. A new feature called System Insights brings local predictive analytics capabilities to Windows Server 2019. This feature is powered by machine learning and aimed to help users reduce operational expenses associated with managing issues in Windows Server deployments. Hybrid cloud in Windows Server 2019 Another feature called the Server Core App Compatibility feature on demand (FOD) greatly improves the app compatibility in the Windows Server Core installation option. It does so by including a subset of binaries and components from Windows Server with the Desktop Experience included. This is done without adding the Windows Server Desktop Experience graphical environment itself. The purpose is to increase the functionality of Windows server while keeping a small footprint. This feature is optional and is available as a separate ISO to be added to Windows Server Core installation. New measures for security There are new changes made to add a new protection protocol, changes in virtual machines, networking, and web. Windows Defender Advanced Threat Protection (ATP) Now, there is a Windows Defender program called Advanced Threat Protection (ATP). ATP has deep platform sensors and response actions to expose memory and kernel level attacks. ATP can respond via suppressing malicious files and also terminating malicious processes. There is a new set of host-intrusion prevention capabilities called the Windows Defender ATP Exploit Guard. The components of ATP Exploit Guard are designed to lock down and protect a machine against a wide variety of attacks and also block behaviors common in malware attacks. Software Defined Networking (SDN) SDN delivers many security features which increase customer confidence in running workloads, be it on-premises or as a cloud service provider. These enhancements are integrated into the comprehensive SDN platform which was first introduced in Windows Server 2016. Improvements to shielded virtual machines Now, users can run shielded virtual machines on machines which are intermittently connected to the Host Guardian Service. This leverages the fallback HGS and offline mode features. There are troubleshooting improvements to shield virtual machines by enabling support for VMConnect Enhanced Session Mode and PowerShell Direct. Windows Server 2019 now supports Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server inside shielded virtual machines. Changes for faster and safer web Connections are coalesced to deliver uninterrupted and encrypted browsing. For automatic connection failure mitigation and ease of deployment, HTTP/2’s server-side cipher suite negotiation is upgraded. Storage Three storage changes are made in Windows Server 2019. Storage Migration Service It is a new technology that simplifies migrating servers to a newer Windows Server version. It has a graphical tool that lists data on servers and transfers the data and configuration to newer servers. Their users can optionally move the identities of the old servers to the new ones so that apps and users don’t have to make changes. 
Storage Spaces Direct

There are new features in Storage Spaces Direct:

- Deduplication and compression capabilities for ReFS volumes
- Native support for persistent memory
- Nested resiliency for two-node hyper-converged infrastructure at the edge
- Two-server clusters which use a USB flash drive as a witness
- Support for Windows Admin Center
- Display of performance history
- Scale up to 4 petabytes per cluster
- Mirror-accelerated parity that is two times faster
- Drive latency outlier detection
- Increased fault tolerance by manually delimiting the allocation of volumes

Storage Replica

Storage Replica is now also available in the Windows Server 2019 Standard edition. A new feature called test failover allows mounting of destination storage to validate replication or backup data. Performance improvements are made, and Windows Admin Center support is added.

Failover clustering

New features in failover clustering include:

- Cluster sets and Azure-aware clusters
- Cross-domain cluster migration
- USB witness
- Cluster infrastructure improvements
- Cluster Aware Updating support for Storage Spaces Direct
- File share witness enhancements
- Cluster hardening
- Failover Cluster no longer using NTLM authentication

Application platform changes in Windows Server 2019

Users can now run Windows and Linux-based containers on the same container host by using the same docker daemon (see the sketch after this article). Changes are continually being made to improve support for Kubernetes. A number of improvements are made to containers, such as changes to identity, compatibility, reduced size, and higher performance. Virtual network encryption now allows virtual network traffic to be encrypted between virtual machines that communicate within subnets marked as Encryption Enabled. There are also improvements to network performance for virtual workloads, the time service, SDN gateways, a new deployment UI, and persistent memory support for Hyper-V VMs.

For more details, visit the Microsoft website.

OpenSSH, now a part of the Windows Server 2019
Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019
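To make the mixed-OS container support concrete, here is a minimal sketch using the Docker SDK for Python against a single daemon. It assumes a Windows Server 2019 host with the experimental Linux container (LCOW) support enabled and the Docker SDK installed; the image tags are illustrative and may need adjusting for your environment.

```python
# Minimal sketch: running a Windows and a Linux container against the same
# docker daemon on a Windows Server 2019 host. Assumes experimental Linux
# container (LCOW) support is enabled; image tags are illustrative.
import docker

client = docker.from_env()  # talks to the single local docker daemon

# A Windows-based container (Nano Server matches the 1809/Server 2019 wave).
win_out = client.containers.run(
    "mcr.microsoft.com/windows/nanoserver:1809",
    ["cmd", "/c", "echo hello from a Windows container"],
    remove=True,
)

# A Linux-based container, scheduled on the same host via the same daemon.
linux_out = client.containers.run(
    "alpine:3.8",
    ["echo", "hello from a Linux container"],
    remove=True,
)

print(win_out.decode(), linux_out.decode())
```

Because both containers are created through one daemon, orchestrators that speak the Docker API can schedule Windows and Linux workloads side by side on the same node.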

Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads

Savia Lobo
21 Jun 2018
2 min read
At the Computer Vision and Pattern Recognition (CVPR) conference, Nvidia recently announced that it will make Kubernetes available on its GPUs. Although the technology is not yet generally available, developers are allowed to use it in order to test the software and provide feedback.

Source: Kubernetes on Nvidia GPUs

Kubernetes on NVIDIA GPUs allows developers and DevOps engineers to build and deploy scalable GPU-accelerated deep learning training, and to create inference applications on multi-cloud GPU clusters. Using this technology, developers can handle the growing number of AI applications and services by automating processes such as the deployment, maintenance, scheduling, and operation of GPU-accelerated application containers. Deep learning and HPC applications can be orchestrated on heterogeneous GPU clusters, with easy-to-specify attributes such as GPU type and memory requirements. Integrated metrics and monitoring capabilities help with analyzing and improving GPU utilization on clusters.

Interesting features of Kubernetes on Nvidia GPUs include:

- GPU support in Kubernetes via the NVIDIA device plugin
- Easy specification of GPU attributes such as GPU type and memory requirements for deployment in heterogeneous GPU clusters (see the sketch after this list)
- Visualizing and monitoring GPU metrics and health with an integrated GPU monitoring stack of NVIDIA DCGM, Prometheus, and Grafana
- Support for multiple underlying container runtimes such as Docker and CRI-O
- Official support on all NVIDIA DGX systems (DGX-1 Pascal, DGX-1 Volta, and DGX Station)

Read more about this news on the Nvidia Developer blog.

NVIDIA brings new deep learning updates at CVPR conference
Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!
Distributed TensorFlow: Working with multiple GPUs and servers
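As a rough illustration of how GPU attributes can be requested, here is a minimal sketch using the official Kubernetes Python client. It assumes a cluster where the NVIDIA device plugin is installed; the container image, namespace, and node label used here are illustrative assumptions rather than part of Nvidia's announcement.

```python
# Minimal sketch: requesting a GPU for a pod via the Kubernetes Python client.
# Assumes the NVIDIA device plugin is installed in the cluster; the image,
# node label, and namespace below are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use local kubeconfig credentials

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cuda-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        # Hypothetical node label used to pin the pod to a specific GPU type
        # in a heterogeneous cluster.
        node_selector={"accelerator": "nvidia-tesla-v100"},
        containers=[
            client.V1Container(
                name="cuda-container",
                image="nvidia/cuda:9.0-base",
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    # The device plugin exposes GPUs as a schedulable resource.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Once the pod is scheduled, a DCGM/Prometheus/Grafana stack like the one mentioned above can report per-GPU utilization for the node it lands on.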