
Tech News - Cloud Computing

175 Articles

Fastly announces the next-gen edge computing services available in private beta

Fatema Patrawala
08 Nov 2019
4 min read
Fastly, a San Francisco-based edge cloud platform startup, yesterday announced the private beta launch of Compute@Edge, its new edge computing service. Compute@Edge is a powerful, language-agnostic compute environment. The launch marks an evolution of Fastly's edge computing capabilities and the company's continued innovation in the serverless space.

https://twitter.com/fastly/status/1192080450069643264

Fastly's Compute@Edge is designed to empower developers to build far more advanced edge applications with greater security, more robust logic, and new levels of performance. Developers can also create new and improved digital experiences with their own technology choices around cloud platforms, services, and programming languages. Rather than have developers spend time on operational overhead, the company's goal is to continue reinventing the way end users live, work, and play on the web. Compute@Edge gives developers the freedom to push complex logic closer to end users.

"When we started Fastly, we sought to build a platform with the power to realize the future of edge computing — from our software-defined modern network to our point of presence design, everything has led us to this point," explained Tyler McMullen, CTO of Fastly. "With this launch, we're excited to double down on that vision and work with enterprises to help them build truly complete applications in an environment that offers new levels of stability, security, and global scale."

We had the opportunity to interview Fastly's CTO Tyler McMullen a few months back, where we discussed Fastly's Lucet and the future of WebAssembly and Rust, among other things. You can read the full interview here.

Fastly Compute@Edge leverages speed for global scale and security

Fastly's Compute@Edge environment promises a startup time of 35.4 microseconds, which the company claims is 100x faster than any other solution on the market. Compute@Edge is powered by Lucet, Fastly's open source WebAssembly compiler and runtime, and supports Rust as a second language alongside Varnish Configuration Language (VCL).

Other benefits of Compute@Edge include:

- Code can be computed around the world instead of in a single region. This allows developers to reduce code execution latency and further optimize the performance of their code without worrying about managing the underlying infrastructure.
- The unmatched speed at which the environment operates, combined with Fastly's isolated sandboxing technology, reduces the risk of accidental data leakage. With a "burn-after-reading" approach to request memory, entire classes of vulnerabilities are eliminated.
- Developers can serve GraphQL from the network edge and deliver more personalized experiences.
- Developers can build their own customized API protection logic.
- With manifest manipulation, developers can deliver content with a "best-performance-wins" approach - like multi-CDN live streams that run smoothly for users around the world.

Fastly has operated in the serverless market since its founding in 2011 through its Edge Cloud Platform, with products like Full Site Delivery, Load Balancer, DDoS protection, and Web Application Firewall (WAF). To date, Fastly's serverless computing offering has focused on delivery-centric use cases via its VCL-powered programmable edge. With the introduction of Compute@Edge, Fastly unlocks even more powerful and widely applicable computing capabilities. To learn more about Fastly's edge computing and cloud services, you can visit its official blog.

Developers who are interested in joining the private beta can sign up on this page.

Fastly SVP, Adam Denenberg on Fastly's new edge resources, edge computing, fog computing, and more
Fastly, edge cloud platform, files for IPO
Fastly open sources Lucet, a native WebAssembly compiler and runtime
"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

Azure Functions 2.0 launches with better workload support for serverless

Melisha Dsouza
25 Sep 2018
2 min read
Microsoft has announced the general availability of Azure Functions 2.0. The new release aims to handle demanding workloads, which should make managing the scale of serverless applications easier than ever before. With an improved user experience and new developer capabilities, the release is evidence of Microsoft looking to take full advantage of interest in serverless computing.

New features in Azure Functions 2.0

Azure Functions can now run on more platforms

Azure Functions is now supported in more environments, including local Mac or Linux machines. Integration with VS Code helps developers have a best-in-class serverless development experience on any platform.

Code optimizations

Functions 2.0 adds general host improvements, support for more modern language runtimes, and the ability to run code from a package file. .NET developers can now author functions using .NET Core 2.1, which provides a significant performance gain and makes it possible to develop and run .NET functions in more places. Assembly resolution has been improved to reduce the number of conflicts. Functions 2.0 supports both Node 8 and Node 10, with improved performance in general.

A powerful new programming model

The bindings and integrations of Functions 1.0 have been improved in Functions 2.0. All bindings are now brought in as extensions; the change to decoupled extension packages allows bindings (and their dependencies) to be versioned without depending on the core runtime. The recent launch of Azure SignalR Service, a fully managed service, lets developers focus on building real-time web experiences without worrying about setting up, hosting, scaling, or load balancing the SignalR server. You can find an extension for this service in this GitHub repo, and check out the SignalR Service binding reference to start building real-time serverless applications.

Easier development

To improve productivity, Microsoft has introduced powerful native tooling inside Visual Studio, VS Code, and VS for Mac, plus a CLI that can be run alongside any code editing experience. Functions 2.0 also gives more visibility into distributed tracing: dependencies are automatically tracked, and cross-resource connections are automatically correlated across a variety of services.

To know more about the updates in Azure Functions 2.0, head to Microsoft's official blog.

Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary.
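To make the new programming model concrete, here's a minimal sketch of an HTTP-triggered function using the Python worker (in preview when 2.0 went GA). The trigger and bindings are declared in an accompanying function.json file, not shown here, and all names are illustrative rather than taken from Microsoft's samples.

```python
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # The HTTP trigger binding (declared in function.json) delivers the
    # request; returning an HttpResponse flows out through the output binding.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```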

StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities

Bhagyashree R
13 Nov 2019
3 min read
Today, StackRox, a Kubernetes-native container security platform provider, announced StackRox Kubernetes Security Platform 3.0. This release includes industry-first features for configuration and vulnerability management that enable businesses to achieve stronger protection of cloud-native, containerized applications.

In a press release, Wei Lien Dang, StackRox's vice president of product and co-founder, said, "When it comes to Kubernetes security, new challenges related to vulnerabilities and misconfigurations continue to emerge. DevOps and Security teams need solutions that quickly and easily solve these issues. StackRox 3.0 is the first container security platform with the capabilities orgs need to effectively deal with Kubernetes configurations and vulnerabilities, so they can reduce risk to what matters most - their applications and their customer's data."

What's new in StackRox Kubernetes Security Platform 3.0

Features for configuration management

- Interactive dashboards: Users can view risk-prioritized misconfigurations, easily drill down to critical information about each misconfiguration, and determine the context required for effective remediation.
- Kubernetes role-based access control (RBAC) assessment: StackRox continuously monitors permissions for users and service accounts to help mitigate excessive privileges being granted.
- Kubernetes secrets access monitoring: The platform discovers secrets in Kubernetes and monitors which deployments can use them, to limit unnecessary access (a small illustration follows below).
- Kubernetes-specific policy enforcement: StackRox identifies configurations in Kubernetes related to network exposure, privileged containers, root processes, and other factors to determine policy violations.

Advanced vulnerability management capabilities

- Interactive dashboards: StackRox Kubernetes Security Platform 3.0 has interactive views that provide risk-prioritized snapshots across your environment, highlighting vulnerabilities in both images and Kubernetes.
- Discovery of Kubernetes vulnerabilities: The platform gives you visibility into critical vulnerabilities in the Kubernetes platform itself, including those related to the Kubernetes API server disclosed by the Kubernetes product security team.
- Language-specific vulnerabilities: StackRox scans container images for additional vulnerabilities that are language-dependent, providing greater coverage across containerized applications.

Along with the aforementioned features, StackRox Kubernetes Security Platform 3.0 adds support for various ecosystem platforms, including CRI-O (the Open Container Initiative (OCI)-compliant implementation of the Kubernetes Container Runtime Interface (CRI)), Google Anthos, Microsoft Teams integration, and more.

These are a few of the latest capabilities shipped in StackRox Kubernetes Security Platform 3.0. To know more, you can check out live demos and Q&A by the StackRox team at KubeCon 2019, happening November 18-21 in San Diego, California, which brings together adopters and technologists from leading open source and cloud-native communities.

Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
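StackRox's implementation is proprietary, but a minimal sketch can illustrate the kind of visibility its secrets access monitoring provides. The following uses the official Kubernetes Python client and assumes a cluster reachable through a local kubeconfig; it simply reports which deployments mount which secrets.

```python
from kubernetes import client, config

def deployments_mounting_secrets():
    """Report which deployments mount which secrets: the class of
    visibility a secrets-access monitor provides."""
    config.load_kube_config()  # assumes a local kubeconfig
    apps = client.AppsV1Api()
    for dep in apps.list_deployment_for_all_namespaces().items:
        for vol in dep.spec.template.spec.volumes or []:
            if vol.secret is not None:
                print(f"{dep.metadata.namespace}/{dep.metadata.name} "
                      f"mounts secret {vol.secret.secret_name}")

if __name__ == "__main__":
    deployments_mounting_secrets()
```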

Is cloud mining profitable?

Richard Gall
24 May 2018
5 min read
Cloud mining has become one of the biggest trends in Bitcoin and cryptocurrency. The reason is simple: it makes mining Bitcoin incredibly easy. By using the cloud rather than your own hardware to mine Bitcoin, you can avoid the stress and inconvenience of managing hardware. Instead of using the processing power of hardware you own, you share the processing power of the cloud (or, more specifically, a remote data center).

In theory, cloud mining should be much more profitable than mining with your own hardware. However, it's easy to be caught out. At best, some schemes are useless - at worst, they could be seen as a bit of a pyramid scheme. For this reason, it's essential you do your homework. Although there are risks associated with cloud mining, it does have benefits. Arguably it makes Bitcoin, and cryptocurrency in general, more accessible to ordinary people. Provided people get to know the area - what works and what definitely doesn't - it could be a positive opportunity for many people.

How to start cloud mining

Let's first take a look at the different methods of cloud mining. If you're going to do it properly, it's worth taking some time to consider your options. At a top level, there are three different types of cloud mining.

Renting out your hashing power

This is the most common form of cloud mining. To do this, you simply 'rent out' a certain amount of your computer's hashing power. In case you don't know, hashing power is essentially your hardware's processing power; it's what allows your computer to use and run algorithms.

Hosted mining

As the name suggests, this is where you use an external machine to mine Bitcoin. To do this, you'll have to sign up with a cloud mining provider. If you do this, you'll need to be clear on their terms and conditions, and take care when calculating profitability.

Virtual hosted mining

Virtual hosted mining is a hybrid approach to cloud mining. To do this, you use a personal virtual server and then install the required software. This approach can be a little more fun, especially if you want to build your own Bitcoin mining setup, but of course it poses challenges too. Depending on what you want to achieve, any of these options may be right for you.

Which cloud mining provider should you choose?

As you'd expect from a trend that's growing rapidly, there's a huge number of cloud mining providers out there. The downside is that there are plenty of dubious providers that aren't going to be profitable for you. For this reason, it's best to do your research and read what others have to say.

One of the most popular cloud mining providers is Hashflare. With Hashflare, you can mine a number of different cryptocurrencies, including Bitcoin, Ethereum, and Litecoin. You can also select your 'mining pool', which is something many providers won't let you do. Controlling the profitability of cloud mining can be difficult, so having control over your mining pool could be important. A mining pool is a bit like a hedge fund - a group of people pool together their processing resources, and the 'payout' is split according to the amount of work put in to create what's called a 'block', which is essentially a record or ledger of transactions.

Hashflare isn't the only cloud mining solution available. Genesis Mining is another very high-profile provider. It's incredibly accessible - you can begin a Bitcoin mining contract for just $15.99. Of course, the more you invest, the better the deal you'll get.
For a detailed exploration and comparison of cloud mining solutions, this TechRadar article is very useful. Take a look before you make any decisions!

How can I ensure cloud mining is profitable?

It's impossible to ensure profitability. Remember - cloud mining providers are out to make a profit. Although you might well make a profit, it's not necessarily in their interests to be paying money out to you.

Calculating cloud mining profitability can be immensely complex. To do it properly, you need to be clear on all the elements that will impact profitability. These include:

- The cryptocurrency you are mining
- How much mining will cost per unit of hashing power
- The growth rate of block difficulty
- How the network hashrate might increase over the length of your mining contract

There are lots of mining calculators out there that you can use to estimate how profitable cloud mining is likely to be (a toy example follows below). This article is particularly good at outlining how you can go about calculating cloud mining profitability. Its conclusion is an interesting take that's worth considering if you are interested in starting cloud mining: is it "profitable because the underlying cryptocurrency went up, or because the mining itself was profitable?" As the writer points out, if it is the cryptocurrency's value, then you might just be better off buying the cryptocurrency.

Read next

A brief history of Blockchain
Write your first Blockchain: Learning Solidity Programming in 15 minutes
"The Blockchain to Fix All Blockchains" – Overledger, the meta blockchain, will connect all existing blockchains
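Here's the toy profitability sketch referenced above. It is a deliberate simplification of the factors listed in the article - contract cost, hashing power, difficulty growth, and fees - and every number in the example call is an illustrative assumption, not market data.

```python
def cloud_mining_profit(contract_cost, hashrate_ths, days,
                        coin_price, network_hashrate_ths,
                        daily_coins_issued, difficulty_growth=0.0,
                        daily_fee=0.0):
    """Rough profitability estimate for a cloud mining contract.

    Toy model: your share of daily coin issuance scales with your
    fraction of the network hashrate, and the network hashrate grows
    by `difficulty_growth` per day (compounding).
    """
    revenue = 0.0
    network = network_hashrate_ths
    for _ in range(days):
        share = hashrate_ths / network
        revenue += share * daily_coins_issued * coin_price
        revenue -= daily_fee  # maintenance fee charged by the provider
        network *= 1.0 + difficulty_growth
    return revenue - contract_cost

# Example with made-up numbers: a $500 contract for 10 TH/s over a year
print(cloud_mining_profit(contract_cost=500, hashrate_ths=10, days=365,
                          coin_price=8000, network_hashrate_ths=40e6,
                          daily_coins_issued=1800, difficulty_growth=0.001,
                          daily_fee=0.10))
```

If the result is negative under realistic inputs, buying the cryptocurrency outright may be the better trade, which is exactly the article's closing point.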

Optimize your Azure workloads with Azure Advisor Score

Matthew Emerick
07 Oct 2020
3 min read
Modern engineering practices, like Agile and DevOps, are redirecting the ownership of security, operations, and cost management from centralized teams to workload owners - catalyzing innovations at a higher velocity than in traditional data centers. In this new world, workload owners are expected to build, deploy, and manage cloud workloads that are secure, reliable, performant, and cost-effective.

If you're a workload owner, you want well-architected deployments, so you might be wondering: how well are you doing today? Of all the actions you can take, which ones will make the biggest difference for your Azure workloads? And how will you know if you're making progress? That's why we created Azure Advisor Score - to help you understand how well your Azure workloads are following best practices, assess how much you stand to gain by remediating issues, and prioritize the most impactful recommendations you can take to optimize your deployments.

Introducing Advisor Score

Advisor Score enables you to get the most out of your Azure investment using a centralized dashboard to monitor and work towards optimizing the cost, security, reliability, operational excellence, and performance of your Azure resources. Advisor Score will help you:

- Assess how well you're following the best practices defined by Azure Advisor and the Microsoft Azure Well-Architected Framework.
- Optimize your deployments by taking the most impactful actions first.
- Report on your well-architected progress over time.

Baselining is one great use case we've already seen with customers. You can use Advisor Score to baseline yourself and track your progress over time toward your goals by reviewing your score's daily, weekly, or monthly trends. Then, to reach your goals, you can take action first on the individual recommendations and resources with the most impact.

How Advisor Score works

Advisor Score measures how well you're adopting Azure best practices by comparing and quantifying the impact of the Advisor recommendations you're already following and the ones you haven't implemented yet. Think of it as a gap analysis for your deployed Azure workloads.

The overall score is calculated on a scale from 0 percent to 100 percent, both in aggregate and separately for cost, security (coming soon), reliability, operational excellence, and performance. A score of 100 percent means all your resources follow all the best practices recommended in Advisor. On the other end of the spectrum, a score of 0 percent means that none of your resources follow the recommended best practices.

Advisor Score weighs all resources, both those with and without active recommendations, by their individual cost relative to your total spend. This builds on the assumption that the resources which consume a greater share of your total investment in Azure are more critical to your workloads. Advisor Score also adds weight to resources with longstanding recommendations; the idea is that the accumulated impact of these recommendations grows the longer they go unaddressed.

Review your Advisor Score today

Check your Advisor Score today by visiting Azure Advisor in the Azure portal. To learn more about the model behind Advisor Score and see examples of how the score is calculated, review our Advisor Score documentation and this behind-the-scenes blog from our data science team about the development of Advisor Score.
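The exact scoring model is described in the Advisor Score documentation; as a rough illustration of the cost-weighting idea described above, here's a toy version in Python. It is not Microsoft's formula - it just shows how resources that consume more spend pull the score harder.

```python
def advisor_score(resources):
    """Toy cost-weighted score. Each resource is a dict with its monthly
    `cost` and a `healthy` flag (True = no outstanding recommendations).
    A simplification for illustration, not Microsoft's actual model."""
    total_cost = sum(r["cost"] for r in resources)
    if total_cost == 0:
        return 100.0
    healthy_cost = sum(r["cost"] for r in resources if r["healthy"])
    return 100.0 * healthy_cost / total_cost

resources = [
    {"name": "vm-1", "cost": 400.0, "healthy": False},   # hypothetical VM
    {"name": "sql-1", "cost": 100.0, "healthy": True},   # hypothetical DB
]
print(f"Score: {advisor_score(resources):.0f}%")  # prints "Score: 20%"
```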

CNCF releases 9 security best practices for Kubernetes to protect a customer's infrastructure

Melisha Dsouza
15 Jan 2019
3 min read
According to CNCF's bi-annual survey conducted in August 2018, 83% of respondents prefer Kubernetes for its container management tools, 58% of respondents use Kubernetes in production, 42% are evaluating it for future use, and 40% of enterprise companies (5000+) are running Kubernetes in production. These statistics give us a clear picture of the popularity of Kubernetes among developers as a container orchestrator. However, the recently discovered (and now patched) security flaw in Kubernetes, which enabled attackers to compromise clusters and perform illicit activities, did raise concerns among developers. A container environment like Kubernetes, consisting of multiple layers, needs to be secured on all fronts. Taking this into consideration, the CNCF has released '9 Kubernetes Security Best Practices Everyone Must Follow'.

#1 Upgrade to the latest version

Kubernetes has a quarterly update that features various bug and security fixes. Customers are advised to always upgrade to the latest release with updated security patches to harden their systems.

#2 Role-based access control (RBAC)

Users can control who can access the Kubernetes API and what permissions they have by enabling RBAC. The blog advises users against giving anyone cluster-admin privileges and to grant access only as needed on a case-by-case basis. (A small audit script illustrating this check follows below.)

#3 Namespaces for security boundaries

Namespaces provide an important level of isolation between components. The CNCF also notes that it is easier to apply security controls and policies when workloads are deployed in separate namespaces.

#4 Keeping sensitive workloads separate

Sensitive workloads should be run on a dedicated set of machines. This means that if a less secure application connected to a sensitive workload is compromised, the latter remains unaffected.

#5 Securing cloud metadata access

Sensitive metadata storing confidential information, such as credentials, can be stolen and misused. The blog advises users to use Google Kubernetes Engine's metadata concealment feature to avoid this mishap.

#6 Cluster network policies

Network policies let developers control network access to their containerized applications.

#7 Implementing a cluster-wide Pod Security Policy

This defines how workloads are allowed to run in a cluster.

#8 Improve node security

Users should ensure that the host is configured correctly and securely by checking the node's configuration against CIS benchmarks. Ensure your network blocks access to ports that can be exploited by malicious actors, and minimize the administrative access given to Kubernetes nodes.

#9 Audit logging

Audit logs should be enabled and monitored for anomalous API calls and authorization failures. These can indicate that a malicious hacker is trying to get into your system.

The blog advises users to further look for tools to assist them in continuous monitoring and protection of their containers. You can head over to the Cloud Native Computing Foundation's official blog to read more about these best practices.

CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
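Here's the small audit script referenced under practice #2. It uses the official Kubernetes Python client to flag subjects bound to cluster-admin, the over-privilege the CNCF blog warns against. It assumes a cluster reachable through a local kubeconfig and is a sketch, not a complete RBAC review.

```python
from kubernetes import client, config

def find_cluster_admin_grants():
    """Flag subjects bound to the cluster-admin role, the kind of
    over-privilege the CNCF best practices warn against."""
    config.load_kube_config()  # assumes a local kubeconfig
    rbac = client.RbacAuthorizationV1Api()
    for binding in rbac.list_cluster_role_binding().items:
        if binding.role_ref.name == "cluster-admin":
            for subject in binding.subjects or []:
                print(f"{subject.kind} {subject.name} has cluster-admin "
                      f"via binding {binding.metadata.name}")

if __name__ == "__main__":
    find_cluster_admin_grants()
```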

Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!

Melisha Dsouza
22 Nov 2018
3 min read
On 19th November, Microsoft announced the first release candidate (RC) of Azure DevOps Server 2019. Azure DevOps Server 2019 includes the new, fast, and clean Azure DevOps user interface and delivers the codebase of Microsoft Azure DevOps, optimized for customers who prefer to self-host.

Features of Azure DevOps Server 2019

In addition to existing SQL Server support, Azure DevOps Server now also supports Azure SQL. Customers can self-host Azure DevOps in their own datacenter using an on-premises SQL Server, or take advantage of Azure SQL capabilities and performance, like backup features and scaling options, while reducing the administrative overhead of running the service by self-hosting Azure DevOps in the cloud. Customers can also use the globally available Microsoft-hosted service to take advantage of automatic updates and automatic scaling of Azure DevOps Server.

Azure DevOps Server 2019 includes a new release management interface. Customers can easily understand how their deployment is taking place - it gives them better visibility of which bits are deployed to which environments, and why. Customers can also mix and match agents self-hosted on-premises and in any cloud on Windows, Mac, or Linux while easily deploying to IaaS or PaaS in Azure as well as on-premises infrastructure.

A new navigation and improved user experience in Azure DevOps

A newly introduced feature is the 'my work flyout'. This feature was developed after feedback that when a customer is in one part of the product and wants information from another part, they don't want to lose the context of their current task. With this new feature, customers can access the flyout from anywhere in the product, giving them a quick glance at crucial information like work items, pull requests, and all favorites.

For teams that use pull requests (PRs) and branch policies, there may be occasions when members need to override and bypass those policies. To help teams verify that those policy overrides are being used in the right situations, a new notification filter allows users and teams to receive email alerts any time a policy is bypassed.

The Tests tab now gives rich, in-context test information for Pipelines. It provides an in-progress test view, a full-page debugging experience, in-context test history, reporting on aborted test executions, and a run-level summary.

The UI has undergone significant testing, and the team suggests that for self-hosting customers the new navigation model may require updates to internal documentation and training. A direct upgrade to Azure DevOps Server is supported from Team Foundation Server 2012 and newer. Previous versions of Team Foundation Server will stay on the old user interface.

Check the Azure DevOps Server requirements and compatibility page to understand the dependencies required for a self-hosted installation. Head over to Microsoft's blog for more information on this news. You can download Azure DevOps Server 2019 RC1 and check out the release notes for all the features and information for this release.

A multi-factor authentication outage strikes Microsoft Office 365 and Azure users
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

#WeWontBuildIt: Amazon workers demand company to stop working with Palantir and take a stand against ICE

Fatema Patrawala
30 Jul 2019
4 min read
On Monday, a group of Amazon employees sent out an internal email to the We Won't Build It mailing list, calling on Amazon to stop working with Palantir. Palantir, a data analytics company founded by Peter Thiel, one of President Trump's most vocal supporters in Silicon Valley, has a strong association with Immigration and Customs Enforcement (ICE).

https://twitter.com/WeWontBuildIt/status/1155872860742664194

Last year in June, an alliance of more than 500 Amazon employees had signed a petition addressed to CEO Jeff Bezos and AWS head Andy Jassy, asking Amazon to abandon its contracts with government agencies. It seems that those protests are ramping up again.

The email sent to employee mailing lists within Amazon Web Services demanded that Palantir be removed from Amazon's cloud for violating its terms of service. It also called on Amazon to take a stand against ICE by making a statement establishing its position against immigration raids, deportations, and camps for migrants at the border. The employees also demanded that Amazon stop selling its facial recognition tech to government agencies.

https://twitter.com/WeWontBuildIt/status/1155872862055485441

In May, Amazon shareholders had rejected a proposal to ban the sale of its facial recognition tech to governments. With this, they had also rejected eleven other proposals made by employees, including a climate resolution, salary transparency, and other issues. "The world is watching the abuses in ICE's concentration camps unfold. We know that our company should, and can do better," the email read.

The protests broke out at Amazon's AWS Summit, held in New York last week on Thursday. As Amazon CTO Werner Vogels gave a presentation, a group led by a man identified in a tweet as a tech worker interrupted to protest Amazon's ties with ICE.

https://twitter.com/altochulo/status/1149305189800775680
https://twitter.com/MaketheRoadNY/status/1149306940377448449

Vogels was caught off guard by the protests but continued on about the specifics of AWS, according to ZDNet. "I'm more than willing to have a conversation, but maybe they should let me finish first," Vogels said amidst protesters, whose audio was cut off on Amazon's official livestream of the event, per ZDNet. "We'll all get our voices heard," he said before returning to his planned speech.

According to Business Insider, Palantir has a $51 million contract with ICE, which entails providing software to gather data on undocumented immigrants' employment information, phone records, immigration history, and similar information. Its software is hosted in the AWS cloud. The email states that Palantir enables ICE to violate the rights of others and that working with such a company is harmful to Amazon's reputation.

The employees also state that their protest is in the spirit of similar actions at companies including Wayfair, Microsoft, and Salesforce, where workers have protested against their employers to cut ties with ICE and US Customs and Border Protection (CBP).

Amazon has been facing increasing pressure from its employees. Last week, workers protested on Amazon Prime Day, demanding safe working conditions and fair wages. Amazon, which typically takes a cursory view of such employee outcry, has so far given no indication that it will reconsider providing services to Palantir and other law enforcement agencies. Instead, the company argues that the government should determine what constitutes "acceptable use" of the technology it sells.

"As we've said many times and continue to believe strongly, companies and government organizations need to use existing and new technology responsibly and lawfully," Amazon said to BuzzFeed News. "There is clearly a need for more clarity from governments on what is acceptable use of AI and ramifications for its misuse, and we've provided a proposed legislative framework for this. We remain eager for the government to provide this additional clarity and legislation, and will continue to offer our ideas and specific suggestions."

Other tech worker groups, like Google Walkout For Real Change and Ban Google for Pride, stand in solidarity with Amazon workers on this protest.

https://twitter.com/GoogleWalkout/status/1155976287803998210
https://twitter.com/NoPrideForGoog/status/1155906615930806276

#TechWontBuildIt: Entropic maintainer calls for a ban on Palantir employees contributing to the project and asks other open source communities to take a stand on ethical grounds
Amazon workers protest on its Prime day, demand a safe work environment and fair wages
Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact

Zone Redundancy for Azure Cache for Redis now in preview

Matthew Emerick
14 Oct 2020
3 min read
Between waves of pandemics, hurricanes, and wildfires, you don't need cloud infrastructure adding to your list of worries this year. Fortunately, there has never been a better time to ensure your Azure deployments stay resilient. Availability zones are one of the best ways to mitigate risks from outages and disasters. With that in mind, we are announcing the preview of zone redundancy in Azure Cache for Redis.

Availability Zones on Azure

Azure Availability Zones are geographically isolated datacenter locations within an Azure region, providing redundant power, cooling, and networking. By maintaining a physically separate set of resources with the low latency that comes from remaining in the same region, Azure Availability Zones provide a high availability solution that is crucial for businesses requiring resiliency and business continuity.

Redundancy options in Azure Cache for Redis

Azure Cache for Redis is increasingly becoming critical to our customers' data infrastructure. As a fully managed service, Azure Cache for Redis provides various high availability options. By default, caches in the standard or premium tier have built-in replication with a two-node configuration - a primary and a replica hosting two identical copies of your data. New in preview, Azure Cache for Redis can now support up to four nodes in a cache distributed across multiple availability zones. This update can significantly enhance the availability of your Azure Cache for Redis instance, giving you greater peace of mind and hardening your data architecture against unexpected disruption.

High availability for Azure Cache for Redis

The new redundancy features deliver better reliability and resiliency. First, this update expands the total number of replicas you can create. You can now implement up to three replica nodes in addition to the primary node. Having more replicas generally improves resiliency (even if they are in the same availability zone) because of the additional nodes backing up the primary. Even with more replicas, a datacenter-wide outage can still disrupt your application. That's why we're also enabling zone redundancy, allowing replicas to be located in different availability zones. Replica nodes can be placed in one or multiple availability zones, with failover automatically occurring if needed across availability zones. With zone redundancy, your cache can handle situations where the primary zone is knocked offline due to issues like floods, power outages, or even natural disasters. This increases availability while maintaining the low latency required from a cache.

Zone redundancy is currently only available on the premium tier of Azure Cache for Redis, but it will also be available on the enterprise and enterprise flash tiers when the preview is released.

Industry-leading service level agreement

Azure Cache for Redis already offers an industry-standard 99.9 percent service level agreement (SLA). With the addition of zone redundancy, the availability increases to a 99.95 percent level, allowing you to meet your availability needs while keeping your application nimble and scalable.

Adding zone redundancy to Azure Cache for Redis is a great way to promote availability and peace of mind during turbulent situations. Learn more in our documentation and give it a try today. If you have any questions or feedback, please contact us at AzureCache@microsoft.com.
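One practical note: replication and zone failover happen server-side, so application code doesn't change - clients keep talking to the same endpoint. Here's a minimal sketch with the redis-py client, where the cache name and access key are placeholders:

```python
import redis

# Hostname and key below are placeholders; Azure Cache for Redis exposes a
# single TLS endpoint (port 6380) no matter how many replicas back the cache.
cache = redis.Redis(
    host="mycache.redis.cache.windows.net",
    port=6380,
    password="<access-key>",
    ssl=True,
)

cache.set("greeting", "hello")
print(cache.get("greeting"))  # b'hello'
```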

Microsoft’s Immutable storage for Azure Storage Blobs, now generally available

Melisha Dsouza
21 Sep 2018
3 min read
Microsoft's new "immutable storage" feature for Azure Blobs is now generally available. Financial services organizations regulated by the Securities and Exchange Commission (SEC), Commodity Futures Trading Commission (CFTC), Financial Industry Regulatory Authority (FINRA), and others are required to retain business-related communications in a Write-Once-Read-Many (WORM), or immutable, state. This ensures that the data is non-erasable and non-modifiable for a specific retention interval. The healthcare, insurance, media, public safety, and legal services industries will also benefit a great deal from this feature.

Through configurable policies, users can only create and read blobs, not modify or delete them. There is no additional charge for using this feature; immutable data is priced in the same way as mutable data.

Read also: Microsoft introduces 'Immutable Blob Storage', a highly protected object storage for Azure

The capabilities that accompany this feature are:

#1 Regulatory compliance

Immutable storage for Azure Blobs will help financial institutions and related industries to store data immutably. Microsoft will soon release a technical white paper with details on how the feature addresses regulatory requirements. Head over to the Azure Trust Center for detailed information about compliance certifications.

#2 Secure document retention

The immutable storage feature for the Azure Blobs service ensures that data cannot be modified or deleted by any user, even one with administrative privileges.

#3 Better legal holds

Users can now store sensitive information related to litigation, criminal investigations, and more in a tamper-proof state for the desired duration.

#4 Time-based retention policy support

Users can set policies to store data immutably for a specified interval of time. (A sketch of setting such a policy programmatically follows below.)

#5 Legal hold policy support

When users do not know the data retention time, they can set legal holds to store data until the legal hold is cleared.

#6 Support for all blob tiers

WORM policies are independent of the Azure Blob Storage tier and apply to all tiers. Customers can therefore store their data immutably in the most cost-optimized tier for their workloads.

#7 Blob container-level configuration

Users can configure time-based retention policies and legal hold tags at the container level. Simple container-level settings can create time-based retention policies, lock policies, extend retention intervals, set legal holds, clear legal holds, and more.

17a-4 LLC, Commvault, HubStor, and Archive2Azure are among the Microsoft partners that support Azure Blob immutable storage. To learn how to use this feature, head over to the Microsoft blog.

Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
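As a sketch of the container-level configuration described in #4 and #7, here's what setting a time-based retention policy might look like with the azure-mgmt-storage Python SDK. Resource names are hypothetical, and the exact method and parameter shapes have varied across SDK versions, so treat this as an outline and check the current reference before using it.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Subscription, resource group, account, and container names are hypothetical.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Blobs in the "auditlogs" container become non-modifiable and non-erasable
# for 365 days from creation (a time-based WORM retention policy).
# Parameter shape is an assumption; it differs across SDK versions.
client.blob_containers.create_or_update_immutability_policy(
    resource_group_name="my-rg",
    account_name="mystorageacct",
    container_name="auditlogs",
    parameters={"immutability_period_since_creation_in_days": 365},
)
```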

Google Cloud releases a beta version of SparkR job types in Cloud Dataproc

Natasha Mathur
21 Dec 2018
2 min read
Earlier this week, Google released a beta version of SparkR jobs on Cloud Dataproc, a cloud service that lets you run Apache Spark and Apache Hadoop in a cost-effective manner. SparkR jobs will build out R support on GCP. SparkR is a package that delivers a lightweight front-end for using Apache Spark from R. It supports distributed machine learning using MLlib, and it can be used to process large Cloud Storage datasets and to perform computationally intensive work. The package also lets developers use dplyr-like operations (dplyr being a powerful R package that transforms and summarizes tabular data with rows and columns) on datasets stored in Cloud Storage.

The R programming language is very efficient when it comes to building data analysis tools and statistical apps. With cloud computing all the rage, even newer opportunities have opened up for developers working with R. Using GCP's Cloud Dataproc Jobs API, it is easy to submit SparkR jobs to a cluster without needing to open firewalls to access web-based IDEs or SSH onto the master node. With the API, it is easy to automate the repeatable R statistics that users want to run on their datasets. (A sketch of submitting a SparkR job through the API follows below.)

Additionally, GCP for R helps avoid the infrastructure barriers that put a limit on understanding data, such as having to sample datasets due to compute or data size limits. GCP also allows you to build large-scale models that help analyze datasets of sizes that would previously require big investments in high-performance computing infrastructure.

For more information, check out the official Google Cloud blog post.

Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Google Cloud's Titan and Android Pie come together to secure users' data on mobile devices
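Here's the submission sketch referenced above, using recent versions of the google-cloud-dataproc Python client (the request shape has changed across client versions, so adjust for yours). The project, cluster, and Cloud Storage URI are placeholders.

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# The job references an R script in Cloud Storage; no SSH or firewall
# changes are needed because submission goes through the Jobs API.
job = {
    "placement": {"cluster_name": "my-cluster"},  # hypothetical cluster
    "spark_r_job": {"main_r_file_uri": "gs://my-bucket/analysis.R"},
}
submitted = client.submit_job(
    request={"project_id": "my-project", "region": region, "job": job}
)
print(f"Submitted SparkR job {submitted.reference.job_id}")
```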

Google opts out of Pentagon’s $10 billion JEDI cloud computing contract, as it doesn’t align with its ethical use of AI principles

Bhagyashree R
09 Oct 2018
3 min read
Yesterday, Google announced that it will not be competing for the Pentagon's cloud computing contract, reportedly worth $10 billion. It opted out of bidding for the project, named Joint Enterprise Defense Infrastructure (JEDI), saying the project may conflict with its principles for the ethical use of AI.

The JEDI project involves moving massive amounts of Pentagon internal data to a commercially operated secure cloud system. The bidding for this contract began two months ago and closes this week (12th October). CNBC reported in July that Amazon is considered the number one choice for the contract because it already provides services for the cloud system used by U.S. intelligence agencies. Cloud providers such as IBM, Microsoft, and Oracle are also top contenders, as they have worked with government agencies for many decades; that experience could help their chances of winning the decade-long JEDI contract.

Why Google dropped out of the bidding

One of Google's spokespersons told TechCrunch that the main reason for opting out of the bidding is that it doesn't align with the company's AI principles: "While we are working to support the US government with our cloud in many areas, we are not bidding on the JEDI contract because first, we couldn't be assured that it would align with our AI Principles and second, we determined that there were portions of the contract that were out of scope with our current government certifications."

He further added: "Had the JEDI contract been open to multiple vendors, we would have submitted a compelling solution for portions of it. Google Cloud believes that a multi-cloud approach is in the best interest of government agencies, because it allows them to choose the right cloud for the right workload. At a time when new technology is constantly becoming available, customers should have the ability to take advantage of that innovation. We will continue to pursue strategic work to help state, local and federal customers modernize their infrastructure and meet their mission critical requirements."

This decision is also a result of thousands of Google employees protesting against the company's involvement in another US government project, named Project Maven. Earlier this year, some Google employees reportedly quit over the company's work on this project. Employees believed that the U.S. military could weaponize AI and apply the technology towards refining drone strikes and other kinds of lethal attacks. An internal petition asking Google CEO Sundar Pichai to cancel Project Maven was drafted and signed by over 3,000 employees. After this protest, Google said it would not renew the contract or pursue similar military contracts, and it went on to formulate its principles for the ethical use of AI.

You can read the full story on Bloomberg.

Bloomberg says Google, Mastercard covertly track customers' offline retail habits via a secret million dollar ad deal
Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan
Google slams Trump's accusations, asserts its search engine algorithms do not favor any political ideology

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

Melisha Dsouza
31 Oct 2018
3 min read
Xilinx Inc. has reportedly won orders from Microsoft Corp.'s Azure cloud unit to supply half of the co-processors currently used on Azure servers to handle machine-learning workloads, replacing chips made by Intel Corp., according to people familiar with Microsoft's plans, as reported by Bloomberg.

Microsoft's decision adds another chip supplier in order to serve more customers interested in machine learning. To date, this domain was served by Intel's Altera division. Now that Xilinx has bagged the deal, does this mean Intel will no longer serve Microsoft? Bloomberg reported Microsoft's confirmation that it will continue its relationship with Intel in its current offerings. A Microsoft spokesperson added that "there has been no change of sourcing for existing infrastructure and offerings". Sources familiar with the arrangement also noted that Xilinx chips will have to achieve performance goals to determine the scope of their deployment.

Cloud vendors these days are investing heavily in research and development centered on machine learning. The past few years have seen an increased need for flexible chips that can be configured to run machine-learning services. Companies like Microsoft, Google, and Amazon are massive buyers of server chips and are always looking for alternatives to standard processors to increase the efficiency of their data centers.

Holger Mueller, an analyst with Constellation Research Inc., told SiliconANGLE that "programmable chips are key to the success of infrastructure-as-a-service providers as they allow them to utilize existing CPU capacity better. They're also key enablers for next-generation application technologies like machine learning and artificial intelligence."

Earlier this year, Xilinx CEO Victor Peng made clear his plans to focus on data center customers, saying the "data center is an area of rapid technology adoption where customers can quickly take advantage of the orders of magnitude performance and performance per-watt improvement that Xilinx technology enables in applications like artificial intelligence (AI) inference, video and image processing, and genomics".

Last month, Xilinx made headlines with the announcement of a new breed of computer chips designed specifically for AI inference. These chips combine FPGAs with two higher-performance Arm processors, plus a dedicated AI compute engine, targeting the application of deep learning models in consumer and cloud environments. The chips promise higher throughput, lower latency, and greater power efficiency than existing hardware. It looks like Xilinx is taking noticeable steps to make itself seen in the AI market.

Head over to Bloomberg for the complete coverage of this news.

Microsoft Ignite 2018: New Azure announcements you need to know
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!

Vincy Davis
20 Jun 2019
5 min read
Update: On July 23rd, the Enhancements Lead of Kubernetes 1.15 at VMware, Kenny Coleman, published a "What's New in Kubernetes 1.15" video with the Cloud Native Computing Foundation (CNCF). In the video, he explains in detail the three major new features in Kubernetes 1.15: Dynamic HA Clusters with kubeadm, Volume Cloning, and CustomResourceDefinitions (CRDs), highlighting each feature and its importance to users. Watch the video below to learn more about Kenny Coleman's talk on Kubernetes 1.15.

https://www.youtube.com/watch?v=eq7dgHjPpzc

On June 19th, the Kubernetes team announced the release of Kubernetes 1.15, which consists of 25 enhancements: 2 moving to stable, 13 in beta, and 10 in alpha. The key themes of this release are extensibility around core Kubernetes APIs, cluster lifecycle stability, and usability improvements. This is Kubernetes' second release this year. The previous version, Kubernetes 1.14, released three months ago, had 10 stable enhancements - the most stable features shipped in a single release. In an interview with The New Stack, Claire Laurence, the release team lead for Kubernetes 1.15, said, "We've had a fair amount of features progress to beta. I think what we've been seeing a lot with these alpha and beta features as they progress is a lot of continued focus on stability and overall improvement before indicating that those features are stable."

Let's have a brief look at the new features and updates.

#1 Extensibility around core Kubernetes APIs

The theme of the new developments around CustomResourceDefinitions is data consistency and native behavior: a user should not notice whether the interaction is with a CustomResource or with a Golang-native resource. From v1.15 onwards, Kubernetes will check each schema against a restriction called "structural schema", which enforces non-polymorphic and complete typing of each field in a CustomResource.

Of the five enhancements in this area, 'CustomResourceDefinition Defaulting' is an alpha release. Defaults are specified using the default keyword in the OpenAPI validation schema, and defaulting will be available as alpha in Kubernetes 1.15 for structural schemas. (A sketch of a structural schema with a default follows below.)

The other four enhancements are in beta:

CustomResourceDefinition Webhook Conversion: CustomResourceDefinitions gain the ability to convert between different versions on the fly, just as users are used to from native resources.

CustomResourceDefinition OpenAPI Publishing: OpenAPI publishing for CRDs will be available with Kubernetes 1.15 as beta, but only for structural schemas.

CustomResourceDefinitions Pruning: Pruning is the automatic removal of unknown fields in objects sent to a Kubernetes API. A field is unknown if it is not specified in the OpenAPI validation schema. It enforces that only data structures specified by the CRD developer are persisted to etcd. This matches the behavior of native resources and will be available for CRDs as well, starting as beta in Kubernetes 1.15.

Admission Webhook Reinvocation and Improvements: In earlier versions, mutating webhooks were only called once, in alphabetical order, so an earlier webhook could not react to the output of webhooks called later in the chain. With Kubernetes 1.15, mutating webhooks can opt in to at least one re-invocation by specifying reinvocationPolicy: IfNeeded. If a later mutating webhook modifies the object, the earlier webhook gets a second chance.
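Here's the sketch referenced above: a CRD whose OpenAPI validation schema is structural (every field fully typed), with the new default keyword on a field (alpha and feature-gated in 1.15). It registers the CRD through the Kubernetes Python client of that era (apiextensions/v1beta1); the group and kind are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

crd = {
    "apiVersion": "apiextensions.k8s.io/v1beta1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "widgets.example.com"},  # hypothetical group/kind
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "widgets", "singular": "widget", "kind": "Widget"},
        "versions": [{"name": "v1", "served": True, "storage": True}],
        "validation": {
            "openAPIV3Schema": {
                "type": "object",
                "properties": {
                    "spec": {
                        "type": "object",
                        "properties": {
                            # Fully typed (structural), with a default the
                            # API server applies when the field is omitted.
                            "replicas": {"type": "integer", "default": 1},
                        },
                    },
                },
            },
        },
    },
}
client.ApiextensionsV1beta1Api().create_custom_resource_definition(crd)
```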
#2 Cluster lifecycle stability and usability improvements

The cluster lifecycle building block, kubeadm, continues to receive the feature and stability work needed for bootstrapping production clusters efficiently. kubeadm has promoted its high availability (HA) capability to beta, allowing users to use the familiar kubeadm init and kubeadm join commands to configure and deploy an HA control plane. Certificate management has also become more robust in 1.15, with kubeadm seamlessly rotating all certificates before expiry. The kubeadm configuration file API is moving from v1beta1 to v1beta2 in 1.15, and kubeadm now has its own new logo.

Continued improvement of CSI

In Kubernetes 1.15, the Storage Special Interest Group (SIG Storage) is enabling the migration of in-tree volume plugins to the Container Storage Interface (CSI). SIG Storage has worked on bringing CSI to feature parity with in-tree functionality, including capabilities like resizing and inline volumes. SIG Storage also introduces new alpha functionality in CSI that doesn't exist in the Kubernetes storage subsystem yet, like volume cloning. Volume cloning enables users to specify another PVC as a "DataSource" when provisioning a new volume. If the underlying storage system supports this functionality and implements the "CLONE_VOLUME" capability in its CSI driver, then the new volume becomes a clone of the source volume.

Additional feature updates

- Support for Go modules in Kubernetes core.
- Continued preparation for cloud provider extraction and code organization. The cloud provider code has been moved to kubernetes/legacy-cloud-providers for easier removal later and external consumption.
- kubectl get and describe now work with extensions.
- Nodes now support third-party monitoring plugins.
- A new scheduling framework for scheduler plugins is now alpha.
- The ExecutionHook API, designed to trigger hook commands in containers for different use cases, is now alpha.
- The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs continue to be deprecated and will eventually be retired in version 1.16.

To know about the additional features in detail, check out the release notes.

https://twitter.com/markdeneve/status/1141135440336039936
https://twitter.com/IanColdwater/status/1141485648412651520

For more details on Kubernetes 1.15, check out the Kubernetes blog.

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes

Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing, and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. A major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn, and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can run both in the cloud and on-premises. The service supports machine learning frameworks like PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS Deep Learning Containers support the TensorFlow and Apache MXNet frameworks. Google's ML containers don't support Apache MXNet, but they come with PyTorch, TensorFlow, scikit-learn, and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come with various tools for running deep learning algorithms. These include preconfigured Jupyter Notebooks - interactive tools used to work with and share code, visualizations, equations, and text - and Google Kubernetes Engine clusters for orchestrating multiple container deployments. The containers also come with access to packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images work in the cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm. Mike Cheng, software engineer at Google Cloud, said in a blog post, "If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime." He further added, "Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE)."

For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
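As a sketch of the local-prototyping workflow Cheng describes, here's how one of these images might be pulled and run with the docker Python SDK. The image path follows the gcr.io/deeplearning-platform-release repository, but the exact image name, tag, and JupyterLab port are assumptions to verify against the official docs.

```python
import docker

client = docker.from_env()

# Image name and tag are assumptions; check the Deep Learning Containers
# docs for the current list of TensorFlow, PyTorch, and R images.
image = "gcr.io/deeplearning-platform-release/base-cpu"

# The containers ship JupyterLab; mapping the port lets you prototype
# locally in the same environment you will later run on GKE or AI Platform.
container = client.containers.run(image, ports={"8080/tcp": 8080}, detach=True)
print(f"Started {container.short_id}; try http://localhost:8080")
```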