
Tech News - Cloud & Networking

376 Articles
Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!

Melisha Dsouza
22 Nov 2018
3 min read
On 19th November, Microsoft announced the first release candidate (RC) of Azure DevOps Server 2019. Azure DevOps Server 2019 includes the new, fast, and clean Azure DevOps user interface and delivers the codebase of Microsoft Azure DevOps while being optimized for customers who prefer to self-host.

Features of Azure DevOps Server 2019

In addition to existing SQL Server support, Azure DevOps Server now also supports Azure SQL. Customers can self-host Azure DevOps in their own datacenter using an on-premises SQL Server, or they can self-host Azure DevOps in the cloud and take advantage of Azure SQL capabilities and performance, like backup features and scaling options, while reducing the administrative overhead of running the service. Customers can also use the globally available Microsoft-hosted service to take advantage of automatic updates and automatic scaling of Azure DevOps Server.

Azure DevOps Server 2019 includes a new release management interface. Customers can easily understand how their deployment is taking place: it gives them better visibility of which bits are deployed to which environments and why. Customers can also mix and match agents self-hosted on-premises and in any cloud on Windows, Mac, or Linux while easily deploying to IaaS or PaaS in Azure as well as on-premises infrastructure.

A new navigation and improved user experience in Azure DevOps

A newly introduced feature is the 'my work flyout'. This feature was developed after feedback that when customers are in one part of the product and want information from another part, they don't want to lose the context of their current task. Customers can access this flyout from anywhere in the product, giving them a quick glance at crucial information like work items, pull requests, and all favorites.

For teams that use pull requests (PRs) and branch policies, there may be occasions when members need to override and bypass those policies. To help teams verify that those policy overrides are being used in the right situations, a new notification filter allows users and teams to receive email alerts any time a policy is bypassed.

The Tests tab now gives rich, in-context test information for Pipelines. It provides an in-progress test view, a full-page debugging experience, in-context test history, reporting of aborted test executions, and a run-level summary.

The UI has undergone significant testing, and the team suggests that for self-hosting customers the new navigation model may require updates to internal documentation and training. A direct upgrade to Azure DevOps Server is supported from Team Foundation Server 2012 and newer. Previous versions of Team Foundation Server will stay on the old user interface. Check the Azure DevOps Server requirements and compatibility page to understand the dependencies required for a self-hosted installation.

Head over to Microsoft's blog for more information on this news. You can download Azure DevOps Server 2019 RC1 and check out the release notes for all the features and information in this release.

A multi-factor authentication outage strikes Microsoft Office 365 and Azure users
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

Azure DevOps outage root cause analysis starring greedy threads and rogue scale units

Prasad Ramesh
19 Oct 2018
4 min read
Azure DevOps suffered several outages earlier this month, and Microsoft has published a root cause analysis of the incidents. This comes after the Azure cloud was taken down by severe weather last month.

Incidents on October 3, 4 and 8

It started on October 3 with a networking issue in the North Central US region lasting over an hour. It happened again the following day, lasting about an hour. On following up with the Azure networking team, it was found that there were no networking issues when the outages happened. Another incident happened on October 8. The team realized that something was fundamentally wrong and ran an analysis on telemetry, but the issue was still not found.

After the third incident, it was found that the thread count on the machines continued to rise, an indication that some activity was going on even with no load coming to the machine. All 1202 threads had the same call stack, the following being the key call:

Server.DistributedTaskResourceService.SetAgentOnline

Agent machines send a heartbeat signal to the service every minute to indicate they are online. If no signal is received from an agent for over a minute, it is marked offline and the agent needs to reconnect. The agent machines were marked offline in this case and, after retries, they eventually succeeded in reconnecting. On success, the agent was stored in an in-memory list. Potentially thousands of agents were reconnecting at a time.

In addition, recently adopted asynchronous call patterns gave threads a way to fill up with messages. The .NET message queue stores a queue of messages to process and maintains a thread pool; as a thread becomes available, it services the next message in the queue. The thread pool, in this case, was smaller than the queue. For N threads, N messages are processed simultaneously. When an async call is made, the same message queue is used, and a new message is queued to complete the async call in order to read the value. This call sits at the end of the queue while all the threads are occupied processing other messages. Hence, the call will not complete until the previous messages have completed, tying up one thread. The process comes to a standstill once N messages are being processed, where N equals the number of threads. At this point, the machine can no longer process requests, causing the load balancer to take it out of rotation. Hence the outage. An immediate fix was to conditionalize this code so no more async calls were made; this was possible because the pool providers feature isn't in effect yet.

Incident on October 10

On October 10, an incident with a 15-minute impact took place. The initial problem was a spike in slow response times from SPS, ultimately caused by problems in one of the databases. A Team Foundation Server (TFS) deployment put pressure on SPS, their authentication service. On deploying TFS, sets of scale units called deployment rings are also deployed. When the deployment for a scale unit completes, it puts extra pressure on SPS; there are built-in delays between scale units to accommodate the extra load. There is also sharding going on in SPS to break it into multiple scale units. These factors together tripped the circuit breakers in the database, which led to slow response times and failed calls. This was mitigated by manually recycling the unhealthy scale units.

For more details and the complete analysis, visit the Microsoft website.
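To make the sync-over-async pattern described above concrete, here is a minimal Python sketch, purely illustrative and not Microsoft's service code, in which every message handler blocks on a follow-up task queued to the same fixed-size pool. Once all N workers are blocked, the queued completions can never run and the pool stalls, mirroring the behavior in the incident.

```python
# Minimal sketch of thread-pool starvation (hypothetical, not the Azure DevOps code).
# Each worker submits a follow-up task to the SAME pool and blocks on its result.
# Once all N threads are blocked, the follow-up tasks can never run: the pool stalls.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

POOL_SIZE = 4
pool = ThreadPoolExecutor(max_workers=POOL_SIZE)

def finish_async_call(i):
    return f"agent {i} marked online"

def handle_message(i):
    # The "async" completion is queued behind every message already in the pool...
    follow_up = pool.submit(finish_async_call, i)
    # ...while this thread blocks waiting for it, tying up one pool thread.
    return follow_up.result(timeout=5)

futures = [pool.submit(handle_message, i) for i in range(POOL_SIZE)]
for f in futures:
    try:
        print(f.result(timeout=10))
    except TimeoutError:
        print("stalled: all threads blocked waiting on queued work")
```

With four workers and four blocking handlers, every call times out, which is the small-scale version of the standstill the analysis describes; removing the inner blocking wait (or running completions on a separate pool) avoids the starvation.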
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary
Is your Enterprise Measuring the Right DevOps Metrics?

#WeWontBuildIt: Amazon workers demand company to stop working with Palantir and take a stand against ICE

Fatema Patrawala
30 Jul 2019
4 min read
On Monday, a group of Amazon employees sent out an internal email to the We Won't Build It mailing list, calling on Amazon to stop working with Palantir. Palantir, a data analytics company founded by Peter Thiel, one of President Trump's most vocal supporters in Silicon Valley, has a strong association with Immigration and Customs Enforcement (ICE).

https://twitter.com/WeWontBuildIt/status/1155872860742664194

Last year in June, an alliance of more than 500 Amazon employees had signed a petition addressed to CEO Jeff Bezos and AWS head Andy Jassy, asking the company to abandon its contracts with government agencies. It seems that those protests are ramping up again.

The email sent to employee mailing lists within Amazon Web Services demanded that Palantir be removed from Amazon's cloud for violating its terms of service. It also called on Amazon to take a stand against ICE by making a statement establishing its position against immigration raids, deportations and camps for migrants at the border. The employees also demanded that Amazon stop selling its facial recognition tech to government agencies.

https://twitter.com/WeWontBuildIt/status/1155872862055485441

In May, Amazon shareholders had rejected a proposal to ban the sale of its facial recognition tech to governments. With this they also rejected eleven other proposals made by employees, including a climate resolution, salary transparency and other issues. "The world is watching the abuses in ICE's concentration camps unfold. We know that our company should, and can do better," the email read.

The protests broke out at Amazon's AWS Summit, held in New York last week on Thursday. As Amazon CTO Werner Vogels gave a presentation, a group led by a man identified in a tweet as a tech worker interrupted to protest Amazon's ties with ICE.

https://twitter.com/altochulo/status/1149305189800775680

https://twitter.com/MaketheRoadNY/status/1149306940377448449

Vogels was caught off guard by the protests but continued on about the specifics of AWS, according to ZDNet. "I'm more than willing to have a conversation, but maybe they should let me finish first," Vogels said amidst protesters, whose audio was cut off on Amazon's official livestream of the event, per ZDNet. "We'll all get our voices heard," he said before returning to his planned speech.

According to Business Insider reports, Palantir has a $51 million contract with ICE, which entails providing software to gather data on undocumented immigrants' employment information, phone records, immigration history and similar information. Its software is hosted in the AWS cloud. The email states that Palantir enables ICE to violate the rights of others and that working with such a company is harmful to Amazon's reputation.

The employees also state that their protest is in the spirit of similar actions at companies including Wayfair, Microsoft and Salesforce, where workers have protested to get their employers to cut ties with ICE and US Customs and Border Protection (CBP).

Amazon has been facing increasing pressure from its employees. Last week, workers protested on Amazon Prime Day demanding safe working conditions and fair wages. Amazon, which typically takes a cursory view of such employee outcry, has so far given no indication that it will reconsider providing services to Palantir and other law enforcement agencies. Instead the company argued that the government should determine what constitutes "acceptable use" of the technology it sells.
"As we've said many times and continue to believe strongly, companies and government organizations need to use existing and new technology responsibly and lawfully," Amazon said to BuzzFeed News. "There is clearly a need for more clarity from governments on what is acceptable use of AI and ramifications for its misuse, and we've provided a proposed legislative framework for this. We remain eager for the government to provide this additional clarity and legislation, and will continue to offer our ideas and specific suggestions."

Other tech worker groups like Google Walkout For Real Change and Ban Google for Pride stand in solidarity with Amazon workers on this protest.

https://twitter.com/GoogleWalkout/status/1155976287803998210

https://twitter.com/NoPrideForGoog/status/1155906615930806276

#TechWontBuildIt: Entropic maintainer calls for a ban on Palantir employees contributing to the project and asks other open source communities to take a stand on ethical grounds
Amazon workers protest on its Prime day, demand a safe work environment and fair wages
Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact

Zone Redundancy for Azure Cache for Redis now in preview from Microsoft Azure Blog > Announcements

Matthew Emerick
14 Oct 2020
3 min read
Between waves of pandemics, hurricanes, and wildfires, you don't need cloud infrastructure adding to your list of worries this year. Fortunately, there has never been a better time to ensure your Azure deployments stay resilient. Availability zones are one of the best ways to mitigate risks from outages and disasters. With that in mind, we are announcing the preview for zone redundancy in Azure Cache for Redis.

Availability Zones on Azure

Azure Availability Zones are geographically isolated datacenter locations within an Azure region, providing redundant power, cooling, and networking. By maintaining a physically separate set of resources with the low latency from remaining in the same region, Azure Availability Zones provide a high availability solution that is crucial for businesses requiring resiliency and business continuity.

Redundancy options in Azure Cache for Redis

Azure Cache for Redis is increasingly becoming critical to our customers' data infrastructure. As a fully managed service, Azure Cache for Redis provides various high availability options. By default, caches in the standard or premium tier have built-in replication with a two-node configuration—a primary and a replica hosting two identical copies of your data. New in preview, Azure Cache for Redis can now support up to four nodes in a cache distributed across multiple availability zones. This update can significantly enhance the availability of your Azure Cache for Redis instance, giving you greater peace of mind and hardening your data architecture against unexpected disruption.

High Availability for Azure Cache for Redis

The new redundancy features deliver better reliability and resiliency. First, this update expands the total number of replicas you can create. You can now implement up to three replica nodes in addition to the primary node. Having more replicas generally improves resiliency (even if they are in the same availability zone) because of the additional nodes backing up the primary. Even with more replicas, a datacenter-wide outage can still disrupt your application. That's why we're also enabling zone redundancy, allowing replicas to be located in different availability zones. Replica nodes can be placed in one or multiple availability zones, with failover automatically occurring if needed across availability zones. With zone redundancy, your cache can handle situations where the primary zone is knocked offline due to issues like floods, power outages, or even natural disasters. This increases availability while maintaining the low latency required from a cache. Zone redundancy is currently only available on the premium tier of Azure Cache for Redis, but it will also be available on the enterprise and enterprise flash tiers when the preview is released.

Industry-leading service level agreement

Azure Cache for Redis already offers an industry-standard 99.9 percent service level agreement (SLA). With the addition of zone redundancy, the availability increases to a 99.95 percent level, allowing you to meet your availability needs while keeping your application nimble and scalable. Adding zone redundancy to Azure Cache for Redis is a great way to promote availability and peace of mind during turbulent situations. Learn more in our documentation and give it a try today. If you have any questions or feedback, please contact us at AzureCache@microsoft.com.
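From a client's point of view, a zone failover looks like a brief connection drop on the same endpoint, so applications mostly just need a reconnect and retry path. Below is a minimal sketch using the redis-py library; the host name and access key are placeholders, and this is an illustration rather than Microsoft's recommended client configuration.

```python
# Hypothetical client-side view of a zone-redundant cache: on failover the
# endpoint stays the same, so a simple retry loop with backoff is usually enough.
# Host name and access key below are placeholders, not real credentials.
import time
import redis

cache = redis.Redis(
    host="mycache.redis.cache.windows.net",  # placeholder endpoint
    port=6380,
    ssl=True,
    password="<access-key>",
)

def get_with_retry(key, attempts=3):
    for attempt in range(attempts):
        try:
            return cache.get(key)
        except redis.ConnectionError:
            # Brief pause while a replica in another zone is promoted to primary.
            time.sleep(2 ** attempt)
    raise RuntimeError("cache unavailable after retries")
```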

Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Richard Gall
28 Sep 2018
3 min read
Gremlin, the chaos engineering platform, has revealed some exciting news today to coincide with the very first chaos engineering conference - Chaos Conf. Not only has the company raised $18 million in its Series B funding round, it has also launched a brand new feature. Application Level Fault Injection - ALFI - brings a whole new dimension to the Gremlin platform, as it will allow engineering teams to run resiliency tests - or 'chaos experiments' - at an application level. Up until now, tests could only be run at the infrastructure level, targeting a specific host or container (although containers are only a recent addition).

Bringing chaos engineering to serverless applications

One of the benefits of ALFI is that it will make it possible to run 'attacks' on serverless applications. Citing Cloudability's State of the Cloud 2018 report, the press release highlights that serverless adoption is growing rapidly. This means that Gremlin will now be able to expand its use cases and continue to move forward in its broader mission to help engineering teams improve the resiliency of their software in a manageable and accessible way.

Matt Fornaciari, Gremlin CTO and co-founder, said: "With ALFI one of the first problems we wanted to address was improving the reliability and understanding of serverless providers like AWS Lambda and Azure Functions. It's a tough problem to solve because the host is abstracted and it's a very new technology -- but now we can run attacks at the application level, and with a level of precision that isn't possible at the infrastructure level. We are giving them a scalpel to very specifically target particular communications between different systems and services."

One of the great benefits of ALFI is that it should help engineers tackle different types of threats that might be missed if you simply focus on infrastructure. Yan Cui, Principal Engineer at DAZN, the sports streaming service, explained: "AWS Lambda protects you against some infrastructure failures, but you still need to defend against weakness in your own code. Application-level fault injection is a great way to uncover these weaknesses."

A new chapter for Gremlin and a big step forward for chaos engineering

It would seem that Gremlin is about to embark on a new chapter. But what will be even more interesting is the wider impact chaos engineering has on the industry. Research, such as this year's Packt Skill Up survey, indicates that chaos engineering is a trend that is still in an emergent phase. If Gremlin can develop a product that not only makes chaos engineering relatively accessible but also palatable for those making technical decisions, we might start to see things changing. It's clear that Redpoint Ventures, the VC firm leading Gremlin's Series B funding, sees a lot of potential in what the platform can offer the software landscape. Managing Director Tomasz Tuguz said: "In a world where nearly every business is an online business, Gremlin makes companies more resilient and saves millions of dollars in unnecessary disasters and outages. We're thrilled to join them on this journey."
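Gremlin's ALFI SDK is not shown in this article, but the core idea of application-level fault injection can be sketched in a few lines of Python: wrap a specific call site so that a configurable fraction of calls fails or slows down instead of running normally. The following is purely illustrative and is not Gremlin's actual API.

```python
# Illustrative application-level fault injection (not Gremlin's actual SDK).
# A decorator randomly injects latency or errors into a specific call site,
# the kind of precision that infrastructure-level attacks cannot give you.
import functools
import random
import time

def inject_fault(error_rate=0.1, extra_latency=0.5):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < error_rate:
                raise TimeoutError("injected fault: simulated downstream timeout")
            time.sleep(extra_latency)  # simulated slow dependency
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_fault(error_rate=0.2, extra_latency=0.3)
def fetch_user_profile(user_id):
    # Hypothetical application call under test.
    return {"id": user_id, "name": "example"}

if __name__ == "__main__":
    for i in range(5):
        try:
            print(fetch_user_profile(i))
        except TimeoutError as exc:
            print("handled:", exc)
```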

Microsoft announces Azure DevOps bounty program

Prasad Ramesh
18 Jan 2019
2 min read
Yesterday, the Microsoft Security Response Center (MSRC) announced the launch of the Azure DevOps Bounty program, a program launched to strengthen the security provided to Azure DevOps customers. Microsoft is offering rewards of up to US$20,000 for eligible vulnerabilities found in Azure DevOps online services and Azure DevOps Server.

The bounty rewards range from $500 to $20,000 US. The reward will depend on Microsoft's assessment of the severity and impact of a vulnerability, as well as on the quality of the submission, subject to their bounty terms and conditions. The products in focus for this program are Azure DevOps Services, previously known as Visual Studio Team Services, and the latest versions of Azure DevOps Server and Team Foundation Server. The goal of the program is to find any eligible vulnerabilities that may have a direct security impact on the customer base.

For a submission to be eligible, it should fulfil the following criteria: it identifies a previously unreported vulnerability in one of the services or products; any web application vulnerabilities must impact supported browsers for Azure DevOps Server, services, or plug-ins; and the submission should include documented steps that are clear and reproducible, either as text or video. Any additional information that helps quickly reproduce and understand the issue can result in a faster response and higher rewards. Submissions that Microsoft deems ineligible under these criteria may be rejected.

You can send your submissions to secure@microsoft.com, following the bug submission guidelines. Participants are requested to use Coordinated Vulnerability Disclosure when reporting vulnerabilities. Note that there are no restrictions on how many vulnerabilities you can report or on the rewards for them; when there are duplicate submissions, the first one received will be chosen for the reward.

For more details about the eligible vulnerabilities and the Microsoft Azure DevOps Bounty program, visit the Microsoft website.

8 ways Artificial Intelligence can improve DevOps
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft open sources Trill, a streaming engine that employs algorithms to process "a trillion events per day"
Microsoft’s Immutable storage for Azure Storage Blobs, now generally available

Melisha Dsouza
21 Sep 2018
3 min read
Microsoft's new "immutable storage" feature for Azure Storage Blobs is now generally available. Financial services organizations regulated by the Securities and Exchange Commission (SEC), Commodity Futures Trading Commission (CFTC), Financial Industry Regulatory Authority (FINRA), and others are required to retain business-related communications in a Write-Once-Read-Many (WORM) or immutable state. This ensures that the data is non-erasable and non-modifiable for a specific retention interval. Healthcare, insurance, media, public safety, and legal services industries will also benefit a great deal from this feature.

Through configurable policies, users can only create and read blobs, not modify or delete them. There is no additional charge for using this feature; immutable data is priced in the same way as mutable data.

Read Also: Microsoft introduces 'Immutable Blob Storage', a highly protected object storage for Azure

The capabilities that accompany this feature are:

#1 Regulatory compliance: Immutable storage for Azure Blobs will help financial institutions and related industries to store data immutably. Microsoft will soon release a technical white paper with details on how the feature addresses regulatory requirements. Head over to the Azure Trust Center for detailed information about compliance certifications.

#2 Secure document retention: The immutable storage feature for the Azure Blobs service ensures that data cannot be modified or deleted by any user, even one with administrative privileges.

#3 Better legal hold: Users can now store sensitive information related to litigation, criminal investigations, and more in a tamper-proof state for the desired duration.

#4 Time-based retention policy support: Users can set policies to store data immutably for a specified interval of time.

#5 Legal hold policy support: When users do not know the data retention time, they can set legal holds to store data immutably until the legal hold is cleared.

#6 Support for all blob tiers: WORM policies are independent of the Azure Blob Storage tier and apply to all tiers. Customers can therefore store their data immutably in the most cost-optimized tier for their workloads.

#7 Blob container level configuration: Users can configure time-based retention policies and legal hold tags at the container level. Simple container-level settings can create time-based retention policies, lock policies, extend retention intervals, set legal holds, clear legal holds, and so on.

17a-4 LLC, Commvault, HubStor, and Archive2Azure are among the Microsoft partners that support Azure Blob immutable storage. To learn how to upgrade to this feature, head over to the Microsoft blog.

Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

What's new in Docker Enterprise Edition 2.0?

Gebin George
18 Apr 2018
3 min read
Docker Enterprise Edition 2.0 was released yesterday. The major focus of this new release (and the platform as a whole) is speeding up multi-cloud initiatives and automating the application delivery model, which goes hand in hand with the DevOps and Agile philosophy. Docker has become an important tool for businesses in a very short space of time. With Docker EE 2.0, it looks like Docker will consolidate its position as the go-to containerization tool for enterprise organizations.

Key features of Docker Enterprise Edition 2.0

Let's look at some of the key capabilities included in the Docker EE 2.0 release.

Docker EE 2.0 is incredibly flexible

Flexibility is one of the biggest assets of Docker Enterprise Edition, as today's software delivery ecosystem demands freedom of choice. Organizations that build applications on different platforms, using a varied set of tools, deploying to different infrastructures and running on different platforms, require a huge amount of flexibility. Docker EE addresses this concern with the following capabilities:

Multi-Linux, Multi-OS, Multi-Cloud: Many organizations have adopted a hybrid-cloud or multi-cloud strategy and build applications on different operating systems. Docker EE is supported across the popular operating systems, including the major Linux distributions and Windows Server, and also on popular public clouds, enabling users to deploy applications flexibly, wherever required.

Docker EE 2.0 is interoperable with Docker Swarm and Kubernetes: Container orchestration forms the core of DevOps, and the entire container ecosystem revolves around Swarm or Kubernetes. Docker EE allows flexibility in switching between these tools for application deployment and orchestration. Applications deployed on Swarm today can be easily migrated to Kubernetes using the same Compose file, making the life of developers simpler.

Accelerating agile with Docker Enterprise Edition 2.0

Docker EE focuses on monitoring and managing containers to a much greater extent than the open source version of Docker. The Enterprise Edition has a specialized management and monitoring platform for looking after Kubernetes clusters and also provides access to the Kubernetes API, CLI, and interfaces.

Cluster management made simple: Easy-to-use cluster management services, basic single-line commands for adding clusters, high availability of the management plane, access to consoles and logs, and secure configurations.

Secure application zones: With swift integration with corporate LDAP and Active Directory systems, a single cluster can be divided logically and physically between different teams. This seems to be the most convenient way to assign new namespaces to Kubernetes clusters.

Layer 7 routing for Swarm: The new Interlock 2.0 architecture provides new and optimized enhancements for network routing in Swarm. For more information on the Interlock architecture, refer to the official Docker blog.

Kubernetes: All the core components of the Kubernetes environment, like the APIs and CLIs, are available to users in a CNCF-conformant Kubernetes stack.

There are a few more enhancements related to the supply chain and security domains. For the complete set of improvements to Docker, check out the official Docker EE documentation.

Google Cloud releases a beta version of SparkR job types in Cloud Dataproc

Natasha Mathur
21 Dec 2018
2 min read
Earlier this week, Google released a beta version of SparkR job types on Cloud Dataproc, a cloud service that lets you run Apache Spark and Apache Hadoop in a cost-effective manner. SparkR jobs build out R support on GCP. SparkR is a package that delivers a lightweight front end for using Apache Spark from R. It supports distributed machine learning using MLlib, and it can be used to process large Cloud Storage datasets and to perform computationally intensive work. The package also allows developers to use dplyr-like operations (dplyr is a powerful R package for transforming and summarizing tabular data with rows and columns) on datasets stored in Cloud Storage.

The R programming language is very efficient when it comes to building data analysis tools and statistical apps. With cloud computing all the rage, even newer opportunities have opened up for developers working with R. Using GCP's Cloud Dataproc Jobs API, it becomes easier to submit SparkR jobs to a cluster without any need to open firewalls for accessing web-based IDEs or to SSH onto the master node. With the API, it is easy to automate the repeatable R statistics that users want to run on their datasets; a sketch of such a submission follows at the end of this article.

Additionally, GCP for R helps avoid the infrastructure barriers that put a limit on understanding data, such as having to sample datasets due to compute or data size limits. GCP also allows you to build large-scale models that help analyze datasets of sizes that would previously have required big investments in high-performance computing infrastructure.

For more information, check out the official Google Cloud blog post.

Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Google Cloud's Titan and Android Pie come together to secure users' data on mobile devices
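As an illustration of the Jobs API path mentioned above, here is a sketch of submitting a SparkR job with the google-cloud-dataproc Python client. The project, region, cluster, and R script URI are placeholders, and the exact request shape can differ between client library versions, so treat this as an assumption to check against the current Dataproc documentation.

```python
# Sketch: submitting a SparkR job through the Dataproc Jobs API using the
# google-cloud-dataproc client library. Project, region, cluster and the R
# script path are placeholders; verify the call shape against current docs.
from google.cloud import dataproc_v1

project_id, region, cluster = "my-project", "us-central1", "my-cluster"

client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": cluster},
    # SparkR job type: point at an R script stored in Cloud Storage.
    "spark_r_job": {"main_r_file_uri": "gs://my-bucket/analysis.R"},
}

submitted = client.submit_job(
    request={"project_id": project_id, "region": region, "job": job}
)
print("Submitted job:", submitted.reference.job_id)
```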

Google opts out of Pentagon’s $10 billion JEDI cloud computing contract, as it doesn’t align with its ethical use of AI principles

Bhagyashree R
09 Oct 2018
3 min read
Yesterday, Google announced that it will not be competing for the Pentagon's cloud-computing contract, which is supposedly worth $10 billion. Google opted out of bidding for the project, named Joint Enterprise Defense Infrastructure (JEDI), saying the project may conflict with its principles for the ethical use of AI.

The JEDI project involves moving massive amounts of Pentagon internal data to a commercially operated secure cloud system. The bidding for this contract began two months ago and closes this week (12th October). CNBC reported in July that Amazon is considered the number-one choice for the contract because it already provides services for the cloud system used by U.S. intelligence agencies. Cloud providers such as IBM, Microsoft, and Oracle are also top contenders, as they have worked with government agencies for many decades, which could help their chances of winning the decade-long JEDI contract.

Why has Google dropped out of this bidding?

One of Google's spokespersons told TechCrunch that the main reason for opting out of the bidding is that it doesn't align with their AI principles:

"While we are working to support the US government with our cloud in many areas, we are not bidding on the JEDI contract because first, we couldn't be assured that it would align with our AI Principles and second, we determined that there were portions of the contract that were out of scope with our current government certifications."

He further added:

"Had the JEDI contract been open to multiple vendors, we would have submitted a compelling solution for portions of it. Google Cloud believes that a multi-cloud approach is in the best interest of government agencies, because it allows them to choose the right cloud for the right workload. At a time when new technology is constantly becoming available, customers should have the ability to take advantage of that innovation. We will continue to pursue strategic work to help state, local and federal customers modernize their infrastructure and meet their mission critical requirements."

This decision is also a result of thousands of Google employees protesting against the company's involvement in another US government project, named Project Maven. Earlier this year, some Google employees reportedly quit over the company's work on this project. Employees believed that the U.S. military could weaponize AI and apply the technology towards refining drone strikes and other kinds of lethal attacks. An internal petition asking Google CEO Sundar Pichai to cancel Project Maven was drafted and signed by over 3,000 employees. After this protest, Google said it would not renew the contract or pursue similar military contracts, and it went on to formulate its principles for the ethical use of AI.

You can read the full story on Bloomberg.

Bloomberg says Google, Mastercard covertly track customers' offline retail habits via a secret million dollar ad deal
Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan
Google slams Trump's accusations, asserts its search engine algorithms do not favor any political ideology
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

Melisha Dsouza
31 Oct 2018
3 min read
Xilinx Inc. has reportedly won orders from Microsoft Corp.'s Azure cloud unit to supply half of the co-processors currently used on Azure servers to handle machine-learning workloads, replacing chips made by Intel Corp., according to people familiar with Microsoft's plans, as reported by Bloomberg.

Microsoft's decision effectively adds another chip supplier in order to serve more customers interested in machine learning. To date, this domain was served by Intel's Altera division. Now that Xilinx has bagged the deal, does this mean Intel will no longer serve Microsoft? Bloomberg reported Microsoft's confirmation that it will continue its relationship with Intel in its current offerings. A Microsoft spokesperson added that "There has been no change of sourcing for existing infrastructure and offerings". Sources familiar with the arrangement also commented that Xilinx chips will have to achieve performance goals to determine the scope of their deployment.

Cloud vendors these days are investing heavily in research and development centering on machine learning. The past few years have seen an increasing need for flexible chips that can be configured to run machine-learning services. Companies like Microsoft, Google and Amazon are massive buyers of server chips and are always looking for alternatives to standard processors to increase the efficiency of their data centers.

Holger Mueller, an analyst with Constellation Research Inc., told SiliconANGLE that "Programmable chips are key to the success of infrastructure-as-a-service providers as they allow them to utilize existing CPU capacity better. They're also key enablers for next-generation application technologies like machine learning and artificial intelligence."

Earlier this year, Xilinx CEO Victor Peng made clear his plans to focus on data center customers, saying "data center is an area of rapid technology adoption where customers can quickly take advantage of the orders of magnitude performance and performance per-watt improvement that Xilinx technology enables in applications like artificial intelligence (AI) inference, video and image processing, and genomics".

Last month, Xilinx made headlines with the announcement of a new breed of computer chips designed specifically for AI inference. These chips combine FPGAs with two higher-performance Arm processors, plus a dedicated AI compute engine, and target the application of deep learning models in consumer and cloud environments. The chips promise higher throughput, lower latency and greater power efficiency than existing hardware. It looks like Xilinx is taking noticeable steps to make itself seen in the AI market.

Head over to Bloomberg for the complete coverage of this news.

Microsoft Ignite 2018: New Azure announcements you need to know
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!

Vincy Davis
20 Jun 2019
5 min read
Update: On July 23rd, the Enhancements Lead of Kubernetes 1.15 at VMware, Kenny Coleman, published a "What's New in Kubernetes 1.15" video with the Cloud Native Computing Foundation (CNCF). In the video, he explains in detail the three major new features in Kubernetes 1.15: Dynamic HA Clusters with kubeadm, Volume Cloning, and CustomResourceDefinitions (CRDs). Coleman highlights each feature and explains its importance to users. Watch the video below to hear Kenny Coleman's talk about Kubernetes 1.15.

https://www.youtube.com/watch?v=eq7dgHjPpzc

On June 19th, the Kubernetes team announced the release of Kubernetes 1.15, which consists of 25 enhancements, including 2 moving to stable, 13 in beta, and 10 in alpha. The key features of this release include extensibility around core Kubernetes APIs, cluster lifecycle stability, and usability improvements. This is Kubernetes' second release this year. The previous version, Kubernetes 1.14, released three months ago, had 10 stable enhancements--the most stable features delivered in a single release. In an interview with The New Stack, Claire Laurence, the Kubernetes 1.15 release team lead, said that in this release, "We've had a fair amount of features progress to beta. I think what we've been seeing a lot with these alpha and beta features as they progress is a lot of continued focus on stability and overall improvement before indicating that those features are stable."

Let's have a brief look at all the new features and updates.

#1 Extensibility around core Kubernetes APIs

The theme of the new developments around CustomResourceDefinitions is data consistency and native behavior. The Kubernetes team wants users not to notice whether they are interacting with a CustomResource or with a Golang-native resource. Hence, from v1.15 onwards, Kubernetes will check each schema against a restriction called a "structural schema", which enforces non-polymorphic and complete typing of each field in a CustomResource. Of the five enhancements in this area, 'CustomResourceDefinition Defaulting' is an alpha release. Defaulting is specified using the default keyword in the OpenAPI validation schema and will be available as alpha in Kubernetes 1.15 for structural schemas (a sketch of such a schema follows below). The other four enhancements are in beta:

CustomResourceDefinition Webhook Conversion: CustomResourceDefinitions gain the ability to convert between different versions on the fly, just as users are used to from native resources.

CustomResourceDefinition OpenAPI Publishing: OpenAPI publishing for CRDs will be available with Kubernetes 1.15 as beta, but only for structural schemas.

CustomResourceDefinitions Pruning: Pruning is the automatic removal of unknown fields in objects sent to a Kubernetes API. A field is unknown if it is not specified in the OpenAPI validation schema. Pruning enforces that only data structures specified by the CRD developer are persisted to etcd. This is the behaviour of native resources, and it will be available for CRDs as well, starting as beta in Kubernetes 1.15.

Admission Webhook Reinvocation & Improvements: In earlier versions, mutating webhooks were only called once, in alphabetical order, so an earlier webhook could not react to the output of webhooks called later in the chain. With Kubernetes 1.15, mutating webhooks can opt in to at least one re-invocation by specifying reinvocationPolicy: IfNeeded. If a later mutating webhook modifies the object, the earlier webhook will get a second chance.
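To illustrate what a structural schema with defaulting looks like, here is a hypothetical CRD validation schema expressed as a Python dict (the same structure you would write under the CRD manifest's schema field in YAML). Every property carries an explicit type, and the default keyword supplies values for omitted fields at admission time.

```python
# Hypothetical example of a structural OpenAPI v3 schema for a CRD, expressed
# as a Python dict (the same shape you would put in the CRD's YAML manifest).
# Every property has an explicit type, and `default` is applied on admission.
crd_open_api_v3_schema = {
    "type": "object",
    "properties": {
        "spec": {
            "type": "object",
            "properties": {
                "replicas": {"type": "integer", "default": 1},
                "image": {"type": "string"},
                "paused": {"type": "boolean", "default": False},
            },
            "required": ["image"],
        },
    },
}
```

Because every field is fully typed and no polymorphism is used, this schema qualifies as structural, which is the prerequisite for defaulting, pruning, and OpenAPI publishing described above.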
#2 Cluster Lifecycle Stability and Usability Improvements

The cluster lifecycle building block, kubeadm, continues to receive features and stability work, which is needed for bootstrapping production clusters efficiently. kubeadm has promoted high availability (HA) capability to beta, allowing users to use the familiar kubeadm init and kubeadm join commands to configure and deploy an HA control plane. With kubeadm, certificate management has become more robust in 1.15, as it seamlessly rotates all certificates before expiry. The kubeadm configuration file API is moving from v1beta1 to v1beta2 in 1.15. kubeadm also now has its own new logo.

Continued Improvement of CSI

In Kubernetes 1.15, the Storage Special Interest Group (SIG Storage) enables migration of in-tree volume plugins to the Container Storage Interface (CSI). SIG Storage worked on bringing CSI to feature parity with in-tree functionality, including functionality like resizing and inline volumes. SIG Storage also introduces new alpha functionality in CSI that doesn't exist in the Kubernetes storage subsystem yet, like volume cloning. Volume cloning enables users to specify another PVC as a "DataSource" when provisioning a new volume. If the underlying storage system supports this functionality and implements the "CLONE_VOLUME" capability in its CSI driver, then the new volume becomes a clone of the source volume.

Additional feature updates

Support for go modules in Kubernetes core. Continued preparation for cloud provider extraction and code organization; the cloud provider code has been moved to kubernetes/legacy-cloud-providers for easier removal later and for external consumption. Kubectl get and describe now work with extensions. Nodes now support third-party monitoring plugins. A new scheduling framework for scheduler plugins is now alpha. The ExecutionHook API, designed to trigger hook commands in containers for different use cases, is now alpha. The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs continue to be deprecated and will eventually be retired in the next version, 1.16.

To know about the additional features in detail, check out the release notes.

https://twitter.com/markdeneve/status/1141135440336039936

https://twitter.com/IanColdwater/status/1141485648412651520

For more details on Kubernetes 1.15, check out the Kubernetes blog.

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes

Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more

Amrata Joshi
19 Dec 2018
2 min read
Yesterday, the team at Oracle released VirtualBox 6.0.0, a free and open-source hosted hypervisor for x86 computers. VirtualBox was initially developed by Innotek GmbH, which was acquired by Sun Microsystems in 2008 and then by Oracle in 2010. VirtualBox is a virtualization product for enterprise as well as home use, and an extremely feature-rich, high-performance product for enterprise customers.

Features of VirtualBox 6.0.0

User interface: VirtualBox 6.0.0 comes with greatly improved HiDPI and scaling support, including better detection and per-machine configuration. The user interface is simpler and more powerful. It also comes with a new file manager that enables users to control the guest file system and copy files between host and guest.

Graphics: VirtualBox 6.0.0 features 3D graphics support for Windows guests, and VMSVGA 3D graphics device emulation on Linux and Solaris guests. It comes with added support for surround speaker setups, and adds the vboximg-mount utility on Apple hosts for accessing the content of guest disks on the host. VirtualBox 6.0.0 also adds support for using Hyper-V as a fallback execution core on Windows hosts, avoiding the inability to run VMs at the price of reduced performance. This release supports exporting a virtual machine to Oracle Cloud Infrastructure and comes with better application and virtual machine set-up.

Linux guests: This release now supports Linux 4.20 and VMSVGA. The process of building vboxvideo on the EL 7.6 standard kernel has been improved with this release.

Other features: Support for DHCP options; initial support for macOS guests; it is now possible to configure up to four custom ACPI tables for a VM; video and audio recordings can be enabled separately; and there is better support for attaching and detaching remote desktop connections.

Major bug fixes: In the previous release, a wrong instruction was executed after a single-step exception with rdtsc; this issue has been resolved. This release also brings improved audio/video recording, fixes for serial port emulation, a fix for the resizing issue with disk images, improved shared folder auto-mounting, and fixes for BIOS issues.

Read more about this news in VirtualBox's changelog.

Installation of Oracle VM VirtualBox on Linux
Setting up a Joomla Web Server using Virtualbox, TurnkeyLinux and DynDNS
How to Install VirtualBox Guest Additions
Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing, and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can run both in the cloud and on-premises. They support machine learning frameworks like PyTorch, TensorFlow 2.0, and TensorFlow 1.13. Deep Learning Containers by AWS support the TensorFlow and Apache MXNet frameworks. Google's ML containers don't support Apache MXNet, but they come with PyTorch, TensorFlow, scikit-learn and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come along with various tools used for running deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations and text, as well as Google Kubernetes Engine clusters, used for orchestrating multiple container deployments. The containers also come with access to packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images work in the cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm; a sketch of running one locally follows at the end of this article. Mike Cheng, software engineer at Google Cloud, said in a blog post, "If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime." He further added, "Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE)."

For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
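Locally, the Deep Learning Containers images are ordinary Docker images. The sketch below pulls and runs one with the Docker SDK for Python; the image path and the port JupyterLab listens on are assumptions, so verify them against the current Deep Learning Containers documentation before using this.

```python
# Sketch: running a Deep Learning Container image locally with the Docker SDK
# for Python. The image path and the JupyterLab port are assumptions taken
# from Google's documentation conventions; check the current docs for the
# exact image names and tags.
import docker

client = docker.from_env()
container = client.containers.run(
    "gcr.io/deeplearning-platform-release/tf-cpu",  # assumed image path
    ports={"8080/tcp": 8080},  # JupyterLab is assumed to be served on 8080
    detach=True,
)
print("container id:", container.short_id)
```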

389 Directory Server set to replace OpenLDAP as Red Hat and SUSE withdraw support for OpenLDAP in their Enterprise Linux offerings

Bhagyashree R
29 Aug 2018
2 min read
Red Hat and SUSE have withdrawn support for OpenLDAP in their Enterprise Linux offerings; it will be replaced by Red Hat's own 389 Directory Server. The openldap-server packages were deprecated starting from Red Hat Enterprise Linux (RHEL) 7.4, and will not be included in any future major release of RHEL. SUSE, in their release notes, has mentioned that the OpenLDAP server is still available in the Legacy Module for migration purposes, but it will not be maintained for the entire SUSE Linux Enterprise Server (SLE) 15 lifecycle.

What is OpenLDAP?

OpenLDAP is an open source implementation of the Lightweight Directory Access Protocol (LDAP) developed by the OpenLDAP Project. It is a collective effort to develop a robust, commercial-grade, open source LDAP suite of applications and development tools.

What is 389 Directory Server?

389 Directory Server is an LDAP server developed by Red Hat as part of Red Hat's community-supported Fedora Project. The name "389" comes from the port number used by LDAP. It supports many operating systems including Fedora, Red Hat Enterprise Linux 3 and above, Debian, and Solaris 8 and above. The 389 Directory Server packages provide the core directory services components for Identity Management (IdM) in Red Hat Enterprise Linux and the Red Hat Directory Server (RHDS). The package is not supported as a stand-alone solution to provide LDAP services.

Why did Red Hat and SUSE withdraw their support?

According to Red Hat, customers prefer the Identity Management (IdM) in Red Hat Enterprise Linux solution over the OpenLDAP server for enterprise use cases. This is why they decided to focus on the technologies that Red Hat historically has had deep understanding of and expertise in, and has been investing in for more than a decade. By focusing on the Red Hat Directory Server and IdM offerings, Red Hat will be able to better serve the customers of those solutions and increase the value of the subscription. Since both servers speak the same LDAP protocol, existing client code is unaffected by the change; a short sketch follows at the end of this article.

To know more about Red Hat and SUSE withdrawing their support for OpenLDAP, check out Red Hat's announcement and the SUSE release notes.

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
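As noted above, the migration changes the server implementation, not the protocol, so existing client code keeps working. Here is a minimal sketch of an LDAP search using the Python ldap3 library; the host, bind DN, base DN, and password are placeholders.

```python
# Minimal LDAP search with the ldap3 library. The protocol is the same whether
# the server behind it is OpenLDAP or 389 Directory Server, so client code like
# this is unaffected by the migration. Host, DNs and password are placeholders.
from ldap3 import ALL, Connection, Server

server = Server("ldap://directory.example.com", get_info=ALL)
conn = Connection(
    server,
    user="cn=Directory Manager",   # placeholder bind DN
    password="<password>",         # placeholder credential
    auto_bind=True,
)

conn.search(
    search_base="dc=example,dc=com",
    search_filter="(objectClass=person)",
    attributes=["cn", "mail"],
)
for entry in conn.entries:
    print(entry.entry_dn)
```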