
How-To Tutorials - Cloud & Networking

770 Articles

Top reasons why businesses should adopt enterprise collaboration tools

Guest Contributor
05 Mar 2019
8 min read
Following the trends of the modern digital workplace, organizations apply automation even to domains that are intrinsically human-centric, and collaboration is one of them. While organizations have already gained broad experience in digitizing business processes and foreseeing their potential pitfalls, the situation is different with collaboration. Automating collaboration processes can bring a significant number of unexpected challenges, even to companies that have already tested the waters.

The State of Collaboration 2018 report reveals a curious fact: even though organizations can be highly involved in collaborative initiatives, employees still report that both they and their companies are poorly prepared to collaborate. Almost a quarter of respondents (24%) say that they lack relevant enterprise collaboration tools, while 27% say that their organizations undervalue collaboration and don't offer any incentives to support it. Two reasons can explain these stats:

- The collaboration process can hardly be standardized and split into precise workflows. The number of collaboration scenarios is enormous, and it's impossible to fit them all into a single software solution. It's also pretty hard to manage collaboration, assess its effectiveness, or understand bottlenecks.
- Unlike business process automation systems, which play a critical role in an organization and support core production or business activities, enterprise collaboration tools are mostly seen as supplementary solutions, so they are the last to be implemented. Moreover, as organizations often don't spend much effort on adapting collaboration tools to their specifics, the resulting solutions frequently suffer from poor adoption.

At the same time, the IT market offers numerous enterprise collaboration tools: Slack, Trello, Stride, Confluence, Google Suite, Workplace by Facebook, SharePoint, and Office 365, to mention a few, all compete to win enterprises' loyalty. But how do you choose the right enterprise collaboration tools and make them effective? And how do you get employees to actively use the tools once they are implemented? To answer these questions and succeed in their collaboration-focused projects, organizations have to examine both the technology-related and the employee-related challenges they may face.

Challenges rooted in technologies

From a collaboration tool's deployment model to its customization and integration flexibility, companies should consider a whole array of aspects before they decide which solution to implement.

Selecting a technologically suitable solution

Finding a proper solution is a long process that requires companies to make several important decisions:

Cloud or on-premises? By choosing the deployment type, organizations define their future infrastructure to run the solution, the required management effort, data location, and the amount of customization available. Cloud solutions can help enterprises save both technical and human resources. However, companies often mistrust them because of multiple security concerns. On-premises solutions can be attractive from the customization, performance, and security points of view, but they are resource-demanding and expensive due to high licensing costs.

Ready-to-use or custom? Today many vendors offer ready-made enterprise collaboration tools, particularly in the field of enterprise intranets. This option is attractive for organizations because they can save on customizing a solution from scratch. However, with ready-made products, organizations face a bigger risk of being bound to a vendor's rigid policies (subscription or ownership price, support rates, functional capabilities, and so on). If companies choose custom enterprise collaboration software, they have a wider choice of IT service providers to work with and can adjust the solution to their needs.

One tool or several integrated tools? Some organizations prefer using a couple of apps that cover different collaboration needs (for example, document management, video conferencing, and instant messaging). Alternatively, companies can go for a centralized solution, such as SharePoint or Office 365, that supports all collaboration types and lets users create a centralized enterprise collaboration environment.

Exploring integration options

Collaboration isn't an isolated process. It is tightly related to the business or organizational activities that employees carry out. That's why integration capabilities are among the most critical aspects companies should check before investing in their collaboration stack. Connecting an enterprise collaboration tool to ERP, CRM, HRM, or ITSM solutions will not only contribute to business process consistency but will also reduce the risk of collaboration gaps and communication inconsistencies.

Planning ongoing investment

Like any other business solution, an enterprise collaboration tool requires financial investment to implement, customize (even ready-made solutions require tuning), and support. The initial budget will strongly depend on the deployment type, the estimated number of users, and the customizations needed. While planning their yearly collaboration investment, companies should remember that the budget should cover not only the activities necessary to ensure the solution's technical health but also a user adoption program.

Eliminating duplicate functionality

Let's consider the following scenario: a company implements a collaboration tool that includes project management functionality while it also runs a legacy project management system. The same situation can happen with time tracking, document management, knowledge management, and other stand-alone solutions. In this case, it is reasonable to consider switching to the new suite completely and retiring the legacy one. For example, by choosing SharePoint Server or SharePoint Online, organizations can unite various functions within a single solution. To ensure a smooth transition to the new environment, SharePoint developers can migrate all the data from legacy systems, making it part of the new solution.

Choosing a security vector

As mentioned before, the solution's deployment model dictates the security measures that organizations have to take. Sometimes security is the paramount reason that holds enterprises' collaboration initiatives back. Security concerns are particularly characteristic of organizations that hesitate between on-premises and cloud solutions. SharePoint and Office 365 trends 2018 show that security represents the major worry for organizations that consider swapping their on-premises deployments for cloud environments. What's even more surprising is that while software providers such as Microsoft are continually improving their security measures, the degree of concern keeps growing. The report mentioned above reveals that 50% of businesses were concerned about security in 2018, compared to 36% in 2017 and 32% in 2016.

Human-related challenges

Technology challenges are multiple, but they can all be solved fairly quickly, especially if a company partners with a professional IT service provider that backs it up at the technical level. At the same time, companies should be ready to face employee-related barriers that may ruin their collaboration effort.

Changing employees' typical style of collaboration

Don't expect that your employees will welcome the new collaboration solution. It is about to change their typical collaboration style, which may be difficult for many. Some employees won't share their knowledge openly, while others will find it difficult to switch from one-to-one discussions to digitized team meetings. In this context, change management should work at two levels: a technological one and a mental one. Companies should not just explain to employees how to use the new solution effectively, but also show each team how to adapt the collaboration system to the needs of each team member without damaging the usual collaboration flow.

Finding the right tools for collaborators and non-collaborators

Every team consists of different personalities. Some people are open to collaboration; others can be quite hesitant. The task is to ensure that these two very different types of employees, and everyone in between, can work together productively. Teams shouldn't expect instant collaboration consistency or general satisfaction. These are only possible to achieve if the entire team works together to create an optimal collaboration area for each individual.

Launching digital collaboration within large distributed teams

When it comes to organizing collaboration within a small or medium-sized team, collaboration difficulties can be quite simple to avoid, as the collaboration flow is moderate. But when it comes to collaboration in big teams, the risk of failure increases dramatically. Organizing effective communication among remote employees, connecting distributed offices, offering relevant collaboration areas to the entire team and its subteams, and enabling cross-device consistency of collaboration: these are just a few of the steps to undertake for effective teamwork.

Preparing strategies to overcome adoption difficulties

The biggest human-related challenge is the poor adoption of an enterprise collaboration system. It can be hard for employees to get used to the new solution and accept the new communication medium, its UI, and its logic. Adoption issues are critical to address because they may engender more severe consequences than the tech-related ones. Say there is a functional defect in a solution: a company can fix it within a few days. However, if there are adoption issues, all the effort an organization puts into polishing the technology can be blown away because employees don't use the solution at all. Ongoing training and communication between the collaboration manager and particular teams is a must to keep employees satisfied with the solution they use.

Is there more pain than gain?

Having recognized all these challenges, companies might feel that there are too many barriers to overcome to get a decent collaboration solution. So maybe it's reasonable to stay out of the collaboration race altogether? Not really. If you take a look at Internet Trends 2018, you will see that there are multiple improvements that companies gain as they adopt enterprise collaboration tools. Typical advantages include reduced meeting time, quicker onboarding, less time required for support, more effective document management, and a substantial rise in teams' productivity. If your company wants to get all these advantages, be prepared to face the possible collaboration challenges to earn a great reward.

Author Bio

Sandra Lupanova is a SharePoint and Office 365 Evangelist at Itransition, a software development and IT consulting company headquartered in Denver. Sandra focuses on SharePoint and Office 365 capabilities and the challenges that companies face while adopting these platforms, and shares practical tips on how to improve SharePoint and Office 365 deployments through her articles.


New programming video courses for March 2019

Richard Gall
04 Mar 2019
6 min read
It's not always easy to know what to learn next if you're a programmer. Industry shifts can be subtle, but they can sometimes be dramatic, making it incredibly important to stay on top of what's happening both in your field and beyond. No one person can make that decision for you. All the thought leadership and mentorship in the world isn't going to be able to tell you what's right for you when it comes to your career. But this list of videos, released last month, might give you a helping hand as to where to go next with your learning…

New data science and artificial intelligence video courses for March

Apache Spark is carving out a big presence as the go-to software for big data. Two videos from February focus on Spark: Distributed Deep Learning with Apache Spark and Apache Spark in 7 Days. If you're new to Spark and want a crash course on the tool, Apache Spark in 7 Days aims to get you up and running quickly. Distributed Deep Learning with Apache Spark, however, offers a deeper exploration that shows you how to develop end-to-end deep learning pipelines that can leverage the full potential of cutting-edge deep learning techniques.

While we're on the subject of machine learning, other choice video courses for March include TensorFlow 2.0 New Features (we've been eagerly awaiting it, and it finally looks like we can see what it will be like), Hands On Machine Learning with JavaScript (yes, you can now do machine learning in the browser), and a handful of interesting videos on artificial intelligence and finance:

- AI for Finance
- Machine Learning for Algorithmic Trading Bots with Python
- Hands on Python for Finance

Elsewhere, a number of data visualization video courses prove that communicating and presenting data remains an urgent challenge for those in the data space. Tableau remains one of the definitive tools - you can learn the latest version with Tableau 2019.1 for Data Scientists and Data Visualization Recipes with Python and Matplotlib 3.

New app and web development video courses for March 2019

There is a wealth of video courses for web and app developers to choose from this month. True, Hands-on Machine Learning for JavaScript is well worth a look, but moving past the machine learning hype, there are a number of video courses that take a practical look at popular tools and new approaches to app and web development.

Angular's death has been greatly exaggerated - it remains a pillar of the JavaScript world. While the project's versioning has arguably been lacking some clarity, if you want to get up to speed with where the framework is today, try Angular 7: A Practical Guide. It's a video that does exactly what it says on the proverbial tin - it shows off Angular 7 and demonstrates how to start using it in web projects. We've also been seeing some uptake of Angular by ASP.NET developers, as it offers a nice complement to the Microsoft framework on the front-end side. Our latest video on the combination, Hands-on Web Development with ASP.NET Core and Angular, is another practical look at an effective and increasingly popular approach to full-stack development.

Other picks for March include Building Mobile Apps with Ionic 4, a video that brings you right up to date with the recent update that launched in January (interestingly, the project is now backed by web components, not Angular), and a couple of Redux videos: Mastering Redux and Redux Recipes. Redux is still relatively new. Essentially, it's a JavaScript library that helps you manage application state; because it can be used with a range of different frameworks and libraries, including both Angular and React, it's likely to go from strength to strength in 2019.

Infrastructure, admin and security video courses for March 2019

Node.js is becoming an important library for infrastructure and DevOps engineers. As we move to a cloud native world, it's a great tool for developing lightweight and modular services. That's why we're picking Learn Serverless App Development with Node.js and Azure Functions as one of our top videos for this month. Azure has been growing at a rapid rate over the last 12 months, and while it's still some way behind AWS, Microsoft's focus on developer experience is making Azure an increasingly popular platform with developers. For Node developers, this video is a great place to begin - it's also useful for anyone who simply wants to find out what serverless development actually feels like.

Read next: Serverless computing wars: AWS Lambda vs. Azure Functions

A partner to this, for anyone beginning Node, is the new Node.js Design Patterns video. In particular, if Node.js is an important tool in your architecture, following design patterns is a robust method of ensuring reliability and resilience.

Elsewhere, we have Modern DevOps in Practice, cutting through the consultancy-speak to give you useful and applicable guidance on how to use DevOps thinking in your workflows and processes, and DevOps with Azure, another video that again demonstrates just how impressive Azure is. For those not Azure-inclined, there's AWS Certified Developer Associate - A Practical Guide, a video that takes you through everything you need to know to pass the AWS Developer Associate exam. There's also a completely cloud-agnostic video course in the form of Creating a Continuous Deployment Pipeline for Cloud Platforms that's essential for infrastructure and operations engineers getting to grips with cloud native development.

Learn a new programming language with these new video courses for March

Finally, there are a number of new video courses that can help you get to grips with a new programming language. Perfect if you've been putting off your new year's resolution to learn a new language… Java 11 in 7 Days is a new video that brings you bang up to date with everything in the latest version of Java, while Hands-on Functional Programming with Java will help you rethink and reevaluate the way you use Java. Together, the two videos are a great way for Java developers to kick-start their learning and update their skill set.


Announcing Linux 5.0!

Melisha Dsouza
04 Mar 2019
2 min read
Yesterday, Linus Torvalds announced the stable release of Linux 5.0. This release comes with AMDGPU FreeSync support, Raspberry Pi touch screen support, and much more. According to Torvalds, "I'd like to point out (yet again) that we don't do feature-based releases, and that '5.0' doesn't mean anything more than that the 4.x numbers started getting big enough that I ran out of fingers and toes."

Features of Linux 5.0

- AMDGPU FreeSync support, which will improve the display of fast-moving images and will prove advantageous especially for gamers. According to CRN, this will also make Linux a better platform for dense data visualizations and support "a dynamic refresh rate, aimed at providing a low monitor latency and a smooth, virtually stutter-free viewing experience."
- Support for the Raspberry Pi's official touch screen. All information is copied into a memory-mapped area by the RPi's firmware, instead of using a conventional bus.
- An energy-aware scheduling feature that lets the task scheduler make scheduling decisions resulting in lower power usage on asymmetric SMP platforms. This feature targets Arm's big.LITTLE CPUs and helps achieve better power management in phones.
- Adiantum file system encryption for low-power devices.
- Btrfs can now support swap files, but the swap file must be fully allocated as "nocow" with no compression, on one device.
- Support for binderfs, a binder filesystem that will help run multiple instances of Android and is backward compatible.
- An improvement that reduces fragmentation by over 90%, resulting in better transparent hugepage (THP) usage.
- Support for the Speculation Barrier (SB) instruction, introduced as part of the fallout from Spectre and Meltdown.

The merge window for 5.1 is now open. Read Linux's official documentation for the detailed list of upgraded features in Linux 5.0.

Read next:

- Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
- Intel releases patches to add Linux Kernel support for upcoming dedicated GPU releases
- Undetected Linux Backdoor 'SpeakUp' infects Linux, MacOS with cryptominers


6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers and development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them.

Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you if you should be using containers.

Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and doesn't really understand, containers could probably help you a lot.

Why do containers help simplify your codebase? Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it's something that evolves out of years of solving intractable problems with knock-on effects and consequences that only need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says more about a general inability to treat testing with the respect and time it deserves than anything else.

How do containers make testing easier? There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don't need a fully-fledged testing environment for every application you run tests on. What this all boils down to is that testing becomes much quicker and easier. In theory, then, this means the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow.

Read next: How to build 12 factor microservices on Docker
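To make the "test against a container image" point a little more concrete, here is a minimal sketch of an integration test that spins up a disposable Redis container using the Docker SDK for Python. The image tag, host port, and the tiny assertion are illustrative choices only; the sketch assumes the docker and redis Python packages are installed and a Docker daemon is running.

```python
# A minimal sketch of container-backed integration testing (illustrative only).
# Assumes the `docker` and `redis` Python packages are installed and a Docker
# daemon is available; the image tag and port below are arbitrary choices.
import time

import docker
import redis


def test_round_trip_against_redis_container():
    client = docker.from_env()
    # Start a throwaway Redis container instead of maintaining a shared test environment.
    container = client.containers.run(
        "redis:6-alpine",
        detach=True,
        ports={"6379/tcp": 6390},  # map container port 6379 to host port 6390
    )
    try:
        conn = redis.Redis(host="localhost", port=6390)
        for _ in range(20):  # wait briefly until Redis accepts connections
            try:
                conn.ping()
                break
            except redis.ConnectionError:
                time.sleep(0.5)
        conn.set("greeting", "hello")  # exercise the code under test here
        assert conn.get("greeting") == b"hello"
    finally:
        # The container is disposable: tear it down so every run starts clean.
        container.stop()
        container.remove()
```

The same pattern works for databases, message brokers, or your own application image: each test run gets a clean, production-like dependency without a long-lived test environment.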
Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks. If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier? Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to achieve. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict.

That all being said, it is worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper.

One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price; it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in? Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then you either use them or you don't. With containers running on, say, the cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers is much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for it. This takes us back to the issue of the complex codebase. Think of it this way: if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems, both personally and technically.

How do containers solve DevOps challenges? Containers can help solve the problems that DevOps aims to tackle by breaking software up into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices much more effectively than DevOps proponents managed to do in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they will argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates easily. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps - containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features? To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you wouldn't need to go into your application and make wholesale changes that may have unintended consequences - instead, you can simply make the change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.


The 10 best cloud and infrastructure conferences happening in 2019

Sugandha Lahoti
23 Jan 2019
11 min read
The latest Gartner report suggests that the cloud market is going to grow an astonishing 17.3%, to $206 billion, in 2019, up from $175.8 billion in 2018. By 2022, the report claims, 90% of organizations will be using cloud services. But the cloud isn't one thing, and 2019 is likely to bring the diversity of solutions, from hybrid to multi-cloud to serverless, to the fore. With such a mix of opportunities and emerging trends, it's going to be essential to keep a close eye on key cloud computing and software infrastructure conferences throughout the year. These are the events where we'll hear the most important announcements, and they'll probably also be the place where the most important conversations happen too. But with so many cloud computing conferences dotted throughout the year, it's hard to know where to focus your attention. For that very reason, we've put together a list of some of the best cloud computing conferences taking place in 2019.

#1 Google Cloud Next

When and where is Google Cloud Next 2019 happening? April 9-11 at the Moscone Center in San Francisco.

What is it? This is Google's annual global conference focusing on the company's cloud services and products, namely Google Cloud Platform. At previous events, Google has announced enterprise products such as G Suite and developer tools. The three-day conference features demonstrations, keynotes, announcements, conversations, and boot camps.

What's happening at Google Cloud Next 2019? This year Google Cloud Next has more than 450 sessions scheduled. You can also meet directly with Google experts in artificial intelligence and machine learning, security, and software infrastructure. Themes covered this year include application development, architecture, collaboration and productivity, compute, cost management, DevOps and SRE, hybrid cloud, and serverless. The conference may also serve as a debut platform for new Google Cloud CEO Thomas Kurian.

Who's it for? The event is a not-to-miss event for IT professionals and engineers, but it will also likely attract entrepreneurs. For those of us who won't attend, Google Cloud Next will certainly be one of the most important conferences to follow. Early bird registration begins March 1 at $999.

#2 OpenStack Infrastructure Summit

When and where is the OpenStack Infrastructure Summit 2019 happening? April 29 - May 1 in Denver.

What is it? The OpenStack Infrastructure Summit, previously the OpenStack Summit, is focused on open infrastructure integration and has evolved over the years to cover more than 30 different open source projects. The event is structured around use cases, training, and related open source projects. The summit also runs the Project Teams Gathering just after the main conference (this year May 2-4). The PTG provides meeting facilities, allowing the various technical teams contributing to OSF (OpenStack Foundation) projects to meet in person, exchange ideas, and get work done in a productive setting.

What's happening at this year's OpenStack Infrastructure Summit? This year the summit is expected to have almost 300 sessions and workshops on container infrastructure, CI/CD, telecom and NFV, public cloud, private and hybrid cloud, security, and more. The Summit will bring together members of open source communities such as Airship, Ansible, Ceph, Docker, Kata Containers, Kubernetes, ONAP, OpenStack, Open vSwitch, OPNFV, StarlingX, and Zuul, among others.

Who's it for? This is an event for engineers working in operations and administration. If you're interested in OpenStack and how the foundation fits into the modern cloud landscape, there will certainly be something here for you.

#3 DockerCon

When and where is DockerCon 2019 happening? April 29 to May 2 at Moscone West, San Francisco.

What is it? DockerCon is perhaps the container event of the year. The focus is on what's happening across the Docker world, but it will offer plenty of opportunities to explore the ways Docker is interacting and evolving with a wider ecosystem of tools.

What's happening at DockerCon 2019? This three-day conference will feature networking opportunities and hands-on labs. It will also hold an exposition where innovators will showcase their latest products. It's expected to have over 6,000 attendees with 5+ tracks and 100 sessions. You'll also have the opportunity to become a Docker Certified Associate with an on-venue test.

Who's it for? The event is essential for anyone working in and around containers - so DevOps, SRE, administration, and infrastructure engineers. Of course, with Docker finding its way into the toolsets of a variety of roles, it may be useful for people who want to understand how Docker might change the way they work in the future. Pricing for DockerCon runs from around $1,080 for early-bird reservations to $1,350 for standard tickets.

#4 Red Hat Summit

When and where is Red Hat Summit 2019 happening? May 7-9 in Boston.

What is it? Red Hat Summit is an open source technology event run by Red Hat. It covers a wide range of topics and issues, essentially providing a snapshot of where the open source world is at the moment and where it might be going. With open source shaping cloud and other related trends, it's easy to see why the event could be important for anyone with an interest in cloud and infrastructure.

What's happening at Red Hat Summit 2019? The theme for this year is AND. The copy on the event's website reads: "AND is about scaling your technology and culture in whatever size or direction you need, when you need to, with what you actually need - not a bunch of bulky add-ons. From the right foundation - an open foundation - AND adapts with you. It's interoperable, adjustable, elastic. Think Linux AND Containers. Think public AND private cloud. Think Red Hat AND you." There's clearly an interesting conceptual proposition at the center of this year's event that hints at how Red Hat wants to get engineers and technology buyers to think about the tools they use and how they use them.

Who's it for? The event is big for any admin or engineer who works with open source technology - Linux in particular (so, quite a lot of people…). Given Red Hat was bought by IBM just a few months ago in 2018, this event will certainly be worth watching for anyone interested in the evolution of both companies as well as open source software more broadly.

#5 KubeCon + CloudNativeCon Europe

When and where is KubeCon + CloudNativeCon Europe 2019? May 20 to 23 at Fira Barcelona.

What is it? KubeCon + CloudNativeCon is the CNCF's (Cloud Native Computing Foundation) flagship conference for open source and cloud-native communities. It features contributors from cloud-native applications and computing, containers, microservices, central orchestration processing, and related projects to further cloud-native education of technologies that support the cloud-native ecosystem.

What's happening at this year's KubeCon? The conference will feature a range of events and sessions from industry experts, project leaders, as well as sponsors. The details of the conference are still being finalized, but the focus will be on projects such as Kubernetes (obviously), Prometheus, Linkerd, and CoreDNS.

Who's it for? The conference is relevant to anyone with an interest in software infrastructure. It's likely to be instructive and insightful for those working in SRE, DevOps, and administration, but because of Kubernetes' importance in cloud native practices, there will be something here for many others in the technology industry. The cost is unconfirmed, but it can be anywhere between $150 and $1,100.

#6 IEEE International Conference on Cloud Computing

When and where is the IEEE International Conference on Cloud Computing? July 8-13 in Milan.

What is it? This is an IEEE conference solely dedicated to cloud computing. IEEE Cloud is basically for research practitioners to exchange their findings on the latest cloud computing advances. It includes findings across all "as a service" categories, including network, infrastructure, platform, software, and function.

What's happening at the IEEE International Conference on Cloud Computing? IEEE Cloud 2019 invites original research papers addressing all aspects of cloud computing technology, systems, applications, and business innovations. These are mostly based on technical topics including cloud as a service, cloud applications, cloud infrastructure, cloud computing architectures, cloud management, and operations. Shangguang Wang and Stephan Reiff-Marganiec have been appointed as congress workshops chairs. Featured keynote speakers for the 2019 World Congress on Services include Kathryn Guarini, VP at IBM Industry Research, and Joseph Sifakis, the Emeritus Senior CNRS Researcher at Verimag.

Who's it for? The conference has a more academic bent than the others on this list. That means it's particularly important for researchers in the field, but there will undoubtedly be lots here for industry practitioners who want to find new perspectives on the relationship between cloud computing and business.

#7 VMworld

When and where is VMworld 2019? August 25-29 in San Francisco.

What is it? VMworld is a virtualization and cloud computing conference hosted by VMware. It is the largest virtualization-specific event. VMware CEO Pat Gelsinger and the executive team typically provide updates on the company's various business strategies, including multi-cloud management, VMware Cloud for AWS, end-user productivity, security, mobile, and other efforts.

What's happening at VMworld 2019? The five-day conference starts with general sessions on IT and business. It then goes deeper into breakout sessions, expert panels, and quick talks. It also holds various VMware Hands-on Labs and VMware certification opportunities, as well as one-on-one appointments with in-house experts. More than 21,000 attendees are expected.

Who's it for? VMworld maybe doesn't have the glitz and glamor of an event like DockerCon or KubeCon, but it matters for administrators and technology decision makers who have an interest in VMware's products and services.

#8 Microsoft Ignite

When and where is Microsoft Ignite 2019? November 4-8 in Orlando, Florida.

What is it? Ignite is Microsoft's flagship enterprise event for everything cloud, data, business intelligence, teamwork, and productivity.

What's happening at Microsoft Ignite 2019? Microsoft Ignite 2019 is expected to feature almost 700+ deep-dive sessions and 100+ expert-led and self-paced workshops. The full agenda will be posted sometime in spring 2019. You can pre-register for Ignite 2019 here. Microsoft will also be touring many cities around the world to bring the Ignite experience to more people.

Who's it for? The event should have wide appeal, and will likely reflect Microsoft's efforts to bring a range of tech professionals into its ecosystem. Whether you're a developer, infrastructure engineer, or operations manager, Ignite is, at the very least, an event you should pay attention to.

#9 Dreamforce

When and where is Dreamforce 2019? November 19-22 in San Francisco.

What is it? Dreamforce, hosted by Salesforce, is a truly huge conference, attended by more than 100,000 people. Focusing on Salesforce and CRM, the event is an opportunity to learn from experts, share experiences and ideas, and stay up to speed with trends in the field, like automation and artificial intelligence.

What's happening at Dreamforce 2019? Dreamforce covers over 25 keynotes, a vast range of breakout sessions (almost 2,700), and plenty of opportunities for networking. The conference is so extensive that it has its own app to help delegates manage their agenda and navigate venues.

Who's it for? Dreamforce is primarily about Salesforce - for that reason, it's very much an event for customers and users. But given the size of the event, it also offers a great deal of insight into how businesses are using SaaS products and what they expect from them. This means there is plenty for those working in more technical or product roles to learn at the event.

#10 Amazon re:Invent

When and where is Amazon re:Invent 2019? December 2-6 at The Venetian, Las Vegas, USA.

What is it? Amazon re:Invent is hosted by AWS. If you've been living on Mars in recent years, AWS is the market leader when it comes to cloud. The event, then, is AWS' opportunity to set the agenda for the cloud landscape, announcing updates and new features, as well as an opportunity to discuss the future of the platform.

What's happening at Amazon re:Invent 2019? Around 40,000 people typically attend Amazon's top cloud event. Amazon Web Services and its cloud-focused partners typically reveal product releases on several fronts. Some of these include enterprise security, the Transit Virtual Private Cloud service, and general releases. This year, Amazon is also launching a related conference dedicated exclusively to cloud security called re:Inforce. The inaugural event will take place in Boston on June 25th and 26th, 2019, at the Boston Convention and Exhibition Center.

Who's it for? The conference attracts Amazon's top customers, software distribution partners (ISVs), and public cloud MSPs. The event is essential for developers and engineers, administrators, architects, and decision makers. Given the importance of AWS in the broader technology ecosystem, this is an event that will be well worth tracking, wherever you are in the world.

Did we miss an important cloud computing conference? Are you attending any of these this year? Let us know in the comments - we'd love to hear from you. Also, check this space for more detailed coverage of the conferences.

Read next:

- Cloud computing trends in 2019
- Key trends in software development in 2019: cloud native and the shrinking stack
- Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity


CES 2019 is bullshit we don't need after 2018's techlash

Richard Gall
08 Jan 2019
6 min read
The asinine charade that is CES is running in Las Vegas this week. Describing itself as 'the global stage of innovation', CES attempts to set the agenda for a new year in tech. While ostensibly it's an opportunity to see how technology might impact the lives of all of us over the next decade (or more), it is, in truth, a vapid carnival that does nothing but make the technology industry look stupid. Okay, perhaps I'm being a fun sponge: what's wrong with smart doorbells, internet-connected planks of wood, and other madcap ideas? Well, nothing really - but those inventions are only the tip of the iceberg. Disagree? Don't worry: you can find the biggest announcements from day one of CES 2019 here.

What CES gets wrong

Where CES really gets it wrong - and where it drives down a dead end of vacuity - is how it showcases the mind-numbing rush to productize and then commercialize some of the really serious developments that could transform the world, in a way that is ultimately far less trivial than the glitz and glamor of its media presentation would suggest. This isn't to say that there won't be important news and interesting discussions to come out of CES. But even the more interesting topics can be diluted, becoming buzzwords for marketers to latch onto. As Wired remarks on Twitter, "the term AI-powered is used loosely and is almost always a marketing ploy, whether or not a product is impacted by AI." In the same thread, the publication's account also notes that 5G, another big theme for the event, won't be widely available for at least another 12 months. https://twitter.com/WIRED/status/1082294957979910144

Ultimately, what this tells us is that the focus of CES isn't really technology - not in the sense of how we build it and how we should use it. Instead, it is an event dedicated to the ways we can sell it. Perhaps in previous years, the gleeful excitement of CES was nothing but a bit of light relief as we recovered from the holiday period. But this year it's different. 2018 was a year of reckoning in tech, as a range of scandals emerged that underlined the ways in which exciting technological innovation can be misused and deployed against the very people we assume it should be helping. From the Cambridge Analytica scandal to the controversy surrounding Amazon's Rekognition, Google's Project Dragonfly, and Microsoft's relationship with ICE, 2018 was a year that made it clearer than ever that buried somewhere beneath novel and amusing inventions and better-quality television screens is a set of interests that have little interest in making life better for people.

The corporate glamor of CES 2019 is just kitsch

It's not news that there are certain organisations and institutions that don't have the interests of the majority at heart. But CES 2019 does take on a new complexion in the shadow of all that happened in 2018. The question 'what's the point of all this?' takes on a more serious edge. When you add in the dissent that has come from a growing part of the Silicon Valley workforce, CES 2019 starts to look like an event that, much like many industry leaders, wants to bury the messy and complex reality of building software in favor of marketing buzz. In The Unbearable Lightness of Being, the author Milan Kundera describes kitsch as "the absolute denial of shit." It's following this definition that you can see CES as a kitsch event: it pushes the decisions and inevitable trade-offs that go into developing new technologies and products into the shadows. It doesn't take negative consequences seriously. It's all just 'shit' that should be ignored.

This all adds up to a message that seems to be: better doesn't even need to be built. It's here already, no risks, no challenges. Developers don't really feature at CES. That's not necessarily a problem - after all, it's not an event for them, and what developer wants to spend time hearing marketers talk about AI? But if 2018 has taught us anything, it's that a culture of commercialization that refuses to consider consequences other than what can be done in the service of business growth can be immensely damaging. It hurts people, and it might even be hurting democracy. Okay, the way to correct things probably isn't to simply invite more engineers to CES. But by the same token, CES is hardly helping things either.

Everything important is happening outside the event

Everything important seems to be happening at the periphery of this year's CES, in some instances quite literally outside the building. Apple's ad, for example, might have been a clever piece of branding, but it has captured the attention of the world. Arguably, it's more memorable than much of what's happening inside the event. And although it's possible to be cynical, it does nevertheless raise important questions about a number of companies' attitudes to user data. https://twitter.com/NateIngraham/status/1081612316532064257

Another big talking point as this year's event began is who isn't present. Due to the government shutdown, a number of officials who were due to attend and speak have had to cancel. This acts as a reminder of the wider context in which CES 2019 is taking place, in which a nativist government looks set on controlling who and how people move across borders. It also highlights how euphemistic the phrase 'consumer technology' really is. TVs and cloud-connected toilets might take the headlines, but it's government surveillance that will likely have the biggest impact on our lives in the future. Not that any of this seemed to matter to Gary Shapiro, the Chief Executive of the Consumer Technology Association (the organization that puts on CES). Speaking to the BBC, Shapiro said: "It's embarrassing to be on the world stage with a dominant event in the world of technology, and our federal government... can't be there to host their colleague government executives from around the world." Shapiro's frustration is understandable from an organizer's perspective. But it also betrays the apparent ethos of CES: what's happening outside doesn't matter.

We all deserve better than CES 2019

The new products on show at CES 2019 won't make everything better. There's a chance they will make everything worse. Arguably, the more blindly optimistic we are that they'll make things better, the more likely they are to make things worse. It's only by thinking through complex questions, and taking time to consider the possible consequences of our decision making as developers, product managers, or business people, that we can actually be sure that things will get better. This doesn't mean we need to stop getting excited about new inventions and innovations. But things like smart cities and driverless cars pose a whole range of issues that shouldn't be buried in the optimistic schmaltz of events like CES. They need care and attention from policy makers, designers, software engineers, and many others to ensure they are actually going to help build a better world for people.

Cloud computing trends in 2019

Guest Contributor
07 Jan 2019
8 min read
Cloud computing is a rapidly growing technology that many organizations are adopting to enable their digital transformation. As per the latest Gartner report, the cloud tech services market is projected to grow 17.3%, to $206 billion, in 2019, up from $175.8 billion in 2018, and by 2022, 90% of organizations will be using cloud services.

In today's world, cloud technology is a trending buzzword in business environments. It provides exciting new opportunities for businesses to compete on a global scale and is redefining the way we do business. It enables users to store and share data, such as applications and files, in remote locations. These benefits have been recognized by business owners of all sizes, from startups to well-established organizations, and they have already started using cloud computing.

How Cloud technology helps businesses

Reduced Cost

One of the most obvious advantages small businesses can get by shifting to the cloud is saving money. The cloud can provide small businesses with services at affordable and scalable prices. Virtualization expands the value of physical equipment, which means companies can achieve more with less. Therefore, an organization can see a significant decline in power consumption, rack space, IT requirements, and more. As a result, there are lower maintenance, installation, hardware, support, and upgrade costs. For small businesses in particular, those savings are essential.

Enhanced Flexibility

The cloud lets you access data and related files from any location and from any device at any time with an internet connection. As working practices shift towards flexible and remote working, it is essential to provide work-related data access to employees, even when they are not at the workplace. Cloud computing not only helps employees work outside the office premises but also allows employers to manage their business as and when required. Enhanced flexibility and mobility in cloud technology can also lead to additional cost savings. For example, an employer can choose to implement BYOD (bring your own device), so employees can bring and work on the devices they are comfortable with.

Secured Data

Improved data security is another asset of cloud computing. With traditional data storage systems, data can easily be stolen or damaged. There are also greater chances of serious cyber attacks like viruses, malware, and hacking. Human errors and power outages can also affect data security. However, if you use cloud computing, you will get the advantage of improved data security. In the cloud, data is protected in various ways, such as anti-virus and encryption methods, among others. Additionally, to reduce the chance of data loss, cloud services help you remain in compliance with HIPAA, PCI, and other regulations.

Effective Collaboration

Effective collaboration is possible through the cloud, which helps small businesses track and oversee workflow and progress for effective results. There are many cloud collaboration tools available on the market, such as Google Drive, Salesforce, Basecamp, and Hive. These tools allow users to create, edit, save, and share documents for workplace collaboration. A user can also restrict access to these materials.

Greater Integration

Cloud-based business solutions create simplified integration opportunities with numerous cloud-based providers. Businesses can also benefit from specialized services that integrate with back-office operations such as HR, accounting, and marketing.
This type of integration lets business owners concentrate on the core areas of the business.

Scalability

One of the great aspects of cloud-based services is their scalability. Currently, a small business may require limited storage, mobility, and more, but in future its needs and requirements will increase significantly in parallel with the growth of the business. Considering that growth does not always occur linearly, cloud-based solutions can accommodate all sudden and increased requirements of the organization. Cloud-based services have the flexibility to scale up or to scale down. This feature ensures that all your requirements are served according to your budget plans.

Cloud Computing Trends in 2019

Hybrid & Multi-Cloud Solutions

Hybrid cloud will become the dominant business model in the future. For organizations, the public cloud cannot be a good fit for all types of solutions, and shifting everything to the cloud can be a difficult task, as they have certain requirements. The hybrid cloud model offers a transition solution that blends the current on-premises infrastructure with public cloud and private cloud services. Thus, organizations will be able to shift to cloud technology at their own pace while being effective and flexible.

Multi-cloud is the next step in the cloud evolution. It enables users to control and run an application, workload, or data on any cloud (private, public, or hybrid) based on their technical requirements. Thus, a company can have multiple public and private clouds, or multiple hybrid clouds, either connected together or not. We can expect multi-cloud strategies to dominate in the coming days.

Backup and Disaster Recovery

According to a Spiceworks report, 15% of the cloud budget is allocated to Backup and Disaster Recovery (DR) solutions, which is the highest budget allocation, followed by email hosting and productivity tools. This huge percentage reflects the shared responsibility model that public cloud providers operate on. Public cloud providers, such as AWS (Amazon Web Services), Microsoft Azure, and Google Cloud, are responsible for the availability of backup and DR solutions and the security of the infrastructure, while users are in charge of their data protection and compliance.

Serverless Computing

Serverless computing is gaining more popularity and will continue to do so in 2019. With serverless, cloud users request a container PaaS (Platform as a Service), and the cloud supplier charges for the PaaS as required. The customer does not need to buy or rent services in advance and doesn't need to configure them. The cloud provider is responsible for providing the platform, its configuration, and a wide range of helpful tools for designing applications and working with data. A minimal sketch of what this looks like from the developer's side appears below.
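To make the serverless idea concrete, here is a minimal sketch of a serverless function written as an AWS Lambda-style handler in Python. The event fields and response shape are illustrative assumptions (they match what an HTTP trigger such as API Gateway typically passes); the point is that the developer writes and deploys only this handler, while the provider supplies, runs, and bills for everything underneath it.

```python
# A minimal serverless function sketch (AWS Lambda-style handler in Python).
# The event fields and response shape are illustrative assumptions; no server
# is provisioned or configured by the developer - the platform invokes the
# handler on demand and charges per invocation.
import json


def lambda_handler(event, context):
    # The platform passes the triggering event (for example, an HTTP request
    # payload) and a context object describing the invocation environment.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The same idea applies to Azure Functions or Google Cloud Functions; only the handler signature and the trigger configuration differ between providers.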
Artificial Intelligence Platforms

Using AI to process big data is one of the more important advances in collecting business intelligence and gaining a better understanding of how a business functions. An AI platform supports a faster, more effective, and more efficient way for data scientists and other team members to work together. It can also help reduce costs in a variety of ways, such as automating simple tasks, preventing duplication of effort, and taking over expensive manual work such as copying or extracting data.

Edge Computing

Edge computing is a systematic approach to processing data at the edge of the network in order to streamline cloud computing. It is a result of the ever increasing use of IoT devices. Edge is essential for real-time services: it streamlines the flow of traffic from IoT devices and provides real-time data analytics and analysis. It, too, is on the rise in 2019.

Service Mesh

A service mesh is a dedicated infrastructure layer that improves service-to-service communication across microservices applications. It is a new and emerging class of service management for the complexity of inter-microservice communication, providing observability and tracing in a seamless way. As containers become more prevalent for cloud-based application development, the need for a service mesh grows significantly. Service meshes help oversee traffic through service discovery, load balancing, routing, and observability, and they aim to reduce the operational complexity of containers while improving network functionality.

Cloud Security

With the rise of these technologies, security is another serious consideration. With the introduction of the GDPR (General Data Protection Regulation), security concerns have risen sharply and are essential to address. Many businesses are shifting to cloud computing without seriously considering its security and compliance requirements. GDPR will therefore be an important topic in 2019, and organizations must ensure that their data practices are both safe and compliant.

Conclusion

As discussed above, cloud technology provides better data storage, data security, and collaboration, and it changes workflows to help small business owners make better decisions. Ultimately, cloud connectivity is about convenience and streamlined workflows that help any business become more flexible, efficient, productive, and successful. If you want to set your business up for success, this might be the time to transition to cloud-based services.

Author Bio

Amarendra Babu L loves pursuing excellence through writing and has a passion for technology. He is presently working as a content contributor for Mindmajix.com and Tekslate.com. He is a tech geek and loves to explore new opportunities. His work has been published on various sites related to Big Data, Business Analytics & Intelligence, Blockchain, Cloud Computing, Data Science, AI & ML, Project Management, and more. You can reach him at amarendrabl18@gmail.com. He is also available on LinkedIn.

8 programming languages to learn in 2019
18 people in tech every programmer and software engineer need to follow in 2019
We discuss the key trends for web and app developers in 2019 [Podcast]

Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity

Richard Gall
17 Dec 2018
10 min read
Software infrastructure has, over the last decade or so, become a key concern for developers of all stripes. Long gone are narrowly defined job roles; thanks to DevOps, accountability for how code runs is now shared between teams on both the development and deployment sides. For anyone who has ever been involved in the messy frustration of internal code wars, this has been a welcome change. But as developers who have traditionally sat higher up the software stack dive deeper into the mechanics of deploying and maintaining software, for those of us working in system administration, DevOps, SRE, and security (the list is endless, apologies if I've forgotten you), the rise of distributed systems only brings further challenges. Increased complexity not only opens up new points of failure and potential vulnerability; at a really basic level it makes understanding what's actually going on difficult.

And, essentially, this is what it will mean to work in software delivery and maintenance in 2019. Understanding what's happening, minimizing downtime, taking steps to mitigate security threats - it's a cliche, but finding strategies to become more responsive rather than reactive will be vital. Indeed, many responses to these kinds of questions have emerged this year. Chaos engineering and observability, for example, have both been gaining traction within the SRE world, and are slowly beginning to make an impact beyond that particular job role. But let's take a deeper look at what is really going to matter in the world of software infrastructure and architecture in 2019.

Observability and the rise of the service mesh

Before we decide what to actually do, it's essential to know what's actually going on. That seems obvious, but with increasing architectural complexity, it's getting harder. Observability is a term that's being widely thrown around as a response to this - but it has been met with some cynicism. For some developers, observability is just a sexed up way of talking about good old fashioned monitoring. But although the two concepts have a lot in common, observability is more of an approach, a design pattern maybe, rather than a specific activity. This post from The New Stack explains the difference between monitoring and observability incredibly well. Observability is "a measure of how well internal states of a system can be inferred from knowledge of its external outputs," which means observability is a property of a system rather than an activity.

There are a range of tools available to help you move towards better observability. Application management and logging tools like Splunk, Datadog, New Relic and Honeycomb can all be put to good use and are a good first step towards developing a more observable system.
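As a minimal, illustrative sketch of that idea - emitting structured external outputs from which internal state can be inferred - the snippet below logs one JSON event per request. The field names and the checkout example are assumptions, and any of the tools above could ingest output like this:

import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

def handle_request(order_id):
    start = time.monotonic()
    status = "ok"
    try:
        time.sleep(0.02)  # stand-in for the real work
    except Exception:
        status = "error"
        raise
    finally:
        # One structured event per request: an external output from which
        # latency and failure rates can be inferred by whatever backend
        # (Splunk, Datadog, Honeycomb, ...) collects the logs.
        log.info(json.dumps({
            "event": "request_handled",
            "order_id": order_id,
            "status": status,
            "duration_ms": round((time.monotonic() - start) * 1000, 1),
        }))

handle_request("A-1001")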
Want to learn how to put monitoring tools to work? Check out some of these titles:

AWS Application Architecture and Management [Video]
Hands on Microservices Monitoring and Testing
Software Architecture with Spring 5.0

As well as those tools, if you're working with containers, Kubernetes has some really useful features that can help you monitor your container deployments more effectively. In May, Google announced StackDriver Kubernetes Monitoring, which has seen much popularity across the community. Master monitoring with Kubernetes. Explore these titles:

Google Cloud Platform Administration
Mastering Kubernetes
Kubernetes in 7 Days [Video]

But there's something else emerging alongside observability which only appears to confirm its importance: the notion of a service mesh. The service mesh is essentially a tool that allows you to monitor the various facets of your software infrastructure, helping you to manage everything from performance to security to reliability. There are a number of different options out there when it comes to service meshes, with Istio, Linkerd, Conduit and Tetrate among the most talked-about at the moment. Learn more about service meshes inside these titles:

Microservices Development Cookbook
The Ultimate Openshift Bootcamp [Video]
Cloud Native Application Development with Java EE [Video]

Why is observability important?

Observability is important because it sets the foundations for many aspects of software management and design in various domains. Whether you're an SRE or a security engineer, having visibility into the way your software is working will be essential in 2019.

Chaos engineering

Observability lays the groundwork for many interesting new developments, chaos engineering being one of them. Based on the principle that modern, distributed software is inherently unreliable, chaos engineering 'stress tests' software systems. Using chaos experiments - adding something unexpected into your system, or pulling a piece of it out like a game of Jenga - chaos engineering helps you to better understand how it will act in various situations. In turn, this allows you to make the changes that help ensure resiliency.

Chaos engineering is particularly important today simply because so many people, indeed, so many things, depend on software to actually work. From an eCommerce site to a self-driving car, if something isn't working properly there could be terrible consequences. It's not hard to see how chaos engineering fits alongside something like observability. To a certain extent, it's really another way of achieving observability: by running chaos experiments, you can draw out issues that may not be visible in usual scenarios.

The caveat is that chaos engineering isn't an easy thing to do. It requires a lot of confidence and engineering intelligence. Running experiments shouldn't be done carelessly - in many ways, the word 'chaos' is a bit of a misnomer. All testing and experimentation on your software should follow a rigorous and almost scientific structure. While chaos engineering isn't straightforward, there are tools and platforms available to make it more manageable. Gremlin is perhaps the best example, offering what they describe as 'resiliency-as-a-service'. But if you're not ready to go in for a fully fledged platform, it's worth looking at open source tools like Chaos Monkey and ChaosToolkit. A toy example of the experiment structure follows below.

Want to learn how to put the principles of chaos engineering into practice? Check out this title:

Microservice Patterns and Best Practices

Learn the principles behind resiliency with these SRE titles:

Real-World SRE
Practical Site Reliability Engineering
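As a toy illustration of that structure - define a steady state, inject a fault, verify the hypothesis - here is a self-contained sketch. The simulated service, the 10% failure rate, and the 95% threshold are all assumptions for the example, not anything taken from Gremlin or ChaosToolkit:

import random
import time

def flaky_dependency(failure_rate):
    # Simulated downstream call; failure_rate is the fault we inject.
    if random.random() < failure_rate:
        raise RuntimeError("injected failure")
    time.sleep(0.001)

def success_ratio(failure_rate, requests=200):
    successes = 0
    for _ in range(requests):
        for _attempt in range(2):  # one retry is our resilience mechanism
            try:
                flaky_dependency(failure_rate)
                successes += 1
                break
            except RuntimeError:
                continue
    return successes / requests

random.seed(42)
steady_state = success_ratio(failure_rate=0.0)
under_chaos = success_ratio(failure_rate=0.10)
print(f"steady state: {steady_state:.0%}, with injected faults: {under_chaos:.0%}")
# Hypothesis: even with a 10% fault rate, the retry keeps success above 95%.
assert under_chaos >= 0.95, "hypothesis violated - improve resiliency before production finds this for you"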
And this shouldn’t be surprising: testing is to be expected in a world where people are accountable for unpredictable systems. But what’s particularly important is how testing is integrated. Whether it’s for security or simply performance, we’re gradually moving towards a world where testing is part of the build and deploy process, not completely isolated from it. There are a diverse range of tools that all hint at this move. Archery, for example, is a tool designed for both developers and security testers to better identify and assess security vulnerabilities at various stages of the development lifecycle. With a useful dashboard, it neatly ties into the wider trend of observability. ArchUnit (sounds similar but completely unrelated) is a Java testing library that allows you to test a variety of different architectural components. Similarly on the testing front, headless browsers continue to dominate. We’ve seen some of the major browsers bringing out headless browsers, which will no doubt delight many developers. Headless browsers allow developers to run front end tests on their code as if it were live and running in the browser. If this sounds a lot like PhantomJS, that’s because it is actually quite a bit like PhantomJS. However, headless browsers do make the testing process much faster. Smarter software purchasing and the move to hybrid cloud The key trends we’ve seen in software architecture are about better understanding your software. But this level of insight and understanding doesn’t matter if there’s no alignment between key decision makers and purchasers. Whatever cloud architecture you have, strong leadership and stakeholder management are essential. This can manifest itself in various ways. Essentially, it’s a symptom of decision makers being disconnected from engineers buried deep in their software. This is by no means a new problem, cloud coming to define just about every aspect of software, it’s now much easier for confusion to take hold. The best thing about cloud is also the worst thing - the huge scope of opportunities it opens up. It makes decision making a minefield - which provider should we use? What parts of it do we need? What’s going to be most cost effective? Of course, with hybrid cloud, there's a clear way of meeting those issues. But it's by no means a silver bullet. Whatever cloud architecture you have, strong leadership and stakeholder management are essential. This is something that ThoughtWorks references in its most recent edition of Radar (November 2018). Identifying two trends they call ‘bounded buy’ and ‘risk commensurate vendor strategy’ ThoughtWorks highlights how organizations can find their SaaS of choice shaping their strategy in its own image (bounded buy) or look to outsource business critical applications, functions or services. T ThoughtWorks explains: “This trade-off has become apparent as the major cloud providers have expanded their range of service offerings. For example, using AWS Secret Management Service can speed up initial development and has the benefit of ecosystem integration, but it will also add more inertia if you ever need to migrate to a different cloud provider than it would if you had implemented, for example, Vault”. Relatedly, ThoughtWorks also identifies a problem with how organizations manage cost. In the report they discuss what they call ‘run cost as architecture fitness function’ which is really an elaborate way of saying - make sure you look at how much things cost. So, for example, don’t use serverless blindly. 
Smarter software purchasing and the move to hybrid cloud

The key trends we've seen in software architecture are about better understanding your software. But this level of insight and understanding doesn't matter if there's no alignment between key decision makers and purchasers. This misalignment can manifest itself in various ways; essentially, it's a symptom of decision makers being disconnected from engineers buried deep in their software. This is by no means a new problem, but with cloud coming to define just about every aspect of software, it's now much easier for confusion to take hold. The best thing about cloud is also the worst thing - the huge scope of opportunities it opens up. It makes decision making a minefield: which provider should we use? What parts of it do we need? What's going to be most cost effective? Of course, hybrid cloud offers a clear way of meeting those issues, but it's by no means a silver bullet. Whatever cloud architecture you have, strong leadership and stakeholder management are essential.

This is something that ThoughtWorks references in its most recent edition of Radar (November 2018). Identifying two trends it calls 'bounded buy' and 'risk commensurate vendor strategy', ThoughtWorks highlights how organizations can find their SaaS of choice shaping their strategy in its own image (bounded buy) or look to outsource business critical applications, functions or services. ThoughtWorks explains: "This trade-off has become apparent as the major cloud providers have expanded their range of service offerings. For example, using AWS Secret Management Service can speed up initial development and has the benefit of ecosystem integration, but it will also add more inertia if you ever need to migrate to a different cloud provider than it would if you had implemented, for example, Vault".

Relatedly, ThoughtWorks also identifies a problem with how organizations manage cost. In the report it discusses what it calls 'run cost as architecture fitness function', which is really an elaborate way of saying: make sure you look at how much things cost. So, for example, don't use serverless blindly. While it might look like a cheap option for smaller projects, your costs could quickly spiral and leave you spending more than you would if you ran the workload on a typical cloud server.

Get to grips with hybrid cloud:

Hybrid Cloud for Architects
Building Hybrid Clouds with Azure Stack

Become an effective software and solutions architect in 2019:

AWS Certified Solutions Architect - Associate Guide
Architecting Cloud Computing Solutions
Hands-On Cloud Solutions with Azure

Software complexity needs are best communicated in a simple language: money

In practice, this takes us all the way back to the beginning - it's simply the financial underbelly of observability. Performance, visibility, resilience - these matter because they directly impact the bottom line. That might sound obvious, but if you're trying to make the case, say, for implementing chaos engineering, or for using any other particular facet of a SaaS offering, communicating with other stakeholders in financial terms can give you buy-in and help to guarantee alignment. If 2019 should be about anything, it's getting closer to this fantasy of alignment. In the end, it will keep everyone happy - engineers and businesses alike.
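To make the 'run cost as architecture fitness function' idea mentioned above concrete, here is a toy sketch that could run in CI. Every figure in it (traffic, per-request price, memory, the server alternative) is an assumption chosen for illustration, not a quote of real pricing:

# Fail the build if the projected monthly bill for the serverless design
# exceeds the flat cost of running the same workload on a plain server.
REQUESTS_PER_MONTH = 50_000_000                # assumed traffic
PRICE_PER_MILLION_REQUESTS = 0.20              # illustrative per-request charge
GB_SECONDS = REQUESTS_PER_MONTH * 0.25 * 0.5   # 250 ms at 512 MB per request
PRICE_PER_GB_SECOND = 0.0000166667             # illustrative compute charge
SERVER_MONTHLY_COST = 350.00                   # the 'typical cloud server' alternative

serverless_cost = (REQUESTS_PER_MONTH / 1_000_000) * PRICE_PER_MILLION_REQUESTS \
                  + GB_SECONDS * PRICE_PER_GB_SECOND

print(f"projected serverless bill: ${serverless_cost:,.2f} / month "
      f"(server alternative: ${SERVER_MONTHLY_COST:,.2f})")
assert serverless_cost <= SERVER_MONTHLY_COST, \
    "run-cost fitness function failed: revisit the architecture or the traffic assumptions"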

The Future of Cloud lies in revisiting the designs and limitations of today’s notion of ‘serverless computing’, say UC Berkeley researchers

Savia Lobo
17 Dec 2018
5 min read
Last week, researchers at UC Berkeley released a research paper titled 'Serverless Computing: One Step Forward, Two Steps Back', which highlights some pitfalls in current serverless architectures. The researchers also explore the challenges that should be addressed to realize the full potential that the cloud can offer to innovative developers.

Cloud isn't being used to the fullest

The researchers describe the cloud as "the biggest assemblage of data capacity and distributed computing power ever available to the general public, managed as a service". Today, the cloud is mostly used as an outsourcing platform for standard enterprise data services. To leverage the actual potential of the cloud, creative developers need programming frameworks; the majority of cloud services are simply multi-tenant, easier-to-administer clones of legacy enterprise data services such as object storage, databases, queueing systems, and web/app servers.

Of late, the buzz around serverless computing - a platform in the cloud where developers simply upload their code and the platform executes it on their behalf as needed, at any scale - is on the rise. This is because public cloud vendors have started offering new programming interfaces under the banner of serverless computing. The researchers support this with a Google search trend comparison in which the term "serverless" recently matched the historic peak of popularity of the phrase "Map Reduce" or "MapReduce".

Chart source: arxiv.org

They point out that the notion of serverless computing is vague enough to allow optimists to project any number of broad interpretations onto what it might mean. Hence, in this paper, they assess the field based on the serverless computing services that vendors are actually offering today, and examine why these services are a disappointment given the cloud's bigger potential.

A serverless architecture based on FaaS (Function-as-a-Service)

Functions-as-a-Service (FaaS) is the commonly used and more descriptive name for the core of serverless offerings from the public cloud providers. Typical FaaS offerings today support a variety of languages (e.g., Python, Java, JavaScript, Go), allow programmers to register functions with the cloud provider, and enable users to declare events that trigger each function. The FaaS infrastructure monitors the triggering events, allocates a runtime for the function, executes it, and persists the results. The user is billed only for the computing resources used during function invocation. Building applications on FaaS requires not only data management in both persistent and temporary storage but also mechanisms to trigger and scale function execution. According to the researchers, cloud providers are quick to emphasize that serverless is not only FaaS, but FaaS supported by a "standard library": the various multi-tenanted, autoscaling services provided by the vendor - for instance, S3 (large object storage), DynamoDB (key-value storage), SQS (queuing services), and more.

Current FaaS solutions are good for simple workloads of independent tasks, such as parallel tasks embedded in Lambda functions or jobs to be run by the proprietary cloud services. When it comes to use cases that involve stateful tasks, however, these FaaS offerings show surprisingly high latency. These realities limit the attractive use cases for FaaS today, discouraging new third-party programs that go beyond the proprietary service offerings from the vendors.
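For readers who haven't used FaaS, the registered function itself is typically tiny. Below is a minimal, illustrative AWS Lambda handler in Python; the event shape and response format assume an API Gateway trigger, and the greeting logic is a placeholder:

import json

def handler(event, context):
    # 'event' carries the triggering payload (here, an assumed API Gateway request);
    # 'context' exposes runtime metadata such as the remaining execution time.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# The provider invokes handler() on each event and bills only for the time used;
# locally you can exercise it like any other function:
if __name__ == "__main__":
    print(handler({"queryStringParameters": {"name": "serverless"}}, None))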
Limitations of the current FaaS offerings

No recoverability

Function invocations are shut down by the Lambda infrastructure automatically after 15 minutes. Lambda may keep the function's state cached in the hosting VM in order to support a 'warm start', but there is no way to ensure that subsequent invocations run on the same VM. Hence functions must be written assuming that state will not be recoverable across invocations.

I/O bottlenecks

Lambdas usually connect to cloud services or shared storage across a network interface, which means moving data across nodes or racks. With FaaS, things appear even worse than the network topology would suggest. Recent studies show that a single Lambda function can achieve on average 538 Mbps of network bandwidth - an order of magnitude slower than a single modern SSD. Worse, AWS appears to pack Lambda functions from the same user together on a single VM, so the limited bandwidth is shared by multiple functions. The result is that as compute power scales up, per-function bandwidth shrinks proportionately. With 20 Lambda functions, average network bandwidth was 28.7 Mbps - 2.5 orders of magnitude slower than a single SSD.

Communication through slow storage

Lambda functions can only communicate through an autoscaling intermediary service. As a corollary, a client of Lambda cannot address the particular function instance that handled the client's previous request: there is no "stickiness" for client connections. Hence maintaining state across client calls requires writing the state out to slow storage and reading it back on every subsequent call.

No specialized hardware

FaaS offerings today only allow users to provision a time slice of a CPU hyperthread and some amount of RAM; in the case of AWS Lambda, one determines the other. There is no API or mechanism to access specialized hardware.

These constraints, combined with some significant shortcomings in the standard library of FaaS offerings, substantially limit the scope of feasible serverless applications. The researchers conclude, "We see the future of cloud programming as far, far brighter than the promise of today's serverless FaaS offerings. Getting to that future requires revisiting the designs and limitations of what is being called 'serverless computing' today." They believe cloud programmers need a programmable framework that goes beyond FaaS to dynamically manage the allocation of resources in order to meet user-specified performance goals for both compute and data. The program analysis and scheduling issues are likely to open up significant opportunities for more formal research, especially for data-centric programs. To learn more about this research in detail, read the complete research paper.

Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
Introducing numpywren, a system for linear algebra built on a serverless architecture

Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon

Prasad Ramesh
14 Dec 2018
4 min read
In a stream hosted earlier this week by The New Stack, Kelsey Hightower, developer advocate at Google Cloud Platform, talked about the serverless and security aspects of Kubernetes. The stream was from KubeCon + CloudNativeCon 2018.

What are you exploring right now with respect to serverless?

There are many managed services these days. Databases, security, and so on are fully managed, i.e., serverless. People have been on this trajectory for a while if you consider DNS, email, and even Salesforce. Now we have serverless since managed services are 'eating that world as well' - that world being the server side world and related workloads.

How are managed services eating the server side world?

If someone has to build and run an API, one approach would be to use Kubernetes: manage the cluster, build the container, run it on Kubernetes, and manage that. Even if it is a fully managed cluster, you may still have to manage the things around Kubernetes. Another approach is to deal with a higher level of abstraction. Serverless is often coupled with FaaS (Function as a Service). There are a lot of abstractions in terms of resources, i.e., resources are abstracted more these days. Hightower talks about a test: "If I walk up to a platform and the delta between me and my code is short, you're probably closer to the serverless mindset." This is different from creating a VM, then installing something, configuring something, and then running some code - that is not really serverless.

Serverless in a Kubernetes context

The point of view should be: can we improve the experience on Kubernetes by adopting some things from serverless? You can add a layer that does functions, so developers can stop worrying about containers and focus on the source. The big picture is - who autoscales the whole cluster? Kubernetes plus an additional layer can't really be called serverless, but it is going in that direction. Over time, if you do enough so that people don't have to think about or even know that Kubernetes is there, you're getting closer to being truly serverless.

Security in Kubernetes

Hightower loves the granular controls of serverless technologies.

Comparing the serverless security model to other models

For a long time in the industry, companies have been trying to follow a least privilege approach - that is, limiting the access of applications so that each can perform only the specific actions it requires. So if one server is compromised and it does not have access to anything else, the effects are isolated. The Kubernetes approach can be different. The cloud providers try to make sure that all the credentials needed to do important things are segmented from the VM, cloud functions, App Engine, or Kubernetes. Imagine if Kubernetes is where everything lives free: instead of one machine being taken down, it is now easier for the whole cluster to be taken down in one shot. This is called 'broadening the blast radius'. If you have Kubernetes and you give it keys to everything in your cluster, then everything is compromised when the Kubernetes API is compromised. Having just one cluster trades off on security.

Another approach to serverless security

A different security model is one where you explicitly give only the credentials that may be needed. There is no scope to ask for any other credentials; it simply will not be allowed. You can also go wrong with serverless, but the system is better defined in ways that limit what can be done. It's easier to secure when the attack surface is smaller.
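To illustrate what 'explicitly give only the credentials that may be needed' can look like in practice, here is a hypothetical least-privilege policy in the AWS IAM JSON style, expressed from Python. The bucket name and the restriction to a single read action are assumptions made for the example:

import json

# A function that only ever reads report files gets exactly that - nothing more.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],                    # one action
            "Resource": "arn:aws:s3:::example-reports/*",  # one bucket (hypothetical)
        }
    ],
}

# If this function is compromised, the blast radius is a single read-only bucket.
print(json.dumps(least_privilege_policy, indent=2))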
For serverless security, the same engineering principles apply; you just have to apply them to these new platforms, which means understanding what those platforms are doing. Admins simply have a different layer of abstraction to which they may add some additional security. The more people use a system, the more flaws are continuously found, and it takes a community to identify flaws and patch them. As a community matures, dedicated security researchers emerge and patch flaws before they can be exploited. To see the complete talk where Hightower shares his views on what he is working on, go to The New Stack YouTube channel.

DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
NeuVector upgrades Kubernetes container security with the release of Containerd and CRI-O run-time support

Microsoft becomes the world's most valuable public company, moves ahead of Apple

Sugandha Lahoti
03 Dec 2018
3 min read
Last week, Microsoft moved ahead of Apple as the world's most valuable publicly traded U.S. company. On Friday, the company closed with a market value of $851 billion, with Apple a few steps short at $847 billion.

The move from Windows to Cloud

Microsoft's success can be attributed to its able leadership under CEO Satya Nadella and his focus on moving away from the flagship Windows operating system and towards cloud-computing services with long-term business contracts. The organization's biggest growth has happened in its Azure cloud platform. Cloud computing now accounts for more than a quarter of Microsoft's revenue, rivaling Amazon, which is also a leading provider. Microsoft is also building new products and features for Azure. Last month, it announced container support for Azure Cognitive Services to build intelligent applications. In October, it invested in Grab to jointly conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud. In September, at Ignite 2018, the company announced major changes and improvements to its cloud offering, including Azure Functions 2.0 with better workload support for serverless, general availability of Microsoft's immutable storage for Azure Storage Blobs, and Azure DevOps. In August, Microsoft added Azure support for NVIDIA GPU Cloud (NGC) and a new governance DApp for Azure.

Wedbush analyst Dan Ives commented: "Azure is still in its early days, meaning there's plenty of room for growth, especially considering the company's large customer base for Office and other products. While the tech carnage seen over the last month has been brutal, shares of (Microsoft) continue to hold up like the Rock of Gibraltar."

Focus on business and values

Microsoft has also prioritized business-oriented services such as Office and other workplace software, as well as newer additions such as LinkedIn and Skype. In 2016, Microsoft bought LinkedIn, the social network for professionals, for $26.2 billion. This year, Microsoft paid $7.5 billion for GitHub, an open software platform used by 28 million programmers. Another reason Microsoft is flourishing is its focus on upholding its founding values without compromising on issues like internet censorship and surveillance. Daniel Morgan, senior portfolio manager for Synovus Trust, says: "Microsoft is outperforming its tech rivals in part because it doesn't face as much regulatory scrutiny as advertising-hungry Google and Facebook, which have attracted controversy over their data-harvesting practices. Unlike Netflix, it's not on a hunt for a diminishing number of international subscribers. And while Amazon also has a strong cloud business, it's still more dependent on online retail."

In a recent episode of Pivot with Kara Swisher and Scott Galloway, the two speakers also talked about why Microsoft is more valuable than Apple. Scott said that Microsoft's success comes from Nadella's decision to diversify Microsoft's business into enough verticals, which is why the company hasn't been as affected by tech stocks' recent decline. He argues that Satya Nadella deserves the title of "tech CEO of the year".

Microsoft wins $480 million US Army contract for HoloLens
Microsoft amplifies focus on conversational AI: Acquires XOXCO; shares guide to developing responsible bots
Microsoft announces official support for Windows 10 to build 64-bit ARM apps

Observability as code, secrets as a service, and chaos katas: ThoughtWorks outlines key engineering techniques to trial and assess

Richard Gall
14 Nov 2018
5 min read
ThoughtWorks has just published vol. 19 of its essential Radar report. As always, it's a vital insight into what's beginning to emerge in the technology field. In the techniques quadrant of its radar, there were some really interesting new entries. Let's take a look at some of them now, so you can better plan and evaluate your roadmap and skill set for 2019.

8 of the best new techniques you should be trialling (according to ThoughtWorks)

1% canary: a way to build better feedback loops

This sounds like a weird one, but the concept is simple. It's essentially about building a quick feedback loop to a tiny segment of customers - say, 1%. This allows engineering teams to learn things quickly and make changes to other aspects of the project as it evolves.

Bounded buy: a smarter way to buy out-of-the-box software solutions

Bounded buy mitigates the scope creep that can cause headaches for businesses dealing with out-of-the-box software. It means those responsible for purchasing software focus only on solutions that are modular, with each 'piece' directly connecting into a particular department's needs or workflow.

Crypto shredding: securing sensitive data

Crypto shredding is a method of securing data that might otherwise be easily replicated or copied. Essentially, it encrypts sensitive data with keys that can themselves be easily removed or deleted - once the key is destroyed, the data becomes unreadable. It adds an extra layer of control over a large data set - a technique that could be particularly useful in a field like healthcare. A toy sketch of the idea follows below.
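Here is a minimal, illustrative sketch of crypto shredding using the Python cryptography package. The record contents are made up, and a real system would of course manage keys per record or per user in a proper key store:

from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt the sensitive record under a dedicated key.
key = Fernet.generate_key()
record = Fernet(key).encrypt(b"patient=jane.doe; diagnosis=confidential")

# The ciphertext can be stored, replicated, and backed up freely.
print(record[:16], "...")

# "Shredding": destroy the key and every copy of it. Without the key,
# the ciphertext above is effectively unreadable, wherever it lives.
del key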
Four key metrics: focus on what's most important to build a high performance team

Building a high performance team can be challenging. Accelerate, the team behind the State of DevOps report, highlighted key drivers that engineers and team leaders should focus on: lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage. According to ThoughtWorks, "each metric creates a virtuous cycle and focuses the teams on continuous improvement."

Observability as code: breaking through the limits of traditional monitoring tools

Observability has emerged as a bit of a buzzword over the last 12 months. But in the context of microservices and increased complexity in software architecture, it is nevertheless important. However, the means through which you 'do' observability - a range of monitoring tools and dashboards - can be limiting when it comes to making adjustments and replicating dashboards. This is why treating observability as code is going to become increasingly important. It makes sense: if infrastructure as code is the dominant way we think about building software, why shouldn't it be the way we monitor it too?

Run cost as architecture fitness function

There's a wide assumption that serverless can save you money. This is true when you're starting out, or want to do something quickly, but it's less true as you scale up. If you're using serverless functions heavily, you're likely to be paying a lot - more than if you had a slightly less fashionable cloud or on-premise server. To combat this complacency, you should watch how much services cost against the benefit they deliver. Seems obvious, but easy to miss if you've just got excited about going serverless.

Secrets as a service

Without wishing to dampen what sounds incredibly cool, secrets as a service are ultimately just elaborate password managers. They can help organizations more easily decouple credentials and API keys from their source code, a move which should ensure improved security - and simplicity. By using credential rotation, organizations can be much better prepared to tackle and mitigate any security issues. AWS has Secrets Manager, while HashiCorp's Vault offers similar functionality.

Security chaos engineering

In the last edition of Radar, security chaos engineering was in the assess phase - which means ThoughtWorks thinks it's worth looking at, but perhaps too early to deploy. With volume 19, security chaos engineering has moved into trial. Clearly, while chaos engineering more broadly has seen slower adoption, it would seem that over the last 12 months the security field has taken chaos engineering to heart.

2 new software engineering techniques to assess

Chaos katas

If chaos engineering is finding it hard to gain mainstream adoption, perhaps chaos katas are the way forward. This is essentially a technique that helps engineers deploy chaos practices in their respective domains using the training approach known as kata - a Japanese word that simply refers to a set of choreographed movements. In this context, the 'katas' are a set of code patterns that implement failures in a structured way, which engineers can then identify and explore. This is essentially a bottom-up way of doing chaos engineering that also gives engineers a deeper insight into their software infrastructure.

Infrastructure configuration scanner

The question of who should manage your infrastructure is still a tricky one, with plenty of conflicting perspectives. From a productivity and agility perspective, however, putting the infrastructure in the hands of engineers makes a lot of sense. Of course, this could feel like an extra burden - but with an infrastructure configuration scanner, like Scout2 or Watchmen, engineers can ensure that everything is configured correctly.

Software engineering techniques need to maintain simplicity as complexity increases

There's clearly a diverse range of techniques on the ThoughtWorks Radar. Ultimately, however, the picture that emerges is one where efficiency and observability are key. A crucial part of software engineering will be managing increased complexity and developing new tools and processes to instil some degree of simplicity and clarity. Was there anything ThoughtWorks missed?

4 reasons IBM bought Red Hat for $34 billion

Richard Gall
29 Oct 2018
8 min read
The news that IBM is to buy Red Hat - the enterprise Linux distribution - shocked the software world this weekend. It took many people by surprise because it signals a weird new world where the old guard of tech conglomerates - almost prehistoric in the history of the industry - are revitalizing themselves by diving deep into the open source world for pearls. So, why did IBM decide to buy Red Hat? And why has it spent so much to do it?

Why did IBM decide to buy Red Hat?

For IBM this was an expensive step into a new world. But they wouldn't have done it without good reason. And although it's hard to center on one single reason that forced IBM's decision makers to put money on the table, there are certainly a combination of factors that meant this move simply makes sense from IBM's perspective. Here are 4 reasons why IBM is buying Red Hat:

Competing in the cloud market
Disappointment around the success of IBM Watson
Catching up with Microsoft
To help provide support for an important but struggling Linux organization

Let's take a look at each of these in more detail.

IBM wants to get serious about cloud computing

IBM has been struggling in a competitive cloud market. It's not exactly out of the running, with some reports placing it in third after AWS and Microsoft Azure, and others in fourth, with Google's cloud offering above it. But wherever the company stands, it's true that it is not growing at anywhere near the rate of its competitors. Put simply, if it didn't act, IBM would lose significant ground in the cloud computing race. It's no coincidence that cloud was right at the top of the IBM press release. Ginni Rometty, IBM Chairman, President and Chief Executive Officer, is quoted as saying: "The acquisition of Red Hat is a game-changer. It changes everything about the cloud market... IBM will become the world's #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses." Clearly, IBM wants to bring itself up to date. As The Register wrote when it covered the story on Sunday, IBM "really, really, really wants to transform itself into a cool and trendy hybrid cloud platform, rather than be seen eternally as a maintainer of legacy mainframes and databases."

But why buy Red Hat?

You might still be thinking: well, why does IBM need Red Hat to do all this? Can't it just do it itself? It ultimately comes down to expanding what businesses can do with cloud - and bringing an open source company into the IBM family will allow IBM to deliver much more effectively on this than it has before. AWS appears to implicitly understand that features and capabilities are everything when it comes to cloud - to be truly successful, IBM needs to adopt both an open source mindset and toolset to innovate at a fast pace. This is what Rometty is referring to when she talks about "the next chapter of the cloud." This is where cloud becomes more about "extracting more data and optimizing every part of the business, from supply chains to sales" than storage space.

IBM's artificial intelligence product, Watson, hasn't taken off

IBM is a company with its proverbial finger in many pies. Its artificial intelligence product, Watson, hasn't had the success that the company expected. Instead, it has suffered a number of disappointing setbacks this year, resulting in Deborah DiSanzo, the head of Watson Health, stepping down just a week ago.
One of the biggest stories was MD Anderson Cancer Center stepping away from a contract with IBM, after a report by analysts at investment bank Jefferies claimed that the software was "not ready for human investigational or clinical use." But there are other stories too - all of which come together to paint a picture of a project that doesn't live up to or deliver on its hype. By contrast, AI has been most impactful as part of a cloud product. Just look at the furore around the AI tools within AWS - there's no way government agencies and the military would be quite so interested in the product if it wasn't packaged in a way that could be easily deployed. AWS, unlike IBM, understood that AI is only worth the hype if organizations can use it easily. In effect, we're past the period where AI deserves hype on its own - it needs to be part of a wider suite of capabilities that enable innovation and invention with minimal friction. If IBM is to offer Watson's capabilities to a wide range of users, all with varying use cases, it needs to think much more about how the end product can deliver the biggest impact for each of those cases.

IBM is playing catch up with Microsoft in terms of open source

IBM's move might be surprising, but in the context of Microsoft's transformation over the last decade, it's part of a wider pattern. The only difference is that Microsoft's attitude to open source has slowly thawed, whereas IBM has gone all out, taking an unexpected leap into the unknown. It's a neat coincidence that this was the weekend that GitHub officially became part of Microsoft. It's as if IBM saw Microsoft basking in the glow of an open source embrace and thought, "we want that." Envy aside, there are serious implications. The future is now quite clearly open source - in fact, it has been for some time. You might even say that Microsoft hasn't been as quick as it could have been. But for IBM, open source has been seen simply as a tasty slice of the software pie - great, but not the whole thing. This was a misunderstanding - open source is everything. It almost doesn't even make sense to talk about open source as if it were distinct from everything else - it is software today. It's defining the future. Joseph Jacks, the founder of Open Source Capital, said that "IBM buying @RedHat is not about dominating the cloud. It is about becoming an OSS company. The largest proprietary software and tech companies in the world are now furiously rushing towards the future. An open future. An open source software driven future. OSS eats everything." https://twitter.com/asynchio/status/1056693588640194560

IBM is heavily invested in Linux - and Red Hat isn't exactly thriving

Although open source might be the dominant mode of software in 2018, there are a few murmurs about its sustainability and resilience. Despite being central to just about everything we build and use when it comes to software, from a business perspective it isn't exactly thriving. Red Hat is a brilliant case in point. Despite being one of the first and most successful open source software businesses, providing free, open source software to customers in return for a support fee, its revenues are down. Shares fell 14% in June following a disappointing financial forecast - and have fallen further since then.
This piece in TechCrunch, almost 5 years old, does a good job of explaining the relative success of Red Hat, as well as its limitations: "When you compare the market cap and revenue of Red Hat to Microsoft or Amazon or Oracle, even Red Hat starts to look like a lukewarm success. The overwhelming success of Linux is disproportionate to the performance of Red Hat. Great for open source, a little disappointing for Red Hat." From this perspective, the stage is set for an organisation like IBM to come in and start investing in Red Hat as a foundational component of its future product and software strategy. Given that both organizations are heavily invested in Linux, this could be a really important relationship in supporting the project in the future. And although a multi-billion dollar acquisition might not look like open source in action, it might also be one of the only ways that it's going to survive and thrive in the future.

Thanks to Amarabha Banerjee, Aarthi Kumaraswamy, and Amey Varangaonkar for their help with this post.

Update on 9th July, 2019

As per reports from Fortune, IBM on Tuesday morning closed its $34 billion acquisition of Red Hat, which was announced last October. The pricey deal, which paid Red Hat owners a hefty premium of more than 60%, marks IBM CEO Ginni Rometty's biggest bet yet in transforming her 108-year-old technology company. In an interview Tuesday morning, she said some tech analysts have assumed the move to the cloud would lead to a "winner take all" scenario, where one giant platform - Amazon Web Services? - ends up with all the business. Read the full story here.

IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever

Sugandha Lahoti
29 Oct 2018
4 min read
In probably the biggest open source acquisition ever, IBM has acquired all of the issued and outstanding common shares of Red Hat for $190.00 per share in cash, representing a total enterprise value of approximately $34 billion. Whether this deal is more of a business proposition than a contribution to the community, however, remains an open question. Red Hat has been struggling on the market recently: it missed its most recent revenue estimates and its guidance fell below Wall Street targets. Prior to this deal, it had a market capitalization of about $20.5 billion. With this deal, Red Hat may soon be able to right its sinking ship. It will also remain a distinct unit within IBM. The company will continue to be led by Jim Whitehurst, Red Hat's CEO, and by Red Hat's current management team. Jim Whitehurst will also join IBM's senior management team and report to Ginni Rometty, IBM Chairman, President, and Chief Executive Officer.

Why is Red Hat joining forces with IBM?

In the announcement, Jim assured that IBM's acquisition of Red Hat will help the company accelerate without compromising its culture and policies. He said, "Open source is the default choice for modern IT solutions, and I'm incredibly proud of the role Red Hat has played in making that a reality in the enterprise." He added, "Joining forces with IBM will provide us with a greater level of scale, resources, and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience--all while preserving our unique culture and unwavering commitment to open source innovation."

What is IBM gaining from this acquisition?

IBM believes this acquisition to be a game changer. "It changes everything about the cloud market," said Ginni. "IBM will become the world's #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses." IBM and Red Hat will accelerate hybrid multi-cloud adoption across all companies. Together, they plan to "help clients create cloud-native business applications faster, drive greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management."

"IBM is committed to being an authentic multi-cloud provider, and we will prioritize the use of Red Hat technology across multiple clouds," said Arvind Krishna, Senior Vice President, IBM Hybrid Cloud. "In doing so, IBM will support open source technology wherever it runs, allowing it to scale significantly within commercial settings around the world."

IBM assures that it will continue to build and enhance Red Hat's partnerships with major cloud providers. It will also remain committed to Red Hat's open governance, open source contributions, participation in the open source community, and development model. The company is keen on preserving the independence and neutrality of Red Hat's open source development culture and go-to-market strategy. The news was well received by top Red Hat decision makers, who embraced it with open arms. However, ZDNet reported that many Red Hat employees were skeptical:

"I can't imagine a bigger culture clash."
"I'll be looking for a job with an open-source company."
"As a Red Hat employee, almost everyone here would prefer it if we were bought out by Microsoft."
Reactions to the acquisition on Twitter are also varied:

https://twitter.com/samerkamal/status/1056611186584604672
https://twitter.com/pnuojua/status/1056787520845955074
https://twitter.com/CloudStrategies/status/1056666824434020352
https://twitter.com/svenpet/status/1056646295002247169

Read more about the news on IBM's newsroom.

Red Hat infrastructure migration solution for proprietary and siloed infrastructure
IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
IBM Watson announces pre-trained AI tools to accelerate IoT operations

Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel

Natasha Mathur
22 Oct 2018
4 min read
It was last month that Linus Torvalds took a break from kernel development. During his break, he assigned Greg Kroah-Hartman as Linux's temporary leader, who went ahead and released Linux 4.19 today at the ongoing Linux Foundation Open Source Summit in Edinburgh, after eight release candidates. The new release includes features such as a new AIO-based polling interface, L1TF vulnerability mitigations, the block I/O latency controller, time-based packet transmission, and the CAKE queuing discipline, among other minor changes.

The Linux 4.19 kernel release announcement is slightly different and longer than usual: apart from mentioning major changes, it also talks about welcoming newcomers by helping them learn things with ease. "By providing a document in the kernel source tree that shows that all people, developers, and maintainers alike, will be treated with respect and dignity while working together, we help to create a more welcome community to those newcomers, which our very future depends on if we all wish to see this project succeed at its goals," writes Kroah-Hartman. He also welcomed Linus back into the game: "And with that, Linus, I'm handing the kernel tree back to you. You can have the joy of dealing with the merge window."

Let's discuss the features in the Linux 4.19 kernel.

AIO-based polling interface

A new polling API based on the asynchronous I/O (AIO) mechanism was posted by Christoph Hellwig earlier this year. AIO enables submission of I/O operations without waiting for their completion; polling is a natural addition to AIO, since the point of polling is to avoid waiting for operations to complete. The Linux 4.19 kernel release comes with AIO poll operations that operate in "one-shot" mode: once a poll notification is generated, a new IOCB_CMD_POLL IOCB must be submitted for that file descriptor. To support AIO-based polling, the existing poll() method in struct file_operations -

    int (*poll) (struct file *file, struct poll_table_struct *table);

- which supports the polling system calls in previous kernels, is split into separate file_operations methods, adding these two new entries to that structure:

    struct wait_queue_head *(*get_poll_head)(struct file *file, int mask);
    int (*poll_mask) (struct file *file, int mask);

L1 terminal fault vulnerability mitigations

The Meltdown CPU vulnerability, first disclosed earlier this year, allowed unprivileged attackers to easily read arbitrary memory. Then the "L1 terminal fault" (L1TF) vulnerability (also known as Foreshadow) was disclosed, which brought those threats back, including easy attacks against host memory from inside a guest. Mitigations are available in the Linux 4.19 kernel and have been merged into the mainline kernel; however, they can be expensive for some users.

The block I/O latency controller

Large data centers make use of control groups to balance the use of available computing resources among competing users. Block I/O bandwidth is one of the most important resources for specific types of workloads, but the kernel's I/O controller was not a complete solution to the problem. This is where the block I/O latency controller comes into the picture. The Linux 4.19 kernel now has a block I/O latency controller that regulates latency (instead of bandwidth) at a relatively low level in the block layer. When in use, each control group directory contains an io.latency file that sets the parameters for that group. A line is written to that file following this pattern:

    major:minor target=target-time

Here major and minor identify the specific block device of interest, and target-time is the maximum latency that this group should experience (in milliseconds).
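As a purely hypothetical sketch, configuring such a target from user space is just a matter of writing that line into the group's io.latency file; the cgroup name, mount point, and device numbers below are invented for illustration:

# Give the "web" control group a 10 ms latency target on block device 8:0.
# Assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup and root privileges.
cgroup_file = "/sys/fs/cgroup/web/io.latency"

with open(cgroup_file, "w") as f:
    f.write("8:0 target=10\n")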
Time-based packet transmission

Time-based packet transmission comes with a new socket option and a new qdisc designed to buffer packets until a configurable time before their deadline (their tx times). Packets intended for timed transmission should be sent with sendmsg(), with a control-message header (of type SCM_TXTIME) which indicates the transmission deadline as a 64-bit nanoseconds value.

CAKE queuing discipline

The "Common Applications Kept Enhanced" (CAKE) queuing discipline in Linux 4.19 sits between the higher-level protocol code and the network interface and decides which packets need to be dispatched at any given time. It comprises four different components designed to make things work well on home links. It prevents the overfilling of buffers and improves various aspects of networking performance, such as bufferbloat reduction and queue management. For more information, check out the official announcement.

The kernel community attempting to make Linux more secure
KUnit: A new unit testing framework for Linux Kernel
Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux