
How-To Tutorials - Cloud & Networking

770 Articles
Cloudflare terminates services to 8chan following yet another set of mass shootings in the US. Tech awakening or liability avoidance?

Sugandha Lahoti
06 Aug 2019
9 min read
Update: Jim Watkins, the owner of 8chan, has spoken against the ongoing backlash in a defensive video statement uploaded on 6th August on YouTube. "My company takes a firm stand in helping law enforcement, and within minutes of these two tragedies, we were working with FBI agents to find out what information we could to help in their investigations. There are about 1 million users of 8chan. 8chan is an empty piece of paper for writing on. It is disturbing to me that it can be so easily shut down. Over the weekend the domain name service for 8chan was abruptly terminated by the provider Cloudflare," he states in the video.

He adds, "First of all the El Paso shooter posted on Instagram, not 8chan. Later someone uploaded a manifesto; however, that manifesto was not uploaded by the Walmart shooter. It is unfortunate that this place of free speech has temporarily been removed. We are working to restore service. It is clearly a political move to remove 8chan from Cloudflare; it has dispersed a peacefully assembled group of people."

Watkins went on to call Cloudflare's decision 'cowardly'. He said, "Contrary to the unfounded claim by Mr. Prince of Cloudflare, 8chan is a lawful community abiding by the laws of the United States and enforced in the Ninth Circuit Court. His accusation has caused me tremendous damage. In the meantime, I wish his company the best and hold no animosity towards him or his cowardly and not thought-out actions against 8chan."

Saturday witnessed two horrific mass shooting tragedies: one in which a gunman shot at least 20 people at a sprawling Walmart shopping complex in El Paso, Texas; the other in Dayton, Ohio, at the entrance of Ned Peppers Bar, where ten people were killed, including the perpetrator, and at least 27 others were injured. The gunman in the El Paso shooting has been identified as Patrick Crusius, according to CNN sources. He appears to have been inspired by the online forum known as 8chan, an online message board that is home to online extremists who share racist and anti-Semitic conspiracy theories.

According to police officials, a four-page document was posted to 8chan 20 minutes before the shootings that they believe was written by Crusius. The post said, "I'm probably going to die today." It blamed white nationalists and immigrants for taking away jobs and spewed racist hatred towards immigrants and Hispanics.

The El Paso post is not an isolated incident; 8chan has been filled with unmoderated violent and extremist content over time. Nearly the same thing happened on 8chan before the terror attack in Christchurch, New Zealand. In his post, the El Paso shooter referenced the Christchurch incident, saying he was inspired by the Christchurch content on 8chan, which glorified the previous massacre. The suspected killer in the synagogue shootings in Poway, California, also posted a hate-filled "open letter" on 8chan. In March this year, Australian telecom company Telstra denied millions of Australians access to the websites 4chan, 8chan, Zero Hedge, and LiveLeak as a reaction to the Christchurch mosque shootings.

Cloudflare first defends 8chan citing 'moral obligations' but later cuts all ties

Following this disclosure, Cloudflare, which provides internet infrastructure services to 8chan, continued to defend hosting 8chan, calling it their 'moral obligation' to provide 8chan their services. Keeping 8chan within its network is a "moral obligation", said Cloudflare, adding: "We, as well as all tech companies, have an obligation to think about how we solve real problems of real human suffering and death. What happened in El Paso today is abhorrent in every possible way, and it's ugly, and I hate that there's any association between us and that … For us, the question is which is the worse evil? Is the worse evil that we kick the can down the road and don't take responsibility? Or do we get on the phone with people like you and say we need to own up to the fact that the internet is home to many amazing things and many terrible things and we have an absolute moral obligation to deal with that."

https://twitter.com/slpng_giants/status/1158214314198745088
https://twitter.com/iocat/status/1158218861658791937

Cloudflare has been under the spotlight over the past few years for continuing to work with websites that foster hate. Before 8chan, in 2017, Cloudflare had to discontinue services to the neo-Nazi blog The Daily Stormer, after the terror at Charlottesville. However, the Daily Stormer continues to run today, having moved to a different infrastructure service, with allegedly more readers than ever.

After an intense public and media backlash over the weekend, Cloudflare announced that it would completely stop providing support for 8chan. Cloudflare is also readying for an initial public offering in September, which may have been a factor in its decision to cut ties with 8chan. In a blog post today, the company explained the decision: "We just sent notice that we are terminating 8chan as a customer effective at midnight tonight Pacific Time. The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths."

Cloudflare has also cut off 8chan's access to its DDoS protection service. However, this will only have a short-term impact; 8chan can always find another cloud partner and resume operations. Cloudflare acknowledges this as well: "While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online. It does nothing to address why mass shootings occur. It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we've solved our own problem, but we haven't solved the Internet's."

The company added, "We feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often," adding that this is not "due to some conception of the United States' First Amendment," since Cloudflare is a private company (and most of its customers, and more than half of its revenue, are outside the United States). Instead, Cloudflare "will continue to engage with lawmakers around the world as they set the boundaries of what is acceptable in those countries through due process of law. And we will comply with those boundaries when and where they are set."

Founder of 8chan wants the site to be shut off

8chan founder Fredrick Brennan also appreciated Cloudflare's decision to block the site. After the gruesome El Paso shootings, he told the Washington Post that the site's owners should "do the world a favor and shut it off." However, he told BuzzFeed News that shutting down 8chan wouldn't entirely stop the extremism we're now seeing, but it would make it harder for them to organize.

https://twitter.com/HW_BEAT_THAT/status/1158194175755485191

In a March interview with The Wall Street Journal, he expressed his regrets over his role in the site's creation and warned that the violent culture that had taken root on 8chan's boards could lead to more mass shootings. Brennan founded the site in 2011 and announced his departure from the company in July 2016. 8chan is owned by Jim Watkins and run by his son, Ron, who posted on Twitter that 8chan will be moving to another service ASAP. He has also resisted calls to moderate or shut down the site. On Sunday, a banner at the top of 8chan's home page read, "Welcome to 8chan, the Darkest Reaches of the Internet."

https://twitter.com/CodeMonkeyZ/status/1158202303096094720

Cloudflare acted too late, too little

Cloudflare's decision to simply block 8chan was not seen as an adequate response by some, who say Cloudflare should have acted earlier. 8chan was known to have enabled child pornography in 2015 and, as a result, was removed from Google Search. Coupled with the Christchurch mosque and the Poway synagogue shootings earlier in the year, there was increased pressure on those providing 8chan's internet and financial service infrastructure to terminate their support.

https://twitter.com/BinaryVixen899/status/1158216197705359360

Laurie Voss, the cofounder of npmjs, called out Cloudflare, and subsequently other content sites (Facebook, Twitter), for shirking responsibility under the guise of being infrastructure companies that therefore cannot enforce content standards.

https://twitter.com/seldo/status/1158204950595420160
https://twitter.com/seldo/status/1158206331662323712

"Facebook, Twitter, Cloudflare, and others pretend that they can't. They can. They just don't want to."

https://twitter.com/seldo/status/1158206867438522374

"I am super, super tired of companies whose profits rely on providing maximum communication with minimum moderation pretending this is some immutable law and not just the business model they picked," he tweeted. Others also agreed that Cloudflare's statement eschews responsibility.

https://twitter.com/beccalew/status/1158196518983045121
https://twitter.com/slpng_giants/status/1158214314198745088

Voxility, 8chan's hardware provider, also bans the site

Web services company Voxility has also banned 8chan and its new host Epik, which had been leasing web space from it. Epik's website remains accessible, but 8chan now returns an error message. "As soon as we were notified of the content that Epik was hosting, we made the decision to totally ban them," Voxility business development VP Maria Sirbu told The Verge. Sirbu said it was unlikely that Voxility would work with Epik again. "This is the second situation we've had with the reseller and this is not tolerable," she said.

https://twitter.com/alexstamos/status/1158392795687575554

Does de-platforming even work?

De-platforming, or banning the people who spread extremist content, is not a complete solution, since they will eventually migrate to other platforms and still be able to circulate their ideology. Closing 8chan is not the solution to the bigger problem of controlling racism and extremism; close one 8chan and another 20chan will sprout. "8chan is no longer a refuge for extremist hate — it is a window opening onto a much broader landscape of racism, radicalization, and terrorism. Shutting down the site is unlikely to eradicate this new extremist culture because 8chan is anywhere. Pull the plug, it will appear somewhere else, in whatever locale will host it. Because there's nothing particularly special about 8chan, there are no content algorithms, hosting technology immaterial. The only thing radicalizing 8chan users are other 8chan users," Ryan Broderick from BuzzFeed wrote. A group of users told BuzzFeed that it's now common for large 4chan threads to migrate over into Discord servers before they 404.

After Cloudflare, Amazon is beginning to face public scrutiny, as 8chan's operator Jim Watkins sells audiobooks on Amazon.com and Audible.

https://twitter.com/slpng_giants/status/1158213239697747968

Facebook will ban white nationalism, and separatism content in addition to white supremacy content.
8 tech companies and 18 governments sign the Christchurch Call to curb online extremism; the US backs off.
How social media enabled and amplified the Christchurch terrorist attack

Docker 19.03 introduces an experimental rootless Docker mode that helps mitigate vulnerabilities by hardening the Docker daemon

Savia Lobo
29 Jul 2019
3 min read
Tõnis Tiigi, a software engineer at Docker and a maintainer of the Moby/Docker engine, explained in a recent post on Medium how users can now leverage Docker's non-root user privileges with the Docker 19.03 release. He explains that the Docker engine provides functionality that is often tightly coupled to the Linux kernel. For instance, to create namespaces in Linux, users need privileged capabilities, because a component of container isolation is based on Linux namespaces. "Historically Docker daemon has always needed to be started by the root user," Tiigi explains.

Docker is looking to change this by introducing rootless support. With the help of Moby and BuildKit maintainer Akihiro Suda, "we added rootless support to BuildKit image builder in 2018 and from February 2019 the same rootless support was merged to Moby upstream and is available for all Docker users to try in experimental mode," Tiigi mentions. The rootless mode will help reduce the security footprint of the daemon and expose Docker capabilities to systems where users cannot gain root privileges.

Rootless Docker and its benefits

As the name suggests, rootless mode in Docker allows a user to run the Docker daemon, including the containers, as a non-root user on the host. The benefit of this is that even if the daemon is compromised, the attacker will not be able to gain root access to the host. Akihiro Suda, in his presentation "Hardening Docker daemon with Rootless mode", explains that rootless mode does not entirely fix vulnerabilities and misconfigurations, but it can mitigate attacks. With rootless mode, an attacker would not be able to access files owned by other users, modify firmware and kernel with undetectable malware, or perform ARP spoofing.

A few caveats to the rootless Docker mode

Docker engineers say the rootless mode cannot be considered a replacement for the complete suite of Docker engine features. Some limitations of rootless mode include:

- cgroups resource controls, AppArmor security profiles, checkpoint/restore, overlay networks, etc. do not work in rootless mode.
- Exposing ports from containers currently requires a manual socat helper process.
- Only Ubuntu-based distros support overlay filesystems in rootless mode.
- Rootless mode is currently only provided in nightly builds that may not be as stable as you are used to.

As a lot of the Linux features that Docker needs require privileged capabilities, rootless mode takes advantage of user namespaces. "User namespaces map a range of user ID-s so that the root user in the inner namespace maps to an unprivileged range in the parent namespace. A fresh process in user namespace also picks up a full set of process capabilities," Tiigi explains.

In the recent release of the experimental rootless mode on GitHub, engineers mention that rootless mode allows running dockerd as an unprivileged user, using user_namespaces(7), mount_namespaces(7), and network_namespaces(7). Users need to run dockerd-rootless.sh instead of dockerd:

$ dockerd-rootless.sh --experimental

As rootless mode is experimental, users need to always run dockerd-rootless.sh with --experimental.

To know more about the Docker rootless mode in detail, read Tõnis Tiigi's Medium post. You can also check out Akihiro Suda's presentation Hardening Docker daemon with Rootless mode.

Docker announces collaboration with Microsoft's .NET at DockerCon 2019
Announcing Docker Enterprise 3.0 Public Beta!
Are Debian and Docker slowly losing popularity?
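For readers who want to try rootless mode themselves, here is a minimal sketch, assuming the experimental nightly build described above; the install script URL and the vfs storage-driver flag follow the nightly packaging as documented at the time, and details may vary on your distribution.

    # Install the rootless Docker nightly as an ordinary, unprivileged user
    $ curl -fsSL https://get.docker.com/rootless | sh

    # Point the client at the rootless daemon's socket in your user runtime dir
    $ export PATH=$HOME/bin:$PATH
    $ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

    # Start the daemon; --experimental is mandatory while rootless mode is
    # experimental, and vfs is needed on distros without rootless overlay support
    $ dockerd-rootless.sh --experimental --storage-driver vfs

    # From another shell: root inside the container maps to an unprivileged
    # range on the host, as the uid_map shows
    $ docker run --rm alpine cat /proc/self/uid_map

The last command is a quick way to see the user-namespace mapping Tiigi describes: uid 0 inside the container corresponds to a non-root range on the host, so a container escape does not yield host root.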

What is HCL (Hashicorp Configuration Language), how does it relate to Terraform, and why is it growing in popularity?

Savia Lobo
18 Jul 2019
6 min read
HCL (HashiCorp Configuration Language) is rapidly growing in popularity. Last year's Octoverse report by GitHub showed it to be the second fastest growing language on the platform, more than doubling in contributors since 2017 (Kotlin was top, with GitHub contributors growing 2.6 times). However, despite its growth, it hasn't had the level of attention that other programming languages have had. One of the reasons for this is that HCL is a configuration language. It's also part of a broader ecosystem of tools built by cloud automation company HashiCorp that largely centers around Terraform.

What is Terraform?

Terraform is an infrastructure-as-code tool that makes it easier to define and manage your cloud infrastructure. HCL is simply the syntax that allows you to better leverage its capabilities. It gives you a significant degree of control over your infrastructure in a way that's more 'human-readable' than other configuration languages such as YAML and JSON.

HCL and Terraform are both important parts of the DevOps world. They are not only built for a world that has transitioned to infrastructure-as-code, but also one in which this transition demands more from engineers. By making HCL a more readable, higher-level configuration language, the language can better facilitate collaboration and transparency between cross-functional engineering teams.

With all of this in mind, HCL's growing popularity can be taken to indicate broader shifts in the software development world. HashiCorp clearly understands them very well and is eager to help drive them forward. But before we go any further, let's dive a bit deeper into why HCL was created, how it works, and how it sits within the Terraform ecosystem.

Why did HashiCorp create HCL?

The development of HCL was born out of HashiCorp's experience of trying multiple different options for configuration languages. "What we learned," the team explains on GitHub, "is that some people wanted human-friendly configuration languages and some people wanted machine-friendly languages." The HashiCorp team needed a compromise: something that could offer a degree of flexibility and accessibility.

As the team outlines their thinking, it's clear to see what the drivers behind HCL actually are. JSON, they say, "is fairly verbose and... doesn't support comments", while YAML is viewed as too complex for beginners to properly parse and use effectively. Traditional programming languages also pose problems. Again, they're too sophisticated and demand too much background knowledge from users to make them a truly useful configuration language. Put together, this underlines the fact that with HCL, HashiCorp wanted to build something that is accessible to engineers of different abilities and skill sets, while also being clear enough to enable appropriate levels of transparency between teams. It is "designed to be written and modified by humans."

Listen: Uber engineer Yuri Shkuro talks distributed tracing and observability on the Packt Podcast

How does the HashiCorp Configuration Language work?

HCL is not a replacement for the likes of YAML or JSON. The team's aim "is not to alienate other configuration languages. It is," they say, "instead to provide HCL as a specialized language for our tools, and JSON as the interoperability layer." Effectively, it builds on some of the things you can get with JSON, but reimagines them in the context of infrastructure and application configuration.

According to the documentation, we should see HCL as a "structured configuration language rather than a data structure serialization language." HCL is "always decoded using an application-defined schema," which gives you a level of flexibility. In practice, this means the application is always at the center of the language; you don't have to work around it. If you want to learn more about the HCL syntax and how it works at a much deeper level, the documentation is a good place to start, as is this page on GitHub.

Read next: Why do IT teams need to transition from DevOps to DevSecOps?

The advantages of HCL and Terraform

You can't really talk about the advantages of HCL without also considering the advantages of Terraform. Indeed, while HCL might well be a well-designed configuration language that's accessible and caters to a wide range of users and use cases, it's only in the context of Terraform that its growth really makes sense.

Why is Terraform so popular?

To understand the popularity of Terraform, you need to place it in the context of current trends and today's software marketplace for infrastructure configuration. Terraform is widely seen as a competitor to configuration management tools like Chef, Ansible, and Puppet. However, Terraform isn't exactly a configuration management tool; it's more accurate to call it a provisioning tool (config management tools configure software on servers that already exist, while provisioning tools set up new ones). This is important because, thanks to Docker and Kubernetes, the need for configuration has radically changed; you might even say that it's no longer there. If a Docker container is effectively self-sufficient, with all the configuration files it needs to run, then the need for 'traditional' configuration management begins to drop.

Of course, this isn't to say that one tool is intrinsically better than any other. There are use cases for all of these types of tools. But the fact remains that Terraform suits use cases that are starting to grow. Part of this is due to the rise of cloud agnosticism. As multi-cloud and hybrid cloud architectures become prevalent, DevOps teams need tools that let them navigate and manage resources across different platforms. Although all the major public cloud vendors have native tools for managing resources, these can sometimes be restrictive. The templates they offer can also be difficult to reuse. Take Azure ARM templates, for example: they can only be used to create Azure resources. In contrast, Terraform allows you to provision and manage resources across different cloud platforms.

Conclusion: Terraform and HCL can make DevOps more accessible

It's not hard to see why ThoughtWorks sees Terraform as such an important emerging technology. (In the last edition of the ThoughtWorks Radar, it claimed that now is the time to adopt it.) But it's also important to understand that HCL is an important element in the success of Terraform. It makes infrastructure-as-code not only something that's accessible to developers who might have previously only dipped their toes in operations, but also something that can be more collaborative, transparent, and observable for team members. The DevOps picture will undoubtedly evolve over the next few years, but it would appear that HashiCorp is going to have a big part to play in it.
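As a footnote to the readability argument above, here is a minimal sketch of an HCL configuration and the Terraform workflow around it. The provider, region, and resource names are illustrative placeholders, not anything from the article.

    # Write a minimal Terraform configuration in HCL (names are illustrative)
    $ cat > main.tf <<'EOF'
    # Variables make the configuration reusable across teams
    variable "bucket_name" {
      type    = string
      default = "example-logs"
    }

    provider "aws" {
      region = "us-east-1"
    }

    # Declare what should exist; Terraform works out how to create or change it
    resource "aws_s3_bucket" "logs" {
      bucket = var.bucket_name
    }
    EOF

    # Download provider plugins, preview the change, then apply it
    $ terraform init
    $ terraform plan
    $ terraform apply

Note how the block structure reads almost like prose and supports comments, which is exactly the middle ground between human-friendly and machine-friendly formats that the HashiCorp team describes.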

Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range

Savia Lobo
15 Jul 2019
6 min read
Last month, the team behind the Linux kernel announced a patch that allows 0.0.0.0/8 as a valid address range. This patch allows these 16 million new IPv4 addresses to appear within a box or on the wire. The aim is to use 0/8 as global unicast, as this range was never used, except for 0.0.0.0 itself.

A post written by Dave Taht, director of the Make-Wifi-Fast project, and committed by David Stephen Miller, an American software developer working on the Linux kernel, mentions that the use of 0.0.0.0/8 has been prohibited since the early internet due to two issues.

First, an interoperability problem with BSD 4.2 in 1984, which was fixed in BSD 4.3 in 1986. "BSD 4.2 has long since been retired," the post mentions. The second issue is that addresses of the form 0.x.y.z were initially defined only as a source address in an ICMP datagram, indicating "node number x.y.z on this IPv4 network," used by nodes that know their address on their local network but do not yet know their network prefix, in RFC0792 (page 19). The use of 0.x.y.z was later repealed in RFC1122 because the original ICMP-based mechanism for learning the network prefix was unworkable on many networks such as Ethernet, whose longer addresses would not fit into the 24 "node number" bits. Modern networks use reverse ARP (RFC0903), BOOTP (RFC0951), or DHCP (RFC2131) to find their full 32-bit address and CIDR netmask (and other parameters such as default gateways). As a result, the 16,777,215 addresses in the 0.0.0.0/8 space have been left unused and reserved for future use since 1989.

The discussion of allowing these IP addresses and making them available started early this year at NetDevConf 2019, the technical conference on Linux networking, which took place in Prague, Czech Republic, from March 20th to 22nd, 2019. One of the sessions, "Potential IPv4 Unicast Expansions," conducted by Dave Taht along with John Gilmore and Paul Wouters, explains how IPv4's success story was in carrying unicast packets worldwide. The speakers say service sites still need IPv4 addresses for everything, since the majority of Internet client nodes don't yet have IPv6 addresses. IPv4 addresses now cost 15 to 20 dollars apiece (times the size of your network!) and the price is rising.

In their keynote, they described how the IPv4 address space includes hundreds of millions of addresses reserved for obscure (the ranges 0/8 and 127/16) or obsolete (225/8-231/8) reasons, or for "future use" (240/4, otherwise known as class E). They highlighted the fact: "instead of leaving these IP addresses unused, we have started an effort to make them usable, generally. This work stalled out 10 years ago, because IPv6 was going to be universally deployed by now, and reliance on IPv4 was expected to be much lower than it in fact still is."

"We have been reporting bugs and sending patches to various vendors. For Linux, we have patches accepted in the kernel and patches pending for the distributions, routing daemons, and userland tools. Slowly but surely, we are decontaminating these IP addresses so they can be used in the near future. Many routers already handle many of these addresses, or can easily be configured to do so, and so we are working to expand unicast treatment of these addresses in routers and other OSes," they further mentioned.

They said they wanted to carry out an "authorized experiment to route some of these addresses globally, monitor their reachability from different parts of the Internet, and talk to ISPs who are not yet treating them as unicast to update their networks." The patch code for 0.0.0.0/8 is available in the Linux kernel commit.

Users have had mixed reactions to this announcement, with some having assumed that these addresses would stay unassigned forever. A few are of the opinion that for most businesses, IPv6 is an unnecessary headache.

A user explained the difference between the address ranges in a reply to a post by Jeremy Stretch, a network engineer: "0.0.0.0/8 - Addresses in this block refer to source hosts on 'this' network. Address 0.0.0.0/32 may be used as a source address for this host on this network; other addresses within 0.0.0.0/8 may be used to refer to specified hosts on this network [RFC1700, page 4]."

A user on Reddit writes that this announcement will probably get "the same reaction when 1.1.1.1 and 1.0.0.1 became available, and AT&T blocked it 'by accident' or most equipment vendors or major ISP will use 0.0.0.0/8 as a loopback interface or test interface because they never thought it would be assigned to anyone."

Another user, Elegant treader, writes, "I could actually see us successfully inventing, and implementing, a multiverse concept for ipv4 to make these 32 bit addresses last another 40 years, as opposed to throwing these non-upgradable, hardcoded v4 devices out."

Another writes that if they would have "taken IPv4 and added more bits - we might all be using IPv6 now." The user further mentions, "Instead they used the opportunity to cram every feature but the kitchen sink in there, so none of the hardware vendors were interested in implementing it and the backbones were slow to adopt it. So we got mass adoption of NAT instead of mass adoption of IPv6."

A user explains, "A single /8 isn't going to meaningfully impact the exhaustion issues IPv4 faces. I believe it was APNIC a couple of years ago who said they were already facing allocation requests equivalent to an /8 a month." "It's part of the reason hand-wringing over some of the 'wasteful' /8s that were handed out to organizations in the early days is largely pointless. Even if you could get those orgs to consolidate and give back large useable ranges in those blocks, there's simply not enough there to meaningfully change the long term mismatch between demand and supply," the user further adds.

To know more about these developments in detail, watch Dave Taht's keynote video on YouTube: https://www.youtube.com/watch?v=92aNK3ftz6M&feature=youtu.be

An attack on SKS Keyserver Network, a write-only program, poisons two high-profile OpenPGP certificates
Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!
Amazon adds UDP load balancing support for Network Load Balancer
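For the curious, the effect of the patch described above can be exercised from userspace. The sketch below is illustrative only: the interface name is a placeholder, a patched kernel is assumed, and on an unpatched kernel the address assignment will simply be rejected.

    # On a patched kernel, an address from the formerly forbidden 0/8 range
    # can be assigned and used like any other unicast IPv4 address
    $ sudo ip addr add 0.1.2.3/8 dev eth0
    $ ip addr show dev eth0

    # Traffic to and from 0.x.y.z should then route normally
    $ ping -c 1 0.1.2.3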

Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Packt Editorial Staff
04 Jul 2019
6 min read
What does the phrase "a manager" really mean, anyway? It means different things to different people, and is often applied loosely to positions that more closely match an analyst-level profile! This term, although common, is worth defining properly, especially in the context of software development.

This article is an excerpt from the book The Successful Software Manager, written by Herman Fung, an internationally experienced IT manager. The book is a comprehensive and practical guide to managing software developers and software customers, and it explores the process of deciding what software needs to be built, not how to build it. In this article, we'll look into aspects you must be aware of before making the move to become a manager in the software industry.

A simple distinction I once used to illustrate the difference between an analyst and a manager is that while an analyst identifies, collects, and analyzes information, a manager uses this analysis to make decisions, or, more accurately, is responsible and accountable for the decisions they make. The structure of software companies is now enormously diverse and varies a lot from one to another, which has an obvious impact on how the manager's role and responsibilities are defined; these will be unique to each company. Even within the same company, they are subject to change from time to time, as the company itself changes. Broadly speaking, a manager within software development can be classified into three categories, as we will now discuss.

Team Leader/Manager

This role is often a lead developer who also doubles up as the team spokesperson and single point of contact. They'll typically be the most senior and knowledgeable member of a small group of developers who work on the same project, product, and technology. There is often a direct link between each developer in the team and their code, which means the team manager has a direct responsibility to ensure the product as a whole works. Usually, the team manager is also asked to fulfill people management duties, such as performance reviews and appraisals, and day-to-day HR responsibilities.

Development/Delivery Manager

This person could be either a techie or a non-techie. They will have a good understanding of the requirements, design, code, and end product. They will manage running workshops and huddles to facilitate better overall team working and delivery. This role may include setting up visual aids, such as team/project charts or boards. In a matrix management model, where developers and other experts are temporarily asked to work in project teams, the development manager will not be responsible for HR and people management duties.

Project Manager

This person is most probably a non-techie, but there are exceptions, and this could be a distinct advantage on certain projects. Most importantly, a project manager will be process-focused and output-driven and will focus on distributing tasks to individuals. They are not expected to jump in to solve technical problems, but they are responsible for ensuring that the proper resources are available while managing expectations. Specifically, they take part in managing the project budget, timeline, and risks. They should also be aware of the political landscape and management agenda within the organization to be able to navigate through them. The project manager ensures the project follows the required methodology or process framework mandated by the Project Management Office (PMO). They will not have people-management responsibilities for project team members.

Agile practitioner

As with all roles in today's world of tech, these categories will vary and overlap. They can even be held by the same person, which is becoming an increasingly common trait. They are also constantly evolving, which exemplifies the need to learn and grow continually, regardless of your role or position. If you are a true Agile practitioner, you may take issue with these generalized categories (Team Leader, Development Manager, and Project Manager), and you'd be right to do so! These categories are most applicable to an organization that practises the traditional Waterfall model. Without diving into the everlasting Waterfall vs Agile debate, let's just say that these are categories that transcend any methodology. Even if they're not referred to by these names, they are the roles that need to be performed, to varying degrees, at various times. For completeness, it is worth noting one role specific to Agile: the scrum master.

Scrum master

A scrum master is a role often compared, rightly or wrongly, with that of the project manager. The key difference is that their focus is on facilitation and coaching, instead of organizing and control. This difference is as much a mindset as it is a strict practice, and these are often referred to as attributes of Servant Leadership. I believe a good scrum master will show traits of a good project manager at various times, and vice versa. This is especially true in ensuring that there is clear communication at all times and that the team stays focused on delivering together.

Yet, as we look back at all these roles, it's worth remembering that with the advent of new disciplines such as big data, blockchain, artificial intelligence, and machine learning, there are new categories and opportunities to move from a developer role into a management position, for example, as an algorithm manager or data manager. Transitioning, growing, progressing, or simply changing from a developer to a manager is a wonderfully rewarding journey that is unique to everyone. After clarifying what being a "modern manager" really means, and the broad categories applicable in software development (Team / Development / Project / Agile), the overarching and often key consideration for developers is whether the move means they will be managing people and writing less code.

In this article, we looked into the different leadership roles that are available to developers for their career progression plan. Develop crucial skills to enhance your performance and advance your career with The Successful Software Manager, written by Herman Fung.

"Developers don't belong on a pedestal, they're doing a job like everyone else" – April Wensel on toxic tech culture and Compassionate Coding [Interview]
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
'I code in my dreams too', say developers in Jetbrains State of Developer Ecosystem 2019 Survey

Salesforce is buying Tableau in a $15.7 billion all-stock deal

Richard Gall
10 Jun 2019
4 min read
Salesforce, one of the world's leading CRM platforms, is buying data visualization software Tableau in an all-stock deal worth $15.7 billion. The news comes just days after it emerged that Google is buying Looker, one of Tableau's competitors in the data visualization market. Taken together, the stories highlight the importance of analytics to some of the planet's biggest companies. They suggest that despite years of the big data revolution, it's only now that market-leading platforms are starting to realise that their customers want the level of capabilities offered by the best in the data visualization space.

Salesforce shareholders will use their stock to purchase Tableau. As the press release published on the Salesforce site explains, "each share of Tableau Class A and Class B common stock will be exchanged for 1.103 shares of Salesforce common stock, representing an enterprise value of $15.7 billion (net of cash), based on the trailing 3-day volume weighted average price of Salesforce's shares as of June 7, 2019." The acquisition is expected to be completed by the end of October 2019.

https://twitter.com/tableau/status/1138040596604575750

Why is Salesforce buying Tableau?

The deal is an incredible result for Tableau shareholders. At the end of last week, its market cap was $10.7 billion. This has led to some scepticism about just how good a deal this is for Salesforce. One commenter on Hacker News said, "this seems really high for a company without earnings and a weird growth curve. Their ticker is cool and maybe sales force [sic] wants to be DATA on nasdaq. Otherwise, it will be hard to justify this high markup for a tool company." With Salesforce shares dropping 4.5% as markets opened this week, it seems investors are inclined to agree: Salesforce is certainly paying a premium for Tableau.

However, whatever the long-term impact of the acquisition, the price paid underlines the fact that Salesforce views Tableau as exceptionally important to its long-term strategy. It opens up an opportunity for Salesforce to reposition and redefine itself as much more than just a CRM platform. It means it can start to compete with the likes of Microsoft, which has a full suite of professional and business intelligence tools. Moreover, it also provides the platform with another way of potentially onboarding customers: given that Tableau is well known as a powerful yet accessible data visualization tool, it creates an avenue through which new users can find their way to the Salesforce product.

Marc Benioff, Chair and co-CEO of Salesforce, said, "we are bringing together the world's #1 CRM with the #1 analytics platform. Tableau helps people see and understand data, and Salesforce helps people engage and understand customers. It's truly the best of both worlds for our customers--bringing together two critical platforms that every customer needs to understand their world."

Tableau has been a target for Salesforce for some time. Leaked documents from 2016 revealed that the data visualization company was one of 14 companies that Salesforce had an interest in (another was LinkedIn, which would eventually be purchased by Microsoft).

Read next: Alteryx vs. Tableau: Choosing the right data analytics tool for your business

What's in it for Tableau (aside from the money...)?

For Tableau, there are many benefits of being purchased by Salesforce besides the money. Primarily, this is about expanding the platform's reach: Salesforce users are people who are interested in data, with a huge range of use cases. By joining up with Salesforce, Tableau will become their go-to data visualization tool.

"As our two companies began joint discussions," Tableau CEO Adam Selipsky said, "the possibilities of what we might do together became more and more intriguing. They have leading capabilities across many CRM areas including sales, marketing, service, application integration, AI for analytics and more. They have a vast number of field personnel selling to and servicing customers. They have incredible reach into the fabric of so many customers, all of whom need rich analytics capabilities and visual interfaces... On behalf of our customers, we began to dream about what we might accomplish if we could combine our ability to help people see and understand data with their ability to help people engage and understand customers."

What will happen to Tableau?

Tableau won't be going anywhere. It will continue to exist under its own brand, with the current leadership all remaining, including Selipsky.

What does this all mean for the technology market?

At the moment, it's too early to say, but the last year or so has seen some major high-profile acquisitions by tech companies. Perhaps we're seeing the emergence of a tooling arms race as the biggest organizations attempt to arm themselves with ecosystems of established market-leading tools. Whether this is good or bad for users remains to be seen.
Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service

Richard Gall
10 Jun 2019
7 min read
This year's Skill Up survey threw a spotlight on the challenges developers and engineering teams face when it comes to cloud. Indeed, it even highlighted the extent to which cloud is still a nascent trend for many developers, even though it feels so mainstream within the industry: almost half of respondents aren't using cloud at all. But for those that do use cloud, the survey results also illustrated some of the specific ways that people are using or plan to use cloud platforms, as well as highlighting the biggest challenges and mistakes organisations are making when it comes to cloud.

What came out as particularly important is that the limitations and the opportunities of cloud must be thought of together. With our research finding that cost only becomes important once a cloud platform is being used, it's clear that if we're to successfully, and cost-effectively, use the cloud platforms we do, understanding the relationship between cost and opportunity over a sustained period of time (rather than, say, a month) is absolutely essential. As one of our respondents told us, "businesses are still figuring out how to leverage cloud computing for their business needs and haven't quite got the cost model figured out."

Why does cost pose such a problem when it comes to cloud computing?

In this year's survey, we asked people what their primary motivations for using cloud are. The key motivators were use case and employment (i.e. the decision was out of the respondent's hands), but it was striking to see cost as only a minor consideration. Placed in the broader context of discussions around efficiency and a tightening global market, this seemed remarkable. It appears that people aren't entering the cloud marketplace with cost as a top consideration.

In contrast, however, this picture changes when we asked respondents about the biggest limiting factors of their chosen cloud platforms. At this point, cost becomes a much more important factor. This highlights that the reality of cloud costs only becomes apparent, or rather, becomes more apparent, once a cloud platform is implemented and being used. From this we can infer that there is a lack of strategic planning in cloud purchasing. It's almost as if technology leaders are falling into certain cloud platforms based on commonplace assumptions about what's right. This then has consequences further down the line.

We need to think about cloud cost and functionality together

The fact that functionality is also a key limitation is important to note here; in fact, it is closely tied up with cost, insofar as the functionality of each respective cloud platform is very neatly defined by its pricing structure. Take serverless, for example: although it's typically regarded as something that can be cost-effective for organizations, it can prove costly when you start to scale workloads. You might save more money simply by optimizing your infrastructure. What this means in practice is that the features you want to exploit within your cloud platform should be approached with a clear sense of how they're going to be used and how they're going to fit into the evolution of your business and technology in the medium- and long-term future.

Getting the most from leading cloud trends

There were two distinct trends that developers identified as the most exciting: machine learning and serverless. Although the two are very different, they both hold a promise of efficiency. Whether that's the efficiency of moving away from traditional means of hosting to cloud-based functions, or powerful data processing and machine-led decision making at scale, the fundamentals of both trends are about managing economies of scale in ways that would have been impossible half a decade ago. This plays into some of the issues around cost. If serverless and machine learning both appear to offer ways of saving on spending or radically driving growth, then when that doesn't quite turn out the way technology purchasers expected, the relationship between cost and features can become a little strained.

Serverless

The idea that serverless will save you money is popular. And in general, it is inexpensive. The pricing structures of both AWS and Azure make Functions as a Service (FaaS) particularly attractive. It means you'll no longer be spending money on provisioning compute resources you don't actually need, with your provider managing the necessary elasticity.

Read next: The Future of Cloud lies in revisiting the designs and limitations of today's notion of 'serverless computing', say UC Berkeley researchers

However, as we've already seen, serverless doesn't guarantee cost efficiency. You need to properly understand how you're going to use serverless to ensure that it's not costing you big money without you realising it. One way of using it might be to employ it for very specific workloads, allowing you to experiment in a relatively risk-free manner before employing it elsewhere. Whatever you decide, you must ensure that the scope and purpose of the project is clear. (A back-of-the-envelope cost sketch appears at the end of this article.)

Machine learning as a Service

Machine learning, and deep learning in particular, is very expensive to do. This is one of the reasons that machine learning on cloud, i.e. machine learning as a service, is one of the most attractive features of many cloud platforms. But it's not just about cost. Using cloud-based machine learning tools also removes some of the barriers to entry, making it easier for engineers who don't necessarily have extensive training in the field to actually start using machine learning models in various ways.

However, this does come with some limitations, and just as with serverless, you really do need to understand and even visualize how you're going to use machine learning to ensure that you're not just wasting time and energy with machine learning cloud features. You need to be clear about exactly how you're going to use machine learning, what data you're going to use, where it's going to be stored, and what the end result should look like. Perhaps you want to embed machine learning capabilities inside an app? Or perhaps you want to run algorithms on existing data to inform internal decisions? Whatever it is, all these questions are important.

These types of questions will also impact the type of platform you select. Google Cloud Platform is far and away the go-to platform for machine learning (this is one of the reasons why so many respondents said their motivation for using it was use case), but bear in mind that this could lead to some issues if the bulk of your data is typically stored on, say, AWS: you'll need to build some kind of integration, or move your data to GCP (which is always going to be a headache).

The hidden costs of innovation

These types of extras are really important to consider when it comes to leveraging exciting cloud features. Yes, you need to use a pricing calculator and spend time comparing platforms, but factoring in additional development time to build integrations or move things is something that a calculator clearly can't account for. Indeed, this is true in the context of both machine learning and serverless. The organizational implications of your purchases are perhaps the most important consideration, and one that's often the easiest to miss.

Control the scope and empower your team

However, the organizational implications aren't necessarily problems to be resolved; they could well be opportunities that you need to embrace. You need to prepare and be ready for those changes. Ultimately, preparation is key when it comes to leveraging the benefits of cloud. Defining the scope is critical, and to do that you need to understand what your needs are and where you want to get to. That sounds obvious, but it's all too easy to fall into the trap of focusing on the possibilities and opportunities of cloud without paying careful consideration to how to ensure it works for you.

Read the results of Skill Up 2019. Download the report here.
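As promised above, here is a minimal back-of-the-envelope sketch of the FaaS cost dynamic. The per-GB-second and per-request rates below are illustrative assumptions for the calculation, not quoted prices from any provider.

    # Rough FaaS cost model: compute cost scales with traffic, duration, and memory
    # (all rates here are assumed for illustration, not vendor pricing)
    $ awk 'BEGIN {
        invocations = 100000000      # 100M requests per month
        duration_s  = 0.5            # average execution time in seconds
        memory_gb   = 0.5            # memory allocated per invocation
        gbs_rate    = 0.0000166667   # assumed $ per GB-second
        req_rate    = 0.20 / 1000000 # assumed $ per request
        compute  = invocations * duration_s * memory_gb * gbs_rate
        requests = invocations * req_rate
        printf "compute $%.2f + requests $%.2f = $%.2f/month\n",
               compute, requests, compute + requests
    }'
    compute $416.67 + requests $20.00 = $436.67/month

Halve the duration or the memory allocation and the compute term halves with it, which is why profiling a function before scaling it out matters at least as much as the headline price per request.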

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]

Richard Gall
24 May 2019
6 min read
If you're a Node.js developer, it might seem odd to be thinking about Azure. However, as the software landscape becomes increasingly cloud native, it's well worth thinking about the cloud solution you and your organization use. It should, after all, make life easier for you as much as it should help your company scale and provide better services and user experiences for customers. We don't often talk about it, but cloud isn't one thing: it's a set of tools that provides developers with new ways of building and managing apps. It helps you experiment and learn.

In the development of Azure, developer experience is at the top of the agenda. In many ways the platform represents Microsoft's transformation as an organization, from one that seemed to distrust open source developers to one that is hell-bent on making them happier and more productive. So, if you're a Node.js developer, or any kind of JavaScript developer for that matter, reading this with healthy scepticism (and why wouldn't you), let's look at some of the reasons and ways Azure can support you in your work...

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free courtesy of Microsoft here.

Deploy apps quickly with Azure App Service

As a developer, deploying applications quickly is one of your top priorities. Azure enables that with Azure App Service. Essentially, Azure App Service is a PaaS that brings together a variety of other Azure services and resources, helping you to develop and host applications without worrying about your infrastructure. There are lots of reasons to love Azure App Service, not least the speed with which it allows you to get up and running, but most importantly it gives application developers access to a range of Azure features, such as load balancing and security, as well as the platform's integrations with tools for DevOps processes. Azure App Service works for developers working with a range of platforms, from Python to PHP; Node.js developers who want to give it a try should start here.

Manage application and infrastructure resources with the Azure CLI

The Azure CLI is a useful tool for managing cloud resources. It can also be used to deploy an application quickly. If you're a developer who likes working with the CLI, this feature really does offer a nice way of working, allowing you to easily move between each step in the development and deployment process. If you want to try deploying a Node.js application using the Azure CLI, check out this tutorial, or learn more about the Azure CLI here. (A sketch of this workflow follows below.)

Go serverless with Azure Functions

Serverless has been getting serious attention over the last 18 months. While it's true that serverless is a hyped field, and that in reality there are serious considerations to be made about how and where you choose to run your software, it's relatively easy to try it out for yourself using Azure. In fact, the name itself is useful in demystifying serverless: the word 'functions' is a much more accurate description of what you're doing as a developer. A function is essentially a small piece of code that runs in the cloud and executes certain actions or tasks in specific situations. There are many reasons to go serverless, from a pay-per-use pricing model to support for your preferred dependencies. And while there are plenty of options in terms of cloud providers, Azure is worth exploring because it makes it so easy for developers to leverage. Learn more about Azure Functions here.
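Here is a minimal sketch of what the App Service workflow from the CLI section above can look like for a Node.js app. The resource names, location, and runtime string are placeholders, and flags may vary between CLI versions.

    # Create a resource group, a Linux App Service plan, and a Node.js web app
    $ az group create --name myResourceGroup --location westeurope
    $ az appservice plan create --name myPlan --resource-group myResourceGroup \
        --sku B1 --is-linux
    $ az webapp create --name my-node-app --resource-group myResourceGroup \
        --plan myPlan --runtime "NODE|10.14"

    # Enable local git deployment, then push the app to the returned remote URL
    $ az webapp deployment source config-local-git \
        --name my-node-app --resource-group myResourceGroup
    $ git remote add azure <deployment-url-from-previous-command>
    $ git push azure master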
Simple, accessible dashboards for logging and monitoring

In 2019, building more reliable and observable systems will expand from being the preserve of SREs to become something developers are accountable for too. This is the next step in the evolution of software engineering, as new silos are broken down. It's for this reason that the monitoring tools offered by Azure could prove to be so valuable for developers. With Application Insights and Azure Monitor, you can gain the level of transparency you need to properly manage your application. Learn how to successfully monitor a Node.js app here.

Build and deploy applications with Azure DevOps

DevOps shouldn't really require additional effort and thinking, but more often than not it does. Azure is a platform that appears to understand this implicitly, and the team behind it has done a lot to make it easier to cultivate a DevOps culture with several useful tools and integrations. Azure Test Plans is a toolkit for testing applications, which can seriously help you improve the way you test in your development processes, while Azure Boards can support project management from inside the Azure ecosystem, useful if you're looking for a new way to manage agile workflows. But perhaps the most important feature within Azure DevOps, for developers at least, is Azure Pipelines. Azure Pipelines is particularly useful for JavaScript developers, as it gives you the option to run a build pipeline on a Microsoft-hosted agent that has a wide range of common JavaScript tools (like Yarn and Gulp) pre-installed. Microsoft claims this is the "simplest way to build and deploy" because "maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use." Find out how to build, test, and deploy Node.js apps using Azure Pipelines with this tutorial.

Read next: 5 developers explain why they use Visual Studio Code

Conclusion: Azure is a place to experiment and learn

We often talk about cloud as a solution or service. And although it can provide solutions to many urgent problems, it's worth remembering that cloud is really a set of many different tools; it isn't one thing. Because of this, cloud platforms like Azure are as much places to experiment and try out new ways of working as they are simply someone else's server space. With that in mind, it could be worth experimenting with Azure to try out new ideas; after all, what's the worst that can happen? More than anything, cloud native should make development fun. Find out how to get started with Node.js on Azure. Download Learning Node.js with Azure for free from Microsoft.

KubeCon + CloudNativeCon EU 2019 highlights: Microsoft’s Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!

Savia Lobo
22 May 2019
7 min read
KubeCon + CloudNativeCon 2019 is live (May 21 - May 23) at the Fira Gran Via exhibition center in Barcelona, Spain. The conference has brought a huge assembly of announcements on topics including Kubernetes, DevOps, and cloud-native applications. There were many exciting announcements from Microsoft, Google, the Cloud Native Computing Foundation, and more! Let's have a brief overview of each of these announcements.

Microsoft Kubernetes announcements: Service Mesh Interface (SMI), Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and Helm 3 alpha

Service Mesh Interface (SMI)

Microsoft launched the Service Mesh Interface (SMI) specification, the company's new community project for collaboration around service mesh infrastructure. SMI defines a set of common, portable APIs that provide developers with interoperability across different service mesh technologies, including Istio, Linkerd, and Consul Connect.

The Service Mesh Interface provides:

A standard interface for meshes on Kubernetes
A basic feature set for the most common mesh use cases
Flexibility to support new mesh capabilities over time
Space for the ecosystem to innovate with mesh technology

To know more about the Service Mesh Interface, head over to Microsoft's official blog.

Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and the first alpha of Helm 3

Microsoft released version 1.0 of its open source Kubernetes extension for Visual Studio Code. The extension brings native Kubernetes integration to Visual Studio Code and is fully supported for production management of Kubernetes clusters. Microsoft has also added an extensibility API that makes it possible for anyone to build their own integration experiences on top of Microsoft's baseline Kubernetes integration.

Microsoft also announced Virtual Kubelet 1.0. Brendan Burns, Kubernetes co-founder and Microsoft distinguished engineer, said, “The Virtual Kubelet represents a unique integration of Kubernetes and serverless container technologies, like Azure Container Instances. We developed it in the context of the Cloud Native Computing Foundation, where it's a sandbox project.” He further added, “With 1.0, we're saying ‘It's ready.' We think we've done all the work that we need in order for people to take production-level dependencies on this project.”

Microsoft also released the first alpha of Helm 3. Helm is the de facto standard for packaging and deploying Kubernetes applications. Helm 3 is simpler and supports all the modern security, identity, and authorization features of today's Kubernetes. It also revisits and simplifies Helm's architecture, something made possible by the growing maturity of Kubernetes identity and security features, like role-based access control (RBAC), and advanced features such as custom resource definitions (CRDs). Know more about Helm 3 in detail on Microsoft's official blog post.

Google announces enhancements to Google Kubernetes Engine; Stackdriver Kubernetes Engine Monitoring generally available

On the first day of KubeCon + CloudNativeCon 2019, yesterday, Google announced three release channels for its Google Kubernetes Engine (GKE): Rapid, Regular, and Stable.
Google, in its official blog post, states, “Each channel offers different version maturity and freshness, allowing developers to subscribe their cluster to a stream of updates that match risk tolerance and business requirements.” This new feature will launch into alpha with the first release in the Rapid channel, which will give developers early access to the latest versions of Kubernetes.

Google also announced the general availability of Stackdriver Kubernetes Engine Monitoring, a tool that gives users GKE observability (metrics, logs, events, and metadata) all in one place, to help provide faster time-to-resolution for issues, no matter the scale. To know more about the three release channels and Stackdriver Kubernetes Engine Monitoring in detail, head over to Google's official blog post.

Cloud Native Computing Foundation announcements: Harbor 1.8, a new online course ‘Cloud Native Logging with Fluentd', Intuit Inc. wins the CNCF End User Award, and Kong Inc. becomes a Gold Member

Harbor 1.8

The VMware team released Harbor 1.8 yesterday, with new features and improvements including enhanced automation integration, security, monitoring, and cross-registry replication support. Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor 1.8 also brings various other capabilities for both administrators and end users:

A health check API, which shows the detailed status and health of all Harbor components. Harbor extends and builds on top of the open source Docker Registry to facilitate registry operations like the pushing and pulling of images; this release upgrades the bundled Docker Registry to version 2.7.1.
Support for defining cron-based scheduled tasks in the Harbor UI. Administrators can now use cron strings to define the schedule of a job; scan, garbage collection, and replication jobs are all supported.
API explorer integration. End users can now explore and trigger Harbor's APIs via the Swagger UI nested inside Harbor's UI.
Enhancement of the Job Service engine to include internal webhook events, additional APIs for job management, and numerous bug fixes to improve the stability of the service.

To know more about this release, read Harbor's official 1.8 blog post.

A new online course on ‘Cloud Native Logging with Fluentd'

The Cloud Native Computing Foundation and The Linux Foundation have together designed a new, self-paced, hands-on course, Cloud Native Logging with Fluentd. This course will provide users with the necessary skills to deploy Fluentd in a wide range of production settings.

Eduardo Silva, Principal Engineer at Arm Treasure Data, said, “This course will explore the full range of Fluentd features, from installing Fluentd and running it in a container, to using it as a simple log forwarder or a sophisticated log aggregator and processor.” He further added, “As we see the Fluentd project growing into a full ecosystem of third party integrations and components, we are thrilled that this course will be offered so more people can realize the benefits it provides.” To know more about this course and its benefits in detail, visit the official blog post.

Intuit Inc. wins the CNCF End User Award

At the conference yesterday, the CNCF announced that Intuit Inc. has won the CNCF End User Award in recognition of its contributions to the cloud native ecosystem. Intuit is an active user, contributor, and developer of open source technologies.
As part of its journey to the public cloud, Intuit has advanced the way it leverages cloud native technologies in production, including CNCF projects like Kubernetes and OPA. To know more about this achievement in detail, read the official blog post.

Kong Inc. is now a Gold Member of the CNCF

The CNCF announced that Kong Inc., which provides open source API and service lifecycle management tools, has upgraded its membership to Gold. The company backs the Kong project, a cloud native, fast, scalable, and distributed microservice abstraction layer. Kong is focused on building a service control platform that acts as the nervous system for an organization's modern software architectures by intelligently brokering information across all services.

Dan Kohn, Executive Director of the Cloud Native Computing Foundation, said, “With their focus on open source and cloud native, Kong is a strong member of the open source community, and their membership provides resources for activities like bug bounties and security audits that help our community continue to thrive.” Head over to CNCF's official announcement post for more.

More announcements can be expected from this conference; to stay updated, visit the KubeCon + CloudNativeCon 2019 official website.

F8 Developer Conference Highlights: Redesigned FB5 app, Messenger update, new Oculus Quest and Rift S, Instagram shops, and more
RSA Conference 2019 Highlights: Top 5 cybersecurity products announced
NSA releases Ghidra, a free software reverse engineering (SRE) framework, at the RSA security conference
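As an aside for readers wondering what working with Kong, mentioned above, actually looks like: its Admin API (which listens on port 8001 by default) lets you register services and routes with plain HTTP calls. The Node.js sketch below assumes a local Kong instance; the service name, upstream URL, and route path are illustrative placeholders.

```javascript
// register-service.js - a minimal sketch against Kong's Admin API
// (port 8001 by default). Requires Node 18+ for built-in fetch.
// Service name, upstream URL, and route path are placeholders.
const ADMIN = 'http://localhost:8001';

async function main() {
  // Register a service pointing at an upstream API.
  await fetch(`${ADMIN}/services`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'orders', url: 'http://orders.internal:3000' })
  });

  // Expose the service publicly through a route.
  await fetch(`${ADMIN}/services/orders/routes`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ paths: ['/orders'] })
  });

  console.log('Registered; traffic is proxied on port 8000 by default.');
}

main().catch(console.error);
```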
Does it make sense to talk about DevOps engineers or DevOps tools?

Richard Gall
10 May 2019
6 min read
DevOps engineers are in high demand - the job represents an engineering unicorn, someone who understands both development and operations and can help to foster a culture where the relationship between the two is almost frictionless. But there's some debate as to whether it makes sense to talk about a DevOps engineer at all. If DevOps is a culture or a set of practices that improves agility and empowers engineers to take more ownership over their work, should we really be thinking about DevOps as a single job that someone can simply train for?

The quotes in this piece are taken from DevOps Paradox by Viktor Farcic, which will be published in June 2019. The book features interviews with a diverse range of figures drawn from across the DevOps world.

Is DevOps engineer a 'real' job or just recruitment spin?

Nirmal Mehta (@normalfaults), Technology Consultant at Booz Allen Hamilton, says, "There's no such thing as a DevOps engineer. There shouldn't even be a DevOps team, because to me DevOps is more of a cultural and philosophical methodology, a process, and a way of thinking about things and communicating within an IT organization..."

Mehta is cynical about organizations that put out job descriptions asking for DevOps engineers. It is, he argues, a way of cutting costs - a way of simply doing more with less. "A DevOps engineer is just a job posting that signals an organization wants to hire one less person to do twice as much work rather than hire both a developer and an operator."

This view is echoed by other figures associated with the DevOps world. Mike Kail (@mdkail), CTO at Everest, says, "I certainly don't view DevOps as a tool or a job title. In my view, at the core, it's a cultural approach to leveraging automation and orchestration to streamline both code development, infrastructure, application deployments and subsequently, the managing of those resources."

Similarly, Damien Duportal (@DamienDuportal), Træfik's Developer Advocate, says, "there is no such thing as a DevOps engineer or even a DevOps team. The main purpose of DevOps is to focus on value, finding the optimal for the organization, and the value it will bring." For both Duportal and Kail, then, DevOps is primarily a cultural thing, something that needs to be embedded in the practices of an organization.

Is it useful to talk about a DevOps team?

There are big question marks over the concept of a DevOps engineer. But what about a specific team? It's all well and good talking about organizational philosophy, but how do you actually effect change in a practical manner? Julian Simpson (@builddoctor), Neo4j's Global IT Manager, is sceptical about the concept of a DevOps team: “Can we have something called a DevOps team? I don't believe so. You might spin up a team to solve a DevOps problem, but then I wouldn't even say we specifically have a DevOps problem. I'd say you just have a problem."

DevOps consultant Chris Riley (@HoardingInfo) has a similar take, saying: “DevOps Engineer as a title makes sense to me, but I don't think you necessarily have DevOps departments, nor do you seek that out. Instead, I think DevOps is a principle that you spread throughout your entire development organization. Rather, you look to reform your organization in a way that supports those initiatives versus just saying that we need to build this DevOps unit, and there we go, we're done, we're DevOps. Because by doing that you really have to empower that unit, and most organizations aren't willing to do that."
However, Red Hat Solutions Architect Wian Vos (@wianvos) has a different take. For Vos, the idea of a DevOps team is actually crucial if you are to cultivate a DevOps mindset inside your organization: "Imagine... you and I were going to start a company. We're going to need a DevOps team because we have a burning desire to put out this awesome application. The questions we have when we're putting together a DevOps team are both ‘Who are we hiring?' and ‘What are we hiring for?' Are we going to hire DevOps engineers? No. In that team, we want the best application developers, the best tester, and maybe we want a great infrastructure guy and a frontend/backend developer. I want people with specific roles who fit together as a team to be that DevOps team."

For Vos, it's not so much about finding and hiring DevOps engineers - people with a specific set of skills and experience - but rather about building a team that's constructed in such a way that it can put DevOps principles into practice.

Is there such a thing as a DevOps tool?

One of the interesting things about DevOps is that the debate seems to lead you into a bit of a bind. It's almost as if the more concrete we try to make it - turning it into a job, or a team - the less useful it becomes. This is particularly true when we consider tooling. Surely thinking about DevOps technologically, rather than speculatively, makes it more real?

In general, there appears to be a consensus against the idea of DevOps tools. On this point, Julian Simpson said, "my original thinking about the movement from 2009 onwards, when the name was coined, was that it would be about collaboration and perhaps the tools would sort of come out of that collaboration."

James Turnbull (@kartar), CEO of Rethink Robotics, is critical of the notion of DevOps tools. He says, "I don't think there are such things as DevOps tools. I believe there are tools that make the process of being a cross-functional team better... Any tool that facilitates building that cross-functionality is probably a DevOps tool to the point where the term is likely meaningless."

When it comes to DevOps, everyone's still learning

With even industry figures disagreeing on what terms mean, or which ones are relevant, it's pretty clear that DevOps will remain a field that's contested and debated. But perhaps this is important - if we expect it simply to be a solution to the engineering challenges we face, it has already failed as a concept. However, if we understand it as a framework or mindset for solving problems, then that is when it acquires greater potency.

Viktor Farcic is a Developer Advocate at CloudBees, a member of the Google Developer Experts and Docker Captains groups, and a published author. His big passions are DevOps, microservices, continuous integration, delivery and deployment (CI/CD), and test-driven development (TDD).
Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge.

In his opening keynote, Microsoft CEO Satya Nadella outlined the company's vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming. “As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in,” said Satya Nadella, CEO, Microsoft. “Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone.”

https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

Microsoft Graph is now powered with data connectivity, a service that combines analytics data from the Microsoft Graph with customers' business data. Microsoft Graph data connect provides Office 365 data and Microsoft Azure resources to users via a toolset; the migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search works as a unified search experience across all Microsoft apps - Office, Outlook, SharePoint, OneDrive, Bing, and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches. Other features included in Microsoft Search are:

Search box displacement
Zero query typing and a key-phrase suggestion feature
A query history feature, including personal search query history
Administrator access to the history of popular searches for their organizations, but not to search history for individual users
File/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on Microsoft Search here.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software development kit and the first experiences powered by the Fluid Framework later this year in Microsoft Word, Teams, and Outlook. Read more about the Fluid Framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features for Microsoft's flagship web browser, Microsoft Edge. New features include:

Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab, allowing businesses to run legacy Internet Explorer-based apps in a modern browser.
Privacy Tools: Additional privacy controls that allow customers to choose from three levels of privacy in Microsoft Edge - Unrestricted, Balanced, and Strict. These options limit the ability of third parties to track users across the web. “Unrestricted” allows all third-party trackers to work in the browser; “Balanced” prevents third-party trackers from sites the user has not visited before; and “Strict” blocks all third-party trackers.

Collections: Collections allows users to collect, organize, share, and export content more efficiently, with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium, which will make Edge easier for third parties to develop for. For more details, visit Microsoft's developer blog.

New toolkit enhancements in the Microsoft 365 platform

Windows Terminal

Windows Terminal is Microsoft's new application for Windows command-line users. Top features include:

A user interface with emoji-rich fonts and GPU-accelerated text rendering
Multiple tab support plus theming and customization features
A powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL), and all forms of command-line application

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the “React Native for Windows” implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices, and more. The project is being developed on GitHub and is available for developers to test, with more mature releases to follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at Build 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file-system performance increases (up to twice the speed for file-system-heavy operations, such as Node Package Manager installs). WSL 2 also supports running Linux Docker containers. The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple developer tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform, which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

One Base Class Library containing APIs for building any type of application
More choice on runtime experiences
Java interoperability on all platforms; Objective-C and Swift interoperability on multiple operating systems

.NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios. It will also offer one unified toolchain supported by new SDK project types, as well as a flexible deployment model (side-by-side and self-contained EXEs). Detailed information here.

ML.NET 1.0

ML.NET is Microsoft's open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible to .NET developers. Its new version, ML.NET 1.0, was released at Microsoft Build 2019 yesterday.
Some of the new features in this release are:

Automated Machine Learning Preview: Transforms input data and selects the best-performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports regression and classification ML tasks.
ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers that uses AutoML to build ML models. It also generates model training and model consumption code for the best-performing model.
ML.NET CLI Preview: The ML.NET CLI is a dotnet tool that generates ML.NET models using AutoML and ML.NET. It quickly iterates through a dataset for a specific ML task and produces the best model.

Visual Studio IntelliCode, Microsoft's tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft's AI-assisted coding tool, is now generally available. It is essentially an enhanced IntelliSense, Microsoft's extremely popular code-completion tool. IntelliCode is trained on the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio, and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings the Time Travel Debugging feature introduced with version 16.0 out of preview, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and mixed reality

Minecraft AR game for mobile devices

At the end of Microsoft's Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft.

https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits, and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution:

Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
Multiple dashboards and data visualization options for different types of users
Inbound and outbound data connectors, so that operators can integrate with other systems
The ability to add custom branding and operator resources to an IoT Central application with new white-labeling options

The new Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language for connecting IoT devices to the cloud seamlessly, without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play-enabled devices in Microsoft's Azure IoT Device Catalog.
The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

The Azure Maps Mobility Service is a new API that provides real-time public transit information, including nearby stops, routes, and trip intelligence. The API will also provide transit services to help with city planning, logistics, and transportation. The Azure Maps Mobility Service will be in public preview in June. Read more about the Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat have collaborated to create KEDA, an open-source project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment - in any public/private cloud or on-premises, such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers that respond to events happening in other services or components, allowing a container to consume events directly from the source instead of routing them through HTTP. KEDA also presents a new hosting option for Azure Functions, which can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK), released as an extension of Microsoft's Defending Democracy Program, that enables end-to-end verifiability and improved risk-limiting audit capabilities for voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its sixth year and will continue until May 8th. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!

Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces a collaboration with Microsoft's .NET at DockerCon 2019
How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]
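Circling back to the Microsoft Graph announcements at the top of this article: for a sense of what querying the Graph from Node.js looks like, here is a minimal sketch using the official @microsoft/microsoft-graph-client package. Token acquisition is deliberately stubbed out, since it depends on your app registration and auth library (MSAL, client credentials flow, and so on).

```javascript
// graph-example.js - a minimal sketch with the official
// '@microsoft/microsoft-graph-client' package. getAccessToken() is a
// placeholder for whatever auth flow your app registration uses.
const { Client } = require('@microsoft/microsoft-graph-client');

async function getAccessToken() {
  // Placeholder: acquire a token for https://graph.microsoft.com
  // via MSAL or another auth library.
  return process.env.GRAPH_TOKEN;
}

async function main() {
  const client = Client.init({
    authProvider: async (done) => done(null, await getAccessToken())
  });

  // Fetch the signed-in user's profile.
  const me = await client.api('/me').get();
  console.log(me.displayName);
}

main().catch(console.error);
```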
Announcing Docker Enterprise 3.0 Public Beta!

Savia Lobo
02 May 2019
3 min read
Update: On July 22, 2019, the Docker team announced that Docker Enterprise 3.0 will be generally available, adding that more than 2,000 people have tried the Docker Enterprise 3.0 public beta program.

On April 24, the team at Docker announced Docker Enterprise 3.0, an end-to-end container platform that enables developers to quickly build and share any type of application (from legacy to cloud-native) and securely run them anywhere, from hybrid cloud to the edge. It is now available in public beta.

Docker Enterprise 3.0 delivers new desktop capabilities, advanced development productivity tools, a simplified and secure Kubernetes stack, and a managed service option, making Docker Enterprise 3.0 a platform for digital transformation. Jay Lyman, Principal Analyst at 451 Research, said, “Docker's new Enterprise 3.0 promises to automate the 'development to production' experience with new tooling that aims to reduce the friction between dev and ops teams.”

What can you do with the new Docker Enterprise 3.0?

Integrated Docker Desktop Enterprise

Docker Desktop Enterprise provides a consistent development-to-production experience with a set of automation tools. This makes it possible to start at the developer desktop, deliver an integrated and secure image registry with access to the Hub ecosystem, and then deploy to an enterprise-ready, Kubernetes-conformant environment.

Docker Kubernetes Service (DKS) can simplify the scaling and deployment of applications

Compatible with Docker Compose, Kubernetes YAML, and Helm charts, DKS provides an automated and repeatable way to install, configure, manage, and scale Kubernetes-based applications across hybrid and multi-cloud environments. DKS includes enhanced security, access controls, and automated lifecycle management, bringing a new level of security to Kubernetes that integrates seamlessly with the Docker Enterprise platform. Customers will also have the option to use Docker Swarm Services (DSS) as part of the platform's orchestration services.

Docker Applications for high-velocity innovation

Docker Applications are based on the CNAB open standard. They remove the friction between dev and ops by enabling teams to collaborate on an application, defining a group of related containers that work together to form that application. They also eliminate configuration overhead by integrating and automating the creation of Docker Compose and Kubernetes YAML files, Helm charts, and so on. With Application Templates, Application Designer, and Version Packs, Docker Applications make flexible deployment across different environments possible, delivering on the “code once, deploy anywhere” promise.

With the announcement of Docker Enterprise 3.0, Docker also introduced Docker Enterprise-as-a-Service - a fully-managed service on-premises or in the cloud. To know more about this news in detail, head over to Docker's official announcement.

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Are Debian and Docker slowly losing popularity?
Creating a Continuous Integration commit pipeline using Docker [Tutorial]
How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]

Richard Gall
16 Apr 2019
8 min read
The popularity of Visual Studio Code is an indication of how the developer landscape is changing. It's also a testament to the way the development team behind it has been proactive in responding to these changes and using them to inform iterations of the product. But what are these changes, exactly?

Well, one way of understanding them is to focus on the growth of full-stack development as a job role. Thanks to cloud native architectures and increased levels of abstraction across the stack, developers are being asked to do more. They're not so much writing specialized code for the part of the stack that they 'own' in the way they might have been five years ago. Instead, a more holistic approach has become the norm.

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free from Microsoft here.

This holistic approach is one where developers assemble applications, with an even greater focus - and indeed deliberation - on matters of infrastructure and architecture. Shipping code is important, but there is more to development roles than simply producing the building blocks of software. A good full-stack developer is one who can shift between contexts. This could be context shifting in a literal sense, but it might also more broadly mean shifting between different types of problems. Crucial to this is an in-built DevOps mindset. Siloed compartmentalization is the enemy of the modern developer in just about every respect.

This is why Visual Studio Code is so valuable today. It tackles compartmentalization, helping developers move between different problems and contexts seamlessly. In turn, this bridges the gap between the role and responsibilities we typically associate with a full-stack engineer and those of a DevOps engineer.

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

How does Visual Studio Code support a DevOps mindset?

From writing and optimizing code to deploying to the cloud and managing those resources once they're there, Visual Studio Code allows developers to do multiple things easily. Let's take a look at some of the features of the editor (or IDE, depending on your perspective) to see what this means in practical terms.

VS Code helps developers assume responsibility for how their code performs

Code optimization becomes important when you really start to take the issue of accountability seriously. Yes, that could be frustrating (especially if you're a bad developer…), but it also makes writing code much more interesting. And Visual Studio Code has tools that help you do just that. Like a good and friendly DevOps engineer, the editor has a set of built-in features and extensions that can help you write cleaner and more performant code.

There are, of course, plenty of extensions to help improve code, but before we even get there, Visual Studio Code has out-of-the-box features that aid productivity. IntelliSense, Microsoft's useful autocomplete feature, for example, takes some of the legwork out of writing code. But it's once you get into the extensions that you can begin to see how much support Visual Studio Code can bring. IntelliSense has several useful extensions - one for npm and one for Node.js modules, for example - but there are plenty of other useful tools, from code snippets to Chrome Debugger and Prettier, an extension for the popular code formatter that has seen 5 million downloads in a matter of months. Individually, these tools might not seem like a big deal.
But it's the fact that you have access to these various features - if you want them, of course - that marks VS Code out as an editor that wants you to be productive, that wants you to focus on code health. That's not to say that Atom, Emacs, or any other editor doesn't do this - it's just that VS Code steps in and performs some of that work in lieu of your brain, which means you can focus your mind elsewhere.

VS Code has features that encourage really effective developer collaboration

DevOps is, fundamentally, a methodology about the way people build software together. Conversely, code editors often feel like a personal decision - something as unique to you as any other piece of technology that allows you to interface with information and with other people. And although VS Code can feel as personal as any code editor (check out the themes - Dracula is particularly lovely…), because it is more than a typical code editor, it offers plenty of ways to enhance project management and collaboration.

Perhaps the most notable example of how Visual Studio Code supports collaboration is the Live Share extension. This nifty extension allows you to work on code with a colleague on a different machine, as if you were collaborating on a Word document. As well as being pretty novel, the implications of this could be greater than we realise. By making it easier to open up the way we work at a very individual - almost atomized - level, the way we work together to build and deploy code immediately changes. Software engineering has always been a team sport, but coding itself was all too often a solitary activity. With Live Share you can quite deliberately begin to break down silos in a way you've never been able to do before.

Okay, but what does Live Share really have to do with DevOps? Well, if we can improve collaboration between people, we can immediately cut through any residual siloes. But more than that, it also opens up new work practices for the future. Pair programming, for example, might feel like a flash-in-the-pan programming practice, but its principles become the norm if they're better facilitated. And while an individual agile technique certainly does not make DevOps magically happen, making it normal can go a long way towards improving code quality and accountability.

Visual Studio Code brings the cloud to full-stack developers (and vice versa)

Collaboration, productivity - great. Code optimization - also great. But if you really want to think about how Visual Studio Code bridges the gap between full-stack development and DevOps, you have to look at the ways in which it allows developers to leverage Azure. An article published by TechCrunch in 2016 argued that cloud - managed services - had ‘killed DevOps'. That might have been a bit of an overstatement, but if you follow the logic, you can begin to see that the world is moving towards one where cloud actually supports and facilitates DevOps processes. It almost removes the need for a separate operations engineer to deploy your applications appropriately.

But even if the lack of friction here is appealing, you nevertheless have to be open to a new way of working. And that is never straightforward. That's where Visual Studio Code can help. With the Azure App Service or Azure Functions extensions, you can deploy and manage Azure resources directly from your editor. That's a productivity bonus, but it also immediately makes the concept of cloud or serverless more accessible, especially if you're new to it.
In VS Code you're literally working with the cloud in a familiar space. The conclusion of that TechCrunch article is instructive: “DevOps as a team may be gone sooner than later, but its practices will be carried on to whole development teams so that we can continue to build upon what it has brought us in the past years.” Arguably, Visual Studio Code is an editor that takes us to that point - one where DevOps practices are embedded in development teams.

Visual Studio Code: built for the needs of individual developers and the future of the industry

Choosing a text editor is a subjective thing. Part of it is about what you're trying to achieve and what projects you're working on, but often it's not even about that: it's simply about what feels right. This means there are always going to be some developers for whom Visual Studio Code doesn't work - and that's fine. But unlike many other text editors, Visual Studio Code is so adaptable and flexible to every developer's individual needs that it's actually relatively straightforward to find a way to make it work for you. In turn, this means its advantages in terms of cloud native development and improved collaboration aren't at the expense of that subjective feeling of ‘yes... this works for me'. And if you're going to build great software, both things are important.

Try Visual Studio Code for yourself. Download it now. Learn how to use Node.js on Azure by downloading Learning Node.js with Azure for free from Microsoft.
Google Cloud Next’19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more

Bhagyashree R
10 Apr 2019
6 min read
Google Cloud Next '19 kicked off yesterday in San Francisco. On day one of the event, Google showcased its new tools for application developers, announced partnerships with open-source companies, and outlined its strategy to make a mark in the cloud industry, which is currently dominated by Amazon and Microsoft. Here's a rundown of the announcements Google made yesterday.

Google Cloud's new CEO is set to expand its sales team

Cloud Next '19 is the first event where the newly appointed Google Cloud CEO, Thomas Kurian, took the stage to share his plans for Google Cloud. He plans to make Google Cloud "the best strategic partner" for organizations modernizing their IT infrastructure. To step up its game in the cloud industry, Google needs to put more focus on understanding its customers, providing them better support, and making it easier for them to conduct business. This is why Kurian is planning to expand the sales team and add more technical specialists. Kurian, who joined Google after working at Oracle for 22 years, also shared that the team is rolling out new contracts to make contracting easier, and promised simplified pricing.

Anthos, Google's hybrid cloud platform, is coming to AWS and Azure

During the opening keynote, Sundar Pichai, Google's CEO, confirmed the rebranding of Cloud Services Platform, a platform for building and managing hybrid applications, as it enters general availability. The rebranded version, named Anthos, provides customers a single managed service that is not limited to Google-based environments and comes with extended support for Amazon Web Services (AWS) and Azure. With this extended support, Google aims to give organizations with a multi-cloud sourcing strategy a more consistent experience across all three clouds.

Urs Hölzle, Google's Senior Vice President for Technical Infrastructure, shared in a press conference, “I can't really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it's not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three clouds - and actually on-premise environments, too - look the same.”

Along with this extended support, another plus point of Anthos is that it is hardware agnostic, which means customers can run the service on top of their current hardware without having to immediately invest in new servers. It is a subscription-based service, with prices starting at $10,000/month per 100 vCPU block.

Google also announced the first beta release of Anthos Migrate, a service that auto-migrates VMs from on-premises or other clouds directly into containers in Google Kubernetes Engine (GKE) with minimal effort. Explaining the advantage of this tool, Google wrote in a blog post, “Through this transformation, your IT team is free from managing infrastructure tasks like VM maintenance and OS patching, so it can focus on managing and developing applications.”

Google Cloud partners with top open-source projects, challenging AWS

Google has partnered with several top open-source data management and analytics companies, including Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j, and Redis Labs. The services and products provided by these companies will be deeply integrated into the Google Cloud Platform. With this integration, Google aims to provide customers a seamless experience by allowing them to use these open source technologies in a single place, Google Cloud.
These will be managed services, and the invoicing and billing of these services will be handled by Google Cloud. Customer support will also be Google's responsibility, so that users can manage and log tickets across all of these services via a single platform.

Google's approach of partnering with these open source companies is quite different from that of other cloud providers. Over the past few years, we have come across cases where cloud providers sell open-source projects as a service, often without giving any credit to the original project. This has led companies to revisit their open-source licenses to stop such behavior. For instance, Redis adopted the Commons Clause license for its Redis Modules and later dropped its revised license in February. Similarly, MongoDB, Neo4j, and Confluent embraced similar strategies. Kurian said, “In order to sustain the company behind the open-source technology, they need a monetization vehicle. If the cloud provider attacks them and takes that away, then they are not viable and it deteriorates the open-source community.”

Cloud Run for running stateless containers serverlessly

Google has combined serverless computing and containerization into a single product called Cloud Run. Yesterday, Oren Teich, Director of Product Management for Serverless, announced the beta release of Cloud Run and explained how it works. Cloud Run is a managed compute platform for running stateless containers that can be invoked via HTTP requests. It is built on top of Knative, a Kubernetes-based platform for building, deploying, and managing serverless workloads. You get two options to choose from: you can run your containers fully managed with Cloud Run, or in your own Google Kubernetes Engine cluster with Cloud Run on GKE.

Announcing the release of Cloud Run, Teich wrote in a blog post, “Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We're taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it's end-to-end managed.”

Google releases closed source VS Code plugin

Google announced the beta release of “Cloud Code for VS Code” as a closed source extension. It extends VS Code to bring the convenience of IDEs to developing cloud-native Kubernetes applications, aiming to speed up the build, deployment, and debugging cycle. You can deploy your applications to either local clusters or across multiple cloud providers. Under the hood, Cloud Code for VS Code uses Google's popular command-line tools, such as skaffold and kubectl, to give users continuous feedback as they build their projects. It also supports deployment profiles that let you define different environments, making testing and debugging easier on your workstation or in the cloud.

Cloud SQL now supports PostgreSQL 11.1 Beta

Cloud SQL is Google's fully managed database service that makes it easy to set up, maintain, manage, and administer relational databases on GCP. It now comes with support for PostgreSQL 11.1 (beta), alongside the following relational databases:

MySQL 5.5, 5.6, and 5.7
PostgreSQL 9.6

Google's Cloud Healthcare API is now available in beta
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google Podcasts is transcribing full podcast episodes for improving search results
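To give a feel for Cloud Run's contract, here is a minimal sketch of a container-ready Node.js server. The only real requirement reflected here is that the container listens on the port passed in the PORT environment variable (8080 is the conventional default); the route and response are illustrative.

```javascript
// server.js - a minimal container-ready HTTP server for Cloud Run.
// Cloud Run injects the listening port via the PORT environment
// variable; 8080 is the conventional default. The response body is
// a placeholder.
const http = require('http');

const port = process.env.PORT || 8080;

http
  .createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello from a stateless container!\n');
  })
  .listen(port, () => {
    console.log(`Listening on port ${port}`);
  });
```

Package that with a simple Dockerfile, and Cloud Run takes care of the URL, TLS, and scaling described in Teich's quote.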
Chef goes open source, ditching the Loose Open Core model

Richard Gall
02 Apr 2019
5 min read
Chef, the infrastructure automation tool, has today revealed that it is going completely open source. In doing so, the project is ditching the Loose Open Core model. The news is particularly intriguing as it comes at a time when the traditional open source model appears to be facing challenges around its future sustainability. However, it would appear that, from Chef's perspective, the switch to a full open source license is being driven by a crowded marketplace where automation tools are finding it hard to gain a foothold inside organizations trying to automate their infrastructure. A further challenge for this market is what Chef has identified as 'The Coded Enterprise' - essentially, technologically progressive organizations driven by an engineering culture where infrastructure is primarily viewed as code.

Read next: Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity

Why is Chef going open source?

As you might expect, there's actually more to Chef's decision than pure commercialism. To get a good understanding, it's worth picking apart Chef's open core model and how it was limiting the project.

The limitations of Open Core

The Loose Open Core model has open source software at its center but is wrapped in proprietary software. So, it's open at its core, but largely proprietary in how it is deployed and used by businesses. While at first glance this might make it easier to monetize the project, it also severely limits the project's ability to evolve and develop according to the needs of the people who matter - the people who use it.

Indeed, one way of thinking about it is that the open core model positions your software as a product - something that is defined by product managers and lives and dies by its stickiness with customers. By going open source, your software becomes a project, something that is shared and owned by a community of people who believe in it. Speaking to TechCrunch, Chef co-founder Adam Jacob said, "in the open core model, you're saying that the value is in this proprietary sliver. The part you pay me for is this sliver of its value. And I think that's incorrect... the value was always in the totality of the product."

Read next: Chef Language and Style

Removing the friction between product and project

Jacob published an article on Medium expressing his delight at the news. It's an instructive look at how Chef has been thinking about itself and the challenges it faces. "Deciding what's in, and what's out, or where to focus, was the hardest part of the job at Chef," Jacob wrote. "I'm stoked nobody has to do it anymore. I'm stoked we can have the entire company participating in the open source community, rather than burning out a few dedicated heroes. I'm stoked we no longer have to justify the value of what we do in terms of what we hold back from collaborating with people on."

So, what's the deal with the Chef Enterprise Automation Stack?

As well as announcing that Chef will be open sourcing its code, the organization also revealed that it is bringing together Chef Automate, Chef Infra, Chef InSpec, Chef Habitat, and Chef Workstation in one single solution: the Chef Enterprise Automation Stack. The point here is to simplify Chef's offering to its customers, making it easier for them to do everything they can to properly build and automate reliable infrastructure. Corey Scobie, SVP of Product and Engineering, said that "the introduction of the Chef Enterprise Automation Stack builds on [the switch to open source]...
aligning our business model with our customers' stated needs through Chef software distribution, services, assurances and direct engagement. Moving forward, the best, fastest, most reliable way to get Chef products and content will be through our commercial distributions.”

So, essentially, the Chef Enterprise Automation Stack will be the primary Chef distribution available commercially, sitting alongside the open source project.

What does all this mean for Chef customers and users?

If you're a Chef user or have any questions or concerns, the team has put together a very helpful FAQ. You can read it here.

The key points for Chef users

Existing commercial and non-commercial users don't need to do anything - everything will continue as normal. However, anyone else using current releases should be aware that support will be removed from those releases in 12 months' time. The team has clarified that "customers who choose to use our new software versions will be subject to the new license terms and will have an opportunity to create a commercial relationship with Chef, with all of the accompanying benefits that provides."

A big step for Chef - could it help determine the evolution of open source?

This is a significant step for Chef, and it will be of particular interest to its users. But even for those who have no interest in Chef, it's nevertheless a story indicating that there's a lot of life in open source despite the challenges it faces. It'll certainly be interesting to see whether Chef makes it work, and what impact it has on the configuration management marketplace.