
What do we really mean when we say that software is ‘dead’ or ‘dying’?

Richard Gall
22 Oct 2019
12 min read
“The report of my death was an exaggeration,” Mark Twain once wrote in a letter to journalist Frank Marshall White. Twain's quip is a fitting refrain for much of the software industry. Year after year there is a new wave of opinion from experts declaring this or that software or trend to be dead or, if it's lucky, merely dying. I was inclined to think this was a relatively new phenomenon, but the topic comes up on Jeff Atwood's blog Coding Horror as far back as 2009. Atwood quotes from an article by influential software engineer Tom DeMarco in which DeMarco writes that “software engineering is an idea whose time has come and gone.” (The hyperlink that points to DeMarco's piece is, ironically, now dead.) So, it's clearly not something new to the software industry. In fact, this rhetorical trope can tell us a lot about identity, change, and power in tech.

Declaring something to be dead is an expression of insecurity, curiosity and sometimes plain obnoxiousness. Consider these questions from Quora - all of them appear to reflect an odd combination of status anxiety and technical interest:

Is DevOps dead?
Is React.js dead?
Is Linux dead? Why? (yes, really)
Is web development a dying career? (It's also something I wrote about last year.)

These questions don't come out of a vacuum. They're responses to existing opinions and discussions that are ongoing in different areas. To a certain extent they're valuable: asking questions like those above and using these sorts of metaphors are ways of assessing the value and relevance of different technologies. That being said, they should probably be taken with a pinch of salt. Although they can be an indicator of community feeling towards a particular approach or tool (the plural of anecdote isn't exactly data, but it's not as far off as many people pretend it is), they often tell you as much about the person doing the talking as the thing they're talking about.

To be explicit: saying something is dead or dying is very often a mark of perspective or even identity. This might be frustrating, but it nevertheless provides an insight into how different technologies are valued or being used at any given time. What's important, then, is to be mindful about why someone would describe this or that technology as dead. What might they really be trying to say?

“X software is dead because companies just aren't hiring for it”

One of the reasons you might hear people proclaim a certain technology to be dead is because companies are, apparently, no longer hiring for those skills. It stops appearing on job boards; it's no longer 'in-demand'. While this undoubtedly makes sense, and it's true that what's in demand will shift and change over time, all too often assertions that 'no one is hiring x developers any more' are anecdotal.

For example, although there have been many attempts to start rumors that JavaScript is 'dead' or that its time has come - like this article from a few years back in which the writer claims that JavaScript developers have been “mind-fucked into thinking that JavaScript is a good programming language” - research done earlier this year shows that 70% of companies are searching for JavaScript developers. So, far from dead. In the same survey Java came out as another programming language that is eagerly sought out by companies: 48% were on the lookout for Java developers.
And while this percentage has almost certainly decreased over the last decade (admittedly I couldn't find any research to back this up), that's still a significant chunk to debunk the notion that Java is 'dead' or 'dying.'

Software ecosystems don't develop chronologically

This write-up of the research by freeCodeCamp argues that it shows that variation can be found “not between tech stacks but within them.” This suggests that the tech world isn't quite as Darwinian as it's often made out to be. It's not a case of life and death, but instead different ecosystems all evolving in different ways and at different times. So, yes, maybe there is some death (after all, there aren't as many people using Ember or Backbone as there were in 2014), but, as with many things, it's actually a little more complicated…

“X software is dead because no one's learning it”

If no one's learning something, it's presumably a good sign that X technology is dead or dying, right? Well, to a certain extent. It might give you an indication of how wider trends are evolving and even the types of problems that engineers and their employers are trying to solve, but again, it's important to be cautious. It sounds obvious, but just because no one seems to be learning something, it doesn't mean that people aren't. So yes, in certain developer communities it might seem strange to imagine that people are still learning Java. But with such significant employer demand there are undoubtedly thousands of people trying to get to grips with it. Indeed, they might well be starting out in their careers - but you can't overlook the importance of established languages as a stepping stone into more niche and specialised roles and into more 'exclusive' communities.

That said, what people are learning can be instructive in the context of the argument made in the freeCodeCamp article mentioned above. If variation inside tech stacks is where change and fluctuation are actually happening, then the fact that people are learning a given library or framework will give a clear indication as to how that specific ecosystem is evolving.

Software only really dies when its use cases do

But even then, it's still a leap to say that something's dead. It's also somewhat misleading and unhelpful. So, although it might be the case that more people are learning React or Kotlin than other related technologies, that doesn't cancel out the fact that those other technologies may still have a part to play in a particular use case.

Another important aspect to consider when thinking about what people are learning is that it's often part of the whole economics of the hype cycle. The more something gets talked about, the more individual developers might be intrigued about how it actually works. This doesn't, of course, mean they would necessarily start using it for professional projects or that we'd start to see adoption across large enterprises. There is a mesh of different forces at play when thinking about tech life cycles - sometimes our metaphors don't really capture what's going on.

“It's dead because there are better options out there”

There's one thing I haven't really touched on that's important when thinking about death and decline in the context of software: the fact that there are always options. You use one tool, language, framework, library, whatever, because it's the best suited to your needs. If one option comes to supersede another for whatever reason, it's only natural to regard what came before as obsolete in some way.
You can see this way of thinking on a large scale in the context of infrastructure - from virtual machines, to containers, to serverless, the technologies that enable each of those various phases might be considered 'dead' as we move from one to the other. Except that just isn't the case. While containerized solutions might be more popular than virtual machines, and while serverless hints at an alternative to containers, each of these different approaches is still very much in play. Indeed, you might even see these various approaches inside the same software architecture - virtual machines might make sense here, but for this part of an application over there serverless functions are the best option for the job.

With this in mind, throwing around the D word is - as mentioned above - misleading. In truth it's really just a way for the speaker to signal that X just doesn't work for them anymore and that Y is a much better option for what they're trying to do.

The vanity of performed expertise

And that's fine - learning from other people's experiences is arguably the best way to learn when it comes to technology (far better than, say, a one-dimensional manual or bone-dry documentation). But when we use the word 'dead' we hide what might actually still be interesting or valuable about a given technology. In our bid to signal our own expertise and knowledge we close down avenues of exploration. And vanity only feeds the potentially damaging circus of hype cycles and burnout even more.

So, if Kotlin really is a better option for you then that's great. But it doesn't mean Java is dead or dying. Indeed, it's more likely the case that what we're seeing are use cases growing and proliferating, with engineering teams and organizations requiring a more diverse set of options for an increasingly diverse and multi-faceted range of problems. If software does indeed die, then it's not really a linear process. Various use cases will all evolve and over time they will start to impact one another. Maybe eventually we'll see Kotlin replace Java as the language evolves to tackle a wider range of use cases.

“X software pays more money, so Y software must be dying”

The idea that certain technologies are worth more than others feeds into the narrative that certain technologies and tools are dying. But it's just a myth - and a dangerous one at that. Although there is some research on which technologies are the most high paying, much of it lacks context. So, although this piece on Forbes might look insightful (wow, HBase engineers earn more than $120K!) it doesn't really give you the wider picture of why these technologies command certain salaries. And, more to the point, it ignores the fact that these technologies are just tools used by people in certain job roles. Indeed, it's more accurate to say that big data engineers and architects are commanding high salaries than to think anything as trite as Kafka developers are really well-respected by their employers!

Talent gaps and industry needs

It's probably more useful to look at variation within specific job roles. By this I mean looking at what tools the highest earning full-stack developers or architects are using. At least that would be a little more interesting and instructive. But even then it wouldn't necessarily tell you whether something has 'died'. It would simply hint at two things: where the talent gaps are, and what organizations are trying to do.
That might give you a flavor of how something is evolving - indeed, it might be useful if you're a developer or engineer looking for a new job. However, it doesn't mean that something is dead. Java developers might not be paid a great deal, but that doesn't mean the language is dead. If anything, the opposite is true. It's alive and well with a massive pool of programmers from which employers can choose. The hype cycle might give us an indication of new opportunities and new solutions, but it doesn't necessarily offer the solutions we need right now.

But what about versioning? And end of life software?

Okay, these are important issues. In reality, yes, these are examples of when software really is dead. Well, not quite - there are still complications. For example, even as a new version of a particular technology is released, it still takes time for individual projects and wider communities to make the move. Even end of life software that's no longer supported by vendors or maintainers can still have an afterlife in poorly managed projects (the existence of articles like this suggests that this is more common than you'd hope). In a sense this is zombie software that keeps on living years after debates about whether it's alive or dead have ceased.

In theory, versioning should be the formal way through which we manage death in the software industry. But the fact that even then our nicely ordered systems still fail to properly bury and retire editions of software packages, languages, and tools highlights that in reality it's actually really hard to properly kill off software. For all the communities that want to kill software, there are always other groups, whether through force of will or plain old laziness, that want to keep it alive.

Conclusion: Software is hard to kill

Perhaps that's why we like to say that certain technologies are dead: not only do such proclamations help to signify how we identify (i.e. the type of developer we are), they're also a rhetorical trick that banishes nuance and complexity. If we are managing complexity and solving tricky and nuanced problems every day, the idea that we can simplify our own area of expertise into something that is digestible - quotable, even - is a way of establishing some sort of control in a field where it feels we have anything but.

So, if you're ever tempted to say that a piece of software is 'dead,' ask yourself what you really mean. And if you overhear someone obnoxiously proclaiming a framework, library or language to be dying, consider what they're trying to say. Are they just trying to make a point about themselves? What's the other side of the story? And even if they're almost correct, to what extent aren't they correct?

Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices

Vincy Davis
17 Oct 2019
5 min read
Yesterday, Microsoft announced the launch of two new open-source projects: Open Application Model (OAM) and Dapr. OAM, developed by Microsoft and Alibaba Cloud under the Open Web Foundation, is a specification that enables the developer to define a coherent model to represent an application. The Dapr project, on the other hand, will allow developers to build portable microservice applications using any language and framework, for new or existing code.

Open Application Model (OAM)

In OAM, an application is made of many components, like a MySQL database or a replicated PHP server with a corresponding load balancer. These components are then used to build an application, enabling platform architects to utilize reusable components for the easy building of reliable applications. OAM will also empower application developers to separate the application description from the application deployment details, allowing them to focus on the key elements of their application instead of its operational details.

Microsoft also asserted that OAM has some unique characteristics, such as being platform agnostic. The official blog states, “While our initial open implementation of OAM, named Rudr, is built on top of Kubernetes, the Open Application Model itself is not tightly bound to Kubernetes. It is possible to develop implementations for numerous other environments including small-device form factors, like edge deployments and elsewhere, where Kubernetes may not be the right choice. Or serverless environments where users don't want or need the complexity of Kubernetes.”

Another important feature of OAM is its design extensibility. OAM also enables platform providers to expose the unique characteristics of their platform through the trait system, which will help them build cross-platform apps wherever the necessary traits are supported.

In an interview with TechCrunch, Microsoft Azure CTO Mark Russinovich said that currently, Kubernetes is “infrastructure-focused” and does not provide any resource to build a relationship between the objects of an application. Russinovich believes that OAM will solve the problem that many developers and ops teams are facing today. Commenting on the cooperation with Alibaba Cloud on this specification, Russinovich observed that both companies encountered the same problems when they talked to their customers and internal teams. He further said that over time, Alibaba Cloud will launch a managed service based on OAM, and chances are that Microsoft will do the same.

The Dapr project for building microservice applications

This is an alpha release of Dapr, an event-driven runtime to help developers build resilient, stateless and stateful microservice applications for the cloud and edge. It also allows applications to be built using any programming language and developer framework. “In addition, through the open source project, we welcome the community to add new building blocks and contribute new components into existing ones. Dapr is completely platform agnostic, meaning you can run your applications locally, on any Kubernetes cluster, and other hosting environments that Dapr integrates with. This enables developers to build microservice applications that can run on both the cloud and edge with no code changes,” stated the official blog.

[Image source: Microsoft]

APIs in Dapr are exposed as a sidecar architecture (either as a container or as a process) and do not require the application code to include any Dapr runtime code. This makes Dapr easy to integrate with other runtimes and keeps the application logic separate, improving supportability.
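To make the sidecar model concrete, here is a minimal sketch of what calling Dapr's state API from application code might look like. The port and endpoint paths are assumptions based on Dapr's documented defaults at the time of this alpha release, so check the documentation for the exact API shape of the version you run; the point is simply that the app talks to a local sidecar over plain HTTP and contains no Dapr-specific runtime code.

```python
# Hypothetical example: saving and reading state through a local Dapr sidecar.
# The port (3500) and the /v1.0/state paths are assumptions based on Dapr's
# documented defaults; verify them against the docs for your Dapr version.
import requests

DAPR_PORT = 3500
BASE_URL = f"http://localhost:{DAPR_PORT}/v1.0"

# Persist a key/value pair via the sidecar -- no Dapr SDK or runtime code needed.
requests.post(
    f"{BASE_URL}/state",
    json=[{"key": "order-42", "value": {"status": "paid"}}],
)

# Read the value back, possibly from another instance of the service.
response = requests.get(f"{BASE_URL}/state/order-42")
print(response.json())
```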
[Image source: Microsoft]

Building blocks of Dapr

Resilient service-to-service invocation: enables method calls, including retries, on remote services, wherever they are running in the supported hosting environment.
State management for key/value pairs: allows long-running, highly available, stateful services to be written easily alongside stateless services in the same application.
Publish and subscribe messaging between services: enables event-driven architectures to simplify horizontal scalability and makes them resilient to failure.
Event-driven resource bindings: helps in building event-driven architectures for scale and resiliency by receiving and sending events to and from any external resource, such as databases, queues, file systems, blob stores, webhooks, and so on.
Virtual actors: a pattern for stateless and stateful objects that makes concurrency simple, with method and state encapsulation. Dapr also provides state and life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.
Distributed tracing between services: enables easy diagnosis of inter-service calls in production using the W3C Trace Context standard. It also allows events to be pushed to tracing and monitoring systems.

Users have liked both open-source projects, especially Dapr. A user on Hacker News comments, “I'm excited by Dapr! If I understand it correctly, it will make it easier for me to build applications by separating the "plumbing" (stateful & handled by Dapr) from my business logic (stateless, speaks to Dapr over gRPC). If I build using event-driven patterns, my business logic can be called in response to state changes in the system as a whole. I think an example of stateful "plumbing" is a non-functional concern such as retrying a service call or a write to a queue if the initial attempt fails. Since Dapr runs next to my application as a sidecar, it's unlikely that communication failures will occur within the local node.”

https://twitter.com/stroker/status/1184810311263629315
https://twitter.com/ThorstenHans/status/1184513427265523712

Read next:
The new WebSocket Inspector will be released in Firefox 71
Made by Google 2019: Google's hardware event unveils Pixel 4 and announces the launch date of Google Stadia
What to expect from D programming language in the near future
An unpatched security issue in the Kubernetes API is vulnerable to a “billion laughs” attack
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements

The new WebSocket Inspector will be released in Firefox 71

Fatema Patrawala
17 Oct 2019
4 min read
On Tuesday, the Firefox DevTools team announced that the new WebSocket (WS) inspector will be available in Firefox 71. It is currently ready for developers to use in Firefox Developer Edition.

The WebSocket API is used to create a persistent connection between a client and a server. Because the API sends and receives data at any time, it is used mainly in applications requiring real-time communication. Although it is possible to work directly with the WS API, some existing libraries come in handy and help save time. These libraries can help with connection failures, proxies, authentication and authorization, scalability, and much more. The WS inspector in Firefox DevTools currently supports Socket.IO and SockJS, and more support is still a work in progress.
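For context, here is a minimal sketch of the kind of WebSocket traffic the inspector is designed to surface. It uses the third-party Python websockets package purely as an illustration (an assumption on my part - any WS client, including the browser's native WebSocket API, produces the same sort of frames), and the echo endpoint URL is hypothetical:

```python
# Minimal WebSocket client sketch: each send/recv below corresponds to a frame
# that a tool like the new inspector would display in its Messages panel.
# Requires the third-party 'websockets' package; the echo URL is a placeholder.
import asyncio
import websockets

async def main():
    async with websockets.connect("wss://echo.example.com") as ws:
        await ws.send('{"type": "ping"}')  # outgoing frame
        reply = await ws.recv()             # incoming frame
        print("received:", reply)

asyncio.run(main())
```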
Key features included in the Firefox WebSocket Inspector

1. The WebSocket Inspector is part of the existing Network panel UI in DevTools. It was already possible to filter the content for opened WS connections in the panel, but now you can also see the actual data transferred through WS frames.
2. The WS UI now offers a fresh new Messages panel that can be used to inspect WS frames sent and received through the selected WS connection.
3. There are Data and Time columns visible by default, and you can customize the interface to see more columns by right-clicking on the header.
4. The WS inspector currently supports the following WS protocols: plain JSON, Socket.IO, and SockJS. SignalR and WAMP will be supported soon.
5. You can use the pause/resume button in the Network panel toolbar to stop intercepting WS traffic.

The Firefox team is still working on a few things for this release, for example a binary payload viewer, indicating closed connections, more protocols such as SignalR and WAMP, and exporting WS frames.

For developers, this is a major improvement and the community is really happy with this news. One of them comments on Reddit, “Finally! Have been stuck rolling with Chrome whenever I'm debugging websocket issues until now, because it's just so damn useful to see the exact messages sent and received.”

Another user commented, “This came at the most perfect time... trying to interface with a Socket.IO server from a Flutter app is difficult without tools to really look at the internals and see what's going on.”

Some of them also feel that with such improvements in Firefox it will soon replace the current Chromium dominance. The comment reads, “I hope that in improving its dev tooling with things like WS inspection, Firefox starts to turn the tide from Chromium's current dominance. Pleasing webdevs seems to be the key to winning browser wars. The general pattern is, the devs switch to their preferred browser. When building sites, they do all their build testing against their favourite browser, and only make sure it functions on other browsers (however poorly) as an afterthought. Then everyone else switches to suit, because it's a better experience. It happened when IE was dominant (partly because of dodgy business practices, but also partly because ActiveX was more powerful than early JS). But then Firefox was faster and had [better] devtools and add-ons, so the devs switched to Firefox and everyone followed suit. Then Chrome came onto the scene as a faster browser with even better devtools, and now Chromium+Forks is over three quarters of the browser market share. A browser monopoly is bad for the web ecosystem, no matter what browser happens to be dominant.”

To know more about this news, check out the official announcement on the Firefox blog.

Read next:
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit
Mozilla brings back Firefox's Test Pilot Program with the introduction of Firefox Private Network Beta

Made by Google 2019: Google’s hardware event unveils Pixel 4 and announces the launch date of Google Stadia

Sandesh Deshpande
17 Oct 2019
5 min read
After Apple, Microsoft, and OnePlus, tech giant Google is the next to showcase its new range of products this 'techtober'. At its annual hardware event 'Made by Google' in New York City, Google launched a variety of new gear. Though the focus of this event was on Google's flagship phones, the Pixel 4 and Pixel 4 XL, the tech giant also announced the launch date of its highly anticipated gaming subscription service, Google Stadia.

The theme of the event was ambient computing and how it combines with artificial intelligence, wireless tech, and cloud computing. The theme was quite evident in all the products Google showcased at the event. “Our vision for ambient computing is to create a single, consistent experience at home, at work or on the go, whenever you need it,” said Rick Osterloh, Google's head of hardware. “Your devices and services work together,” Osterloh said. “And it's fluid so it disappears into the background.”

Let's look at the various products announced at 'Made by Google'.

Google Stadia to release on November 19

The event kicked off with the release date (November 19) of Google's revolutionary game streaming service, Stadia. With Google Stadia you can play games from your desktop, Google's Chrome web browser, smartphones, smart televisions, tablets, or Chromecast. Stadia is a cloud-based platform that streams games directly instead of rendering them on a console or a powerful local PC. Google also shared the inspiration behind the design of the Stadia controller.

https://youtube.com/watch?v=Pwb6d2wK3Qw

Pixel 4 and Pixel 4 XL unveiled

After much anticipation, Google finally launched the next phone in its flagship Pixel series in two variants: the Pixel 4 and the Pixel 4 XL. The Pixel 4 has a 5.7″ screen with a 2,800mAh battery, while the Pixel 4 XL comes in at 6.3″ with a 3,700mAh battery. They're both running on the Snapdragon 855 chipset with 6GB of RAM. Both phones come with dual rear cameras and a display with a 90Hz refresh rate. They also have “Project Soli radar” powering face unlock and gesture recognition, allowing you to switch songs, snooze alarms or silence calls with just a wave of your hand. The Pixel 4 also comes with crash detection (US only for now), which can recognize if you've been in an automobile accident and automatically call 911 for you.

https://youtube.com/watch?v=_F7YRde8DuE

The Pixel 4 also has an advanced recording app that can both record and transcribe audio at the same time. It can also identify specific words and sounds, allowing you to search specific parts of a recording with an inbuilt search function. This search processing happens on your local device.

One cannot talk about Pixel phones without mentioning camera features. Both the Pixel 4 and Pixel 4 XL come with three cameras: a 12.2-megapixel f/1.7 main camera, a 16-megapixel f/2.4 telephoto rear camera, and an 8-megapixel selfie camera. For video, these cameras can record 4K at 30 fps and 1080p at 120 fps.

The Google Pixel 4 will release on October 24 but is available for pre-order today. The 64GB version of the Pixel 4 will start from $799, with a 128GB option available for $899. The 64GB Pixel 4 XL will start from $899, with the option for 128GB at $999.

New wireless headphones: Pixel Buds 2

Google also announced its next-generation wireless headphones, the Pixel Buds 2.
Now you can get direct access to the Google Assistant with the 'Hey Google' command, and the Pixel Buds also support long-range Bluetooth connectivity (according to Google, the Pixel Buds will remain connected to your phone within the range of three rooms, or the length of a football field). The Pixel Buds will be available in Spring 2020 for around $179.

https://youtube.com/watch?v=2MmAbDJK8YY

New Chrome OS laptop: Pixelbook Go

Google has refreshed its Chromebook series and launched a new product, the Pixelbook Go. “We wanted to create a thin and light laptop that was really fast, and also have it last all day. And of course, we wanted it to look and feel beautiful,” said Google's Ivy Ross, VP and head of hardware design, at the event. Weighing only two pounds and measuring 13mm thin, the Pixelbook Go is portable while also being loaded with 16GB of RAM and up to 256GB of storage. The company has promised around 12 hours of battery life. The base model starts at $649.

[Source: Google Blog]

Google Home Mini is now the 'Nest Mini'

Google's Home Mini assistant smart speaker has been renamed the 'Nest Mini'. It is now made of recycled plastic bottles, is wall-mountable, and includes a machine learning chip for faster response times. The smart speaker also has additional microphones suitable for louder environments. The Nest Mini will be available from October 22 for $49.

[Source: Google Blog]

Google also launched Nest WiFi, a combined router and smart speaker that is faster and features 25% better coverage compared to its predecessor, Google WiFi. The routers come in a 2-pack for $269 or a 3-pack for $349 and go on sale on November 4.

You can watch the event on YouTube.

Read next:
Bazel 1.0, Google's polyglot build system switches to semantic versioning for better stability
Google Project Zero discloses a zero-day Android exploit in Pixel, Huawei, Xiaomi and Samsung devices
Google Chrome Keystone update can render your Mac system unbootable

Python 3.8 is now available with walrus operator, positional-only parameters, support for Vectorcall, and more

Sugandha Lahoti
15 Oct 2019
6 min read
Yesterday, the latest version of the Python programming language, Python 3.8, was made available with multiple new improvements and features. Features include the new walrus operator and positional-only parameters, runtime audit hooks, Vectorcall (a fast calling protocol for CPython), and more. Earlier this month, the team behind Python announced the release of Python 3.8b2, the second of four planned beta releases.

What's new in Python 3.8

PEP 572: New walrus operator in assignment expressions

Python 3.8 has a new walrus operator := that assigns values to variables as part of a larger expression. It is useful when matching regular expressions where match objects are needed twice. It can also be used with while-loops that compute a value to test loop termination and then need that same value again in the body of the loop. It can also be used in list comprehensions where a value computed in a filtering condition is also needed in the expression body.

The walrus operator was proposed in PEP 572 (Assignment Expressions) by Chris Angelico, Tim Peters, and Guido van Rossum last year. Since then it has been heavily discussed in the Python community, with many questioning whether it is a needed improvement. Others are excited, as the operator does make the code more readable. One user commented on HN, “The "walrus operator" will occasionally be useful, but I doubt I will find many effective uses for it. Same with the forced positional/keyword arguments and the "self-documenting" f-string expressions. Even when they have a use, it's usually just to save one line of code or a few extra characters.”

https://twitter.com/reynoldsnlp/status/1183498971425042433
https://twitter.com/jakevdp/status/1140071525870997504
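A short sketch of the patterns described above (regular-expression matching, loop termination, and comprehensions), runnable on Python 3.8 or later:

```python
import re

# Reuse a regex match object without calling re.search() twice.
log_line = "GET /index.html -> code=404"
if (match := re.search(r"code=(\d+)", log_line)):
    print("status:", match.group(1))

# Compute a value, test it for loop termination, and reuse it in the body.
chunks = iter([b"data", b"more", b""])
while (chunk := next(chunks)):
    print("read", len(chunk), "bytes")

# Reuse a value computed in a comprehension's filter inside its body.
numbers = [4, -1, 9, 16]
halved = [half for n in numbers if (half := n / 2) > 1]
print(halved)  # [2.0, 4.5, 8.0]
```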
PEP 570: New function parameter syntax for positional-only parameters

Python 3.8 has a new function parameter syntax, /, to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments. This notation allows pure Python functions to fully emulate the behavior of existing C-coded functions. It can be used to preclude keyword arguments when the parameter name is not helpful. It also allows the parameter name to be changed in the future without the risk of breaking client code.

As with PEP 572, this proposal also got mixed reactions from Python developers. In support, one developer said, “Position-only parameters already exist in cpython builtins like range and min. Making their support at the language level would make their existence less confusing and documented.” Others think that this will allow authors to “dictate” how their methods can be used. “Not the biggest fan of this one because it allows library authors to overly dictate how their functions can be used, as in, mark an argument as positional merely because they want to. But cool all the same,” a Redditor commented.

PEP 578: Python Audit Hooks and Verified Open Hook

Python 3.8 now has an audit hook and a verified open hook. These hooks allow applications and frameworks written in pure Python code to take advantage of extra notifications. They also allow embedders or system administrators to deploy builds of Python where auditing is always enabled. These are available from Python and native code.

PEP 587: New C API to configure the Python initialization

Though Python is highly configurable, its configuration is scattered all around the code. Python 3.8 adds a new C API to configure the Python initialization, providing finer control over the whole configuration and better error reporting. This PEP also adds _PyRuntimeState.preconfig (PyPreConfig type) and PyInterpreterState.config (PyConfig type) fields to internal structures. PyInterpreterState.config becomes the new reference configuration, replacing global configuration variables and other private variables.

PEP 590: Provisional support for Vectorcall, a fast calling protocol for CPython

A currently provisional Vectorcall protocol has been added to the Python/C API. It is meant to formalize existing optimizations that were already done for various classes. Any extension type implementing a callable can use this protocol. It will be made fully public in Python 3.9.

PEP 574: Pickle protocol 5 supports out-of-band data buffers

Pickle protocol 5 introduces support for out-of-band buffers. This means PEP 3118-compatible data can be transmitted separately from the main pickle stream, at the discretion of the communication layer.

Parallel filesystem cache for compiled bytecode files

There is a new PYTHONPYCACHEPREFIX setting that configures the implicit bytecode cache to use a separate parallel filesystem tree, rather than the default __pycache__ subdirectories within each source directory.

Python uses the same ABI whether it's built in release or debug mode

With Python 3.8, Python uses the same ABI whether it's built in release or debug mode. On Unix, when Python is built in debug mode, it is now possible to load C extensions built in release mode and C extensions built using the stable ABI. On Unix, C extensions are no longer linked to libpython except on Android and Cygwin. Also, on Unix, when Python is built in debug mode, import now also looks for C extensions compiled in release mode and for C extensions compiled with the stable ABI.

f-strings now have a = specifier

Formatted strings (f-strings) were introduced in Python 3.6 with PEP 498. They enable you to evaluate an expression as part of the string, along with inserting the result of function calls and so on. Python 3.8 adds a = specifier to f-strings for self-documenting expressions and debugging. An f-string such as f'{expr=}' will expand to the text of the expression, an equals sign, then the representation of the evaluated expression.

One developer expressed their delight on Hacker News, “F strings are pretty awesome. I'm coming from JavaScript and partially java background. JavaScript's String concatenation can become too complex and I have difficulty with large strings.” Another developer said, “The expansion of f-strings is a welcome addition. The more I use them, the happier I am that they exist.” Someone added to this, “This makes clean string interpolation so much easier to do, especially for print statements. It's almost hard to use python < 3.6 now because of them.”

New metadata module

Python 3.8 has a new importlib.metadata module that provides (provisional) support for reading metadata from third-party packages. It can, for instance, extract an installed package's version number, list of entry points, and more.

You can go through the other improved modules, language changes, build and C API changes, and API and feature removals in Python 3.8 on the Python docs. For full details, see the changelog.

Read next:
Python 3.8b2 new features: the walrus operator, positional-only parameters, and much more
Python 3.8 beta 1 is now ready for you to test
Łukasz Langa at PyLondinium19: “If Python stays synonymous with CPython for too long, we'll be in big trouble”
PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3

Can DevOps promote empathy in software engineering?

Richard Gall
14 Oct 2019
8 min read
If DevOps is, at a really basic level, about getting different teams to work together, then you could say that DevOps is a discipline that promotes empathy. It's an interesting idea, and one that's explored in Viktor Farcic's book The DevOps Paradox. What's particularly significant about the notion of empathy existing inside DevOps is that it could help us to focus on exactly what we're trying to achieve by employing it. In turn, this could help us develop or evolve the way we actually do DevOps - so, instead of worrying about a new toolchain, or new platform purchases, we could, perhaps, just explore ways of getting people to simply understand what their respective needs are.

However, in The DevOps Paradox there are a number of different insights on empathy in DevOps and what it means not just for the field itself, but also for its place in a modern business. Let's take a look at some DevOps experts' thoughts on empathy and DevOps.

“Empathy helps developers put the user at the center of what they do”

Jeff Sussna (@jeffsussna) is an IT consultant and coach who helps organizations to design, build and deliver products quickly.

“There's a lot of confusion and anxiety about [empathy's] meaning, and a lot of people tend to misunderstand it. Sometimes people think empathy means wallowing in someone else's pain. In fact, there's actually a philosopher from Yale University who is now putting out the idea that empathy is actually bad, and that it's the cause of all of the world's problems and what we need instead is compassion.

“From my perspective, that represents a misunderstanding of both empathy and compassion, but my favorite is when people say things like, "Sociopaths are really good at empathizing". My answer to that is, if you have a sociopath in your organization, you have a much bigger problem, and DevOps isn't going to solve it. At that point, you have an HR problem. What you need to distinguish between is emotional empathy and cognitive empathy, and I use cognitive empathy in the context of DevOps in a very simple way, which is the ability to think about things as if from another's perspective.

“If you're a developer and you think, 'What is the experience of deploying and running my application going to be?' you're thinking about it from the perspective of the operations person.

“If you're an operations person and you're thinking in terms of, 'What is the experience going to be when you need to spin up a test server in a matter of hours in order to test a hotfix because all of your testing swim lanes are full of other things, and what does that mean for my process of provisioning servers?' then you're thinking about things from the tester's point of view.

“And so, to me, that's empathy, and that's empathizing, which is really at the heart of customer service. It's at the heart of design thinking, and it's at the heart of product development. What is it that our customers are trying to accomplish, what help do they need from us, and how can we help them?”

Listen: Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]

“As soon as you have empathy you can understand why you provide value”

Damien Duportal (@DamienDuportal) is a Developer Advocate at Traefik, Containous' cloud native edge router.

“If you have a tool that helps you to share empathy, then you have a great foundation for starting the conversation. Even if this seems boring to engineers, at least they'll start talking and listening to each other.
“I mean, once they've stopped debating sterile tabs versus spaces or JavaScript versus Java - or whatever sterile debate it is - they'll have to focus on the value they're going to provide. So, this is really how I would sum up DevOps, which again is about how you bring empathy back and focus on the value creation and interaction side of IT.

“Empathy is one of the most advanced bricks you can have for building human interaction. If we are able to achieve so many different things - with different people, different opinions, and different cultures - it's because we, as humans, are capable of having high levels of empathy.

“As soon as you have empathy, you can understand why you provide value. If you don't, then what's the point of trying to create value? It will only be from your point of view, and there are over seven billion other people in the world. So, ultimately, we need empathy to understand what we are going to do with our tools.”

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

“Let's not wait for culture to change: culture is in the rearview mirror”

As CSO of PraxisFlow, Kevin Behr spends his time working with clients who seek to develop their DevOps process. His 25 years of experience have been driven by a passion for engaging with the complex problems that large IT organizations face, and how we can use DevOps to solve them. You can follow Kevin on Twitter at @kevinbehr.

“What do we mean when we talk about empathy in DevOps? We're saying that we understand what it feels like to do what you're doing and that I'll never do that to you again. So, let's build a system together that will allow us to never be there.

“DevOps to me has evolved into a lot of tools because we're humans, and humans love tools of all kinds. As a species, we've defined ourselves by our tools and technologies. And, as a species, we also talk about culture a lot, but, to my mind, culture is a rearview mirror. Culture is just all the things that we've done: our organizational disposition. The way to change culture is to do things differently. Let's not wait for culture, because culture is in the rearview mirror: it's the past. If you're in a transition, then what are you transitioning toward and what does that mean about how you need to act?

“The very interesting thing about DevOps is that while its mission is frequently to create a change in the culture of an organization, this change requires far more than coordination: it also requires pure collaboration, and co-laboring. These can be particularly awkward to achieve given the likelihood that we haven't worked with the people in an organization before. And it can become intensely awkward when those people may have already made villains out of each other because they couldn't get what they wanted. The goal of the DevOps process is to create a new culture, despite these challenges...

“...When you manage to introduce empathy to a team, the development and the operations people seem finally to come together. You suddenly hear someone in operations say, 'Oh, can we do that differently? When you threw that thing at me last time, it gave me a black eye and I had to stay up for four days straight!' And the developer is like, 'It did? How did it do that? Next time, if something happens, please call me, I want to come help.' That empathy of figuring out what went wrong, and working together, is what builds trust.”
“The CFO doesn't give a shit about empathy”

Chris Riley (@HoardingInfo) is a self-proclaimed bad coder turned editor of Sweetcode.io at fixate.io, a content marketing firm for those who sell to technical audiences. Through this, he's involved with DevOps, SecOps, big data, machine learning, and blockchain. He's a member of the DevOps Institute Board of Regents, a position he's held for over four years.

“...The CFO doesn't give a shit about empathy, and the person with the money may not care about that at all. The HR department might, but that's the problem with selling anything. You have to speak their language, and the CFO is going to respond to money. Either you're saving us money, or you're making us more money, and I think DevOps is doing both, which is cool. I think what's nice about that explanation is the fact it doesn't seem insurmountable. It's kind of like how Pixar was structured.

“After Steve Jobs started at Pixar, he structured all of the work environments where the idea was to create chance encounters among the employees, so that the graphic designer of one movie would talk to the application developer of another, even when they don't even have any real reason to interact with each other. The way they did it at Pixar was that, as everybody has to go to the bathroom, they put the bathrooms in a large communal area where these people are going to run into each other - that's what created that empathy. They understand what each other's job is. They're excited about each other's movies. They're excited about what they're working on, and they're aware of that in everything they do. It's a really good explanation.”

What do you think? Can DevOps help promote empathy inside engineering teams and across wider businesses? Or is there anything else we should be doing?

Modern web development: what makes it ‘modern’?

Richard Gall
14 Oct 2019
10 min read
The phrase 'modern web development' is one that I have clung to during years writing copy for Packt books. But what does it really mean? I know it means something because it sounds right - but there’s still a part of me that feels that it’s a bit vague and empty. Although it might sound somewhat fluffy the truth is that there probably is such a thing as modern web development. Insofar as the field has changed significantly in a matter of years, and things are different now from how they were in, say, 2013, modern web development can be easily characterised as all the things that are being done in web development in 2019 that are different from 5-10 years ago. By this I don’t just mean trends like artificial intelligence and mobile development (although those are both important). I’m also talking about the more specific ways in which we actually build web projects. So, let’s take a look at how we got to where we are and the various ways in which ‘modern web development’ is, well, modern. The story of modern web development: how we got here It sounds obvious, but the catalyst for the changes that we currently see in web development today is the rise of mobile. Mobile and the rise of the web applications There are a few different parts to this that have got us to where we are today. In the first instance the growth of mobile in the middle part of this decade (around 2013 or 2014) initiated the trend of mobile-first or responsive web design. Those terms might sound a bit old-hat. If they do, it’s a mark of how quickly the web development world has changed. Primarily, though this was about appearance and UI - making web properties easy to use and navigate on mobile devices, rather than just desktop. Tools like Bootstrap grew quickly, providing an easy and templated way to build mobile-first and responsive websites. But what began as a trend concerned primarily with appearance later shifted as mobile usage grew. This called for a more sophisticated approach as mobile users came to expect richer and faster web experiences, and businesses a new way to monetize these significant changes user behavior. Explore Packt's Bootstrap titles here. Lightweight apps for data-intensive user experiences This is where concepts like the single page web app came to the fore. Lightweight and dynamic, and capable of handling data-intensive tasks and changes in state, single page web apps were unique in that they handled logic in the browser rather than on the server. This was arguably a watershed in changing how we think about web development. It was instrumental in collapsing the well-established distinction between backend and front end. Behind this trend we saw a shift towards new technologies. Node.js quietly emerged on the scene (arguably its only in the last couple of years that its popularity has really exploded), and frameworks like Angular were at the height of their popularity. Find a massive range of Node.js eBooks and videos on the Packt store. Full-stack web development It’s around this time that full stack development started to accelerate as a trend. Look at Google trends. You can see how searches for the phrase have grown since the beginning of 2012: If you look closely, it’s around 2015 that the term undergoes a step change in the level of interest. Undoubtedly one of the reasons for this is that the relationship between client and server was starting to shift. This meant the web developer skill set was starting to change as well. 
As a web developer, you weren't only concerned with how to build the front end, but also with how that front end managed dynamic content and different states.

The rise and fall of Angular

A good corollary to this tale is the fate of AngularJS. While it rose to the top amidst the chaos and confusion of the mid-teens framework bubble, as the mobile revolution matured in a way that gave way to more sophisticated web applications, the framework soon became too cumbersome. And while Google - the framework's creator - aimed to keep it up to date with Angular 2 and subsequent versions, delays and missteps meant the project lost ground to React.

Indeed, this isn't to say that Angular is dead and buried. There are plenty of reasons to use Angular over React and other JavaScript tools if the use case is right. But it is nevertheless the case that the Angular project no longer defines web development to the extent that it used to.

Explore Packt's Angular eBooks and videos.

The fact that Ionic, the JavaScript mobile framework, is now backed by Web Components rather than Angular is an important indicator of what modern web development actually looks like - and how it contrasts with what we were doing just a few years ago.

The core elements of modern web development in 2019

So, there are a number of core components to modern web development that have evolved out of industry changes over the last decade. Some of these are tools, some are ideas and approaches. All of them are based on the need to manage a balance of complex requirements with performance and simplicity.

Web Components

Web Components are the most important element if we're trying to characterise 'modern' web development. The principle is straightforward: Web Components provide a set of reusable custom elements. This makes it easier to build web pages and applications without writing additional lines of code that add complexity to your codebase. The main thing to keep in mind here is that Web Components improve encapsulation. This concept, which is really about building in a more modular and loosely coupled manner, is crucial when thinking about what makes modern web development modern.

There are three main elements to Web Components:

Custom elements, a set of JavaScript APIs that you can call and define however you need them to work.
The shadow DOM, which acts as a DOM that's attached to individual elements on your page. This essentially isolates the resources that different elements and components need to work on your page, which makes them easier to manage from a development perspective and can unlock better performance for users.
HTML templates, which are bits of HTML that can be reused and called upon only when needed.

These elements together paint a picture of modern web development: one in which developers are trying to handle more complexity and sophistication while improving their productivity and efficiency.

Want to get started with Web Components? You can! Read Getting Started with Web Components.

React.js

One of the reasons that React managed to usurp Angular is the fact that it does many of the things that Google wanted Angular to do. Perhaps the most significant difference between React and Angular is that React tackles some of the scalability issues presented by Angular's two-way data binding (which was, for a while, incredibly exciting and innovative) with unidirectional data flow.
There's a lot of discussion around this, but by moving towards a singular model of data flow, applications can handle data on a much larger scale without running into problems. Elsewhere, concepts like the virtual DOM (which is distinct from the shadow DOM, more on that here) help to improve encapsulation for developers. Indeed, flexibility is one of the biggest draws of React. To use Angular you need to know TypeScript, for example. And although you can use TypeScript when working with React, it isn't essential. You have options.

Explore Packt's React.js eBooks and videos.

Redux, Flux, and how we think about application state

The growth of React has got web developers thinking more and more about application state. While this isn't something new, as applications have become more interactive and complex it has become more important for developers to take the issue of 'statefulness' seriously. Consequently, libraries such as Flux and Redux have emerged on the scene, which act as stores in which all the values that comprise an application's state can be kept.

This article on egghead.io explains why state is important in a clear and concise way: "For me, the key to understanding state management was when I realised that there is always state… users perform actions, and things change in response to those actions. State management makes the state of your app tangible in the form of a data structure that you can read from and write to. It makes your 'invisible' state clearly visible for you to work with."

Find Redux eBooks and videos from Packt. Or, check out a wide range of Flux titles.

APIs and microservices

One software trend that we haven't mentioned yet but that nevertheless remains important when thinking about modern web development is the rise of APIs and microservices. For web developers, the trend reinforces the importance of encapsulation and modularity that things like Web Components and React are designed to help with. Insofar as microservices are simplifying the development process but adding architectural complexity, it's not hard to see that web developers are having to think in a more holistic manner about how their applications interact with a variety of services and data sources.

Indeed, you could even say that this trend is only extending the growth of the full-stack developer as a job role. If development today is more about pulling together multiple different services and components, rather than different roles building different parts of a monolithic app, it makes sense that the demand for full-stack developers is growing. But there's another, more specific, way in which the microservices trend is influencing modern web development: micro frontends.

Micro frontends

Micro frontends take the concept of microservices and apply it to the frontend. Rather than just building an application in which the front end is powered by microservices (in a way that's common today), you also treat individual constituent parts of the frontend as a microservice. In turn, you build teams around each of these parts. So, perhaps one works on search, another on checkout, another on user accounts. This is more of an organizational shift than a technological one. But it again feeds into the idea that modern web development is something modular, broken up and - usually - full-stack.

Conclusion: modern web development is both a set of tools and a way of thinking

The web development toolset has been evolving for more than a decade.
Mobile was the catalyst for significant change, and has helped get us to a world that is modular, lightweight, and highly flexible. Heavyweight frameworks like AngularJS paved the way, but it appears an alternative has found real purchase with the wider development community. Of course, it won’t always be this way. And although React has dominated developer mindshare for a good three years or so (quite a while in the engineering world), something will certainly replace it at some point. But however the tool chain evolves, the basic idea that we build better applications and websites when we break things apart will likely stick. Complexity won’t decrease. Even if writing code gets easier, understanding how component parts of an application fit together - from front end elements to API integration - will become crucial. Even if it starts to bend your mind, and pose new problems you hadn’t even thought of, it’s clear that things are going to remain interesting as far as web development in the future is concerned.
Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more

Bhagyashree R
11 Oct 2019
5 min read
Yesterday, at the PyTorch Developer Conference, Facebook announced the release of PyTorch 1.3. This release comes with three experimental features: named tensors, 8-bit model quantization, and PyTorch Mobile. Along with these, Facebook also announced the general availability of Google Cloud TPU support and a newly launched integration with Alibaba Cloud.

Key updates in PyTorch 1.3

Named tensors for more readable and maintainable code

Though tensors are the building blocks of modern machine learning, researchers have argued that they are "broken." Tensors have their share of shortcomings: they expose private dimensions, broadcast based on absolute position, and keep type information in the documentation. PyTorch 1.3 tries to solve this problem by introducing experimental support for named tensors, which was proposed by Sasha Rush, an Associate Professor at Cornell Tech. He has built a library called NamedTensor, which serves as a "thin wrapper" on the Torch tensor. This update introduces a few changes to the API:

- Dimension access and reduction now use a 'dim' argument instead of an index.
- Constructing and adding dimensions requires a "name" argument.
- Functions now broadcast based on set operations, not through heuristic ordering rules.

8-bit model quantization for mobile-optimized AI

Quantization in deep learning is the method of approximating a neural network that uses 32-bit floating-point numbers with one that uses a lower-precision numerical format. It is used to reduce the bandwidth and compute requirements of deep learning models, which is essential for on-device applications with limited memory and compute. PyTorch 1.3 brings experimental support for 8-bit model quantization with the eager mode Python API for efficient deployment on servers and edge devices. This feature includes techniques like post-training quantization, dynamic quantization, and quantization-aware training. Moving from 32 bits to 8 bits can result in two to four times faster computations with one-quarter the memory usage.

PyTorch Mobile for more efficient on-device machine learning

Running machine learning models directly on edge devices is important because it reduces latency. This is why PyTorch 1.3 introduces PyTorch Mobile, which enables "an end-to-end workflow from Python to deployment on iOS and Android." The current release is experimental. In future releases, we can expect PyTorch Mobile to come with build-level optimization, selective compilation, support for the QNNPACK quantized kernel libraries and ARM CPUs, further performance improvements, and more.

Model interpretability and privacy tools in PyTorch 1.3

Captum and Captum Insights

Captum is an easy-to-use model interpretability library for PyTorch. It is backed by state-of-the-art interpretability algorithms such as Integrated Gradients, DeepLIFT, and Conductance to help developers improve and troubleshoot their models. Developers can identify the different features that contribute to a model's output and improve its design. Facebook has also published an early release of Captum Insights, an interpretability visualization widget built on top of Captum. It works across images, text, and other features to help users understand feature attribution. Check out Facebook's announcement to know more about Captum.

CrypTen

Machine learning via cloud-based platforms poses various security and privacy challenges. Facebook writes, "In particular, users of these platforms may not want or be able to share unencrypted data, which prevents them from taking full advantage of ML tools." PyTorch 1.3 comes with CrypTen, a framework for privacy-preserving machine learning that aims to make secure computing techniques accessible to machine learning practitioners. You can find more about CrypTen on GitHub.

Libraries for multimodal AI systems

- Detectron2: an object detection library implemented in PyTorch. It features support for the latest models and tasks and increased flexibility to aid computer vision research, along with improvements in maintainability and scalability to support production use cases.
- Fairseq gets speech extensions: with this release, Fairseq, a framework for sequence-to-sequence applications such as language translation, includes support for end-to-end learning for speech and audio recognition tasks.

The release of PyTorch 1.3 started a discussion on Hacker News, and naturally many developers compared it with TensorFlow 2.0. Here's what one user commented: "This is a common trend for being second in the market when we see Pytorch and TensorFlow 2.0, TF 2.0 was created to compete directly with Pytorch pythonic implementation (Keras based, Eager execution)." They further added, "Facebook at least on PyTorch has been delivering a quality product. Although for us running production pipelines TF is still ahead in many areas (GPU, TPU implementation, TensorRT, TFX and other pipeline tools) I can see Pytorch catching up on the next couple of years which by my prediction many companies will be running serious and advanced workflows and we may be able to see a winner there."

The named tensors implementation is being well received by the PyTorch community:

https://twitter.com/leopd/status/1182342855886376965
https://twitter.com/rasbt/status/1182647527906140161

These were some of the updates in PyTorch 1.3. Check out the official announcement by Facebook to know more.

PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more
PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Facebook open-sources PyText, a PyTorch based NLP modeling framework
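Before moving on, here is a minimal, illustrative sketch of the two headline features above: named tensors and dynamic 8-bit quantization. This is not code from Facebook's announcement; the tensor shapes and the tiny model are invented for this example, and since both features are experimental in 1.3, the exact calls (the names argument to factory functions, name-based reductions, and torch.quantization.quantize_dynamic) should be checked against the official documentation before use.

```python
import torch
import torch.nn as nn

# Named tensors (experimental): dimensions carry names instead of relying on position.
imgs = torch.randn(2, 3, 32, 32, names=('N', 'C', 'H', 'W'))
summed = imgs.sum('C')          # reduce over the channel dimension by name
print(summed.names)             # ('N', 'H', 'W')

# Dynamic 8-bit quantization (experimental): swap supported layers (here nn.Linear)
# for int8 versions after training, trading a little accuracy for speed and memory.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)                # Linear layers are replaced with dynamically quantized versions
```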
Puppet announces the public beta of Project Nebula

Savia Lobo
10 Oct 2019
3 min read
Today, Puppet announced the public beta of Project Nebula at Puppetize PDX, a two-day event (October 9-10) featuring user-focused DevOps and infrastructure delivery talks and hands-on workshops. Project Nebula is simplified workflow automation for the continuous deployment of cloud-native applications and infrastructure. It is designed for teams that are adopting cloud-native and serverless technologies and need an end-to-end workflow management system.

Also read: Puppet's 2019 State of DevOps Report highlights that integrating security into DevOps practices leads to better business outcomes.

Why Project Nebula?

Puppet worked closely with its private beta participants to understand their deployment workflows and pain points. In interviews, these participants said they want to adopt cloud-native technologies, but face multiple challenges in adopting containers, serverless infrastructure, microservices, and observability for even simple cloud-native applications. A major roadblock they highlighted is the lack of simple automation for composing multiple tools - for infrastructure provisioning, application deployment, and notifications - into an end-to-end deployment. Another is the lack of a cohesive platform that multiple teams can use to share workflows and best practices and build them into their own deployments.

"In-house efforts to build a deployment platform like this can take years, incur large maintenance and support costs, and often require specialized skill sets that many companies do not have today," the company states. Project Nebula aims to eliminate these roadblocks and give teams a consistent, easy-to-use experience for deploying cloud-native apps in a safe, secure, and continuous manner.

Listen: Puppet's VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]

A few features in Project Nebula

With a focus on ease of use and improved productivity, Project Nebula provides a single place to build, provision, and deploy cloud-native applications. Other notable features include:

- Built-in example workflows: these help users get started with their deployments rather than starting from a blank slate.
- Support for 20+ of the most popular cloud-native deployment tools as configurable steps within your deployment, including Terraform, CloudFormation, Helm, Kubectl, Kustomize, and more.
- Intuitive visualization: a bird's-eye view of the entire deployment workflow.
- Easy-to-compose deployment workflows: workflows are checked into the source control repository, eliminating the need to write messy, ad hoc bash scripts.

Know more about Puppet's Project Nebula on its official website.

Puppet launches Puppet Remediate, a vulnerability remediation solution for IT Ops
Puppet announces updates in a bid to help organizations manage their "automation footprint"
"This is John. He literally wrote the book on Puppet" – An Interview with John Arundel
Blizzard comes under fire after banning pro-player for expressing support for Hong Kong protests

Sugandha Lahoti
10 Oct 2019
6 min read
Update: The article has now been updated to include Blizzard's press release about relaxing the ban on the pro-player.  Blizzard has been under fire since last weekend after the game publisher issued a year-long ban to a Hearthstone player who expressed support for the Hong Kong protestors during a competition live stream. The incident occurred on Sunday when Ng “Blitzchung” Wai Chung voiced support for the protesters in Hong Kong in a post-game interview. Blitzchung said, “Liberate Hong Kong. Revolution of our age!” The ban is effective from October 5th and forbids Blitzchung from participating in any tournaments for an entire year. Blizzard is also withholding any prize money he would have earned from competing in the tournament. Blizzard has also terminated its contract with the two casters who were interviewing the competitor. Explaining the reason behind this ban Blizzard issued a statement, “Per the competition rule, players aren’t allowed to do anything that brings [them] into public disrepute, offends a portion or group of the public, or otherwise damages [Blizzard’s] image. While we stand by one’s right to express individual thoughts and opinions, players and other participants that elect to participate in our esports competitions must abide by the official competition rules.” Game Players, US politicians, and Blizzard employees are outraged After the ban of Hearthstone pro,  Blizzard was at the end of major backlash from video game players, US politicians, and Blizzard employees. On Tuesday, a small group of Blizzard employees walked out of work to protest the company’s actions. The demonstration featured about 12-30 employees from multiple departments, who gathered around the Orc warrior statue in the center of the company’s main campus in Irvine, California. The Daily Beast spoke with a few employees. “The action Blizzard took against the player was pretty appalling but not surprising,” said a longtime Blizzard employee. “Blizzard makes a lot of money in China, but now the company is in this awkward position where we can’t abide by our values.” “I’m disappointed,” another current Blizzard employee said. “We want people all over the world to play our games, but no action like this can be made with political neutrality.” US Senators Marco Rubio and Ron Wyden also chastised the actions of Blizzard on Twitter. “Blizzard shows it is willing to humiliate itself to please the Chinese Communist Party,” Senator Wyden tweeted. “No American company should censor calls for freedom to make a quick buck.” “Recognize what’s happening here,” Senator Rubio said on Twitter. “People who don’t live in #China must either self-censor or face dismissal & suspensions. China using access to the market as leverage to crush free speech globally. Implications of this will be felt long after everyone in U.S. politics today is gone.” https://twitter.com/marcorubio/status/1181556058659135488 Blizzard’s own forums and subreddits were also bombarded with angry messages denouncing the ban. The r/Blizzard subreddit went down for a few hours on Tuesday after the board was drowned with posts calling for players to boycott Blizzard and its games like World of Warcraft, Overwatch, and Hearthstone. On its Hearthstone board, a redditor Hinz97 said in a post,“ I play [Hearthstone] everyday, I climbed to Legend several times. I spent more than $10k. As a [Hong Konger], I quit [ Hearthstone] without consideration.” “I’ve been playing since beta. Good riddance,” Redditor UltimaterializerX said. 
“Blizzard CLEARLY only cares about the Chinese market. The censorship of art was bad enough. The censorship of human life is indefensible. Finding videos of what’s going on in Hong Kong is easy and I suggest everyone do so. It’s Tiananmen Square all over again.” https://twitter.com/Espsilverfire2/status/1182001007976423424 Mark Kern, Team Lead for Vanilla World of Warcraft tweeted, “This hurts. But until Blizzard reverses their decision on @blitzchungHS.  I am giving up playing Classic WoW, which I helped make and helped convince Blizzard to relaunch. There will be no Mark of Kern guild after all.” Fortnite creator Epic Games released a statement stating that it will not ban players or content creators for political speech. “Epic supports everyone’s right to express their views on politics and human rights. We wouldn’t ban or punish a Fortnite player or content creator for speaking on these topics.” https://twitter.com/TimSweeneyEpic/status/1181933071760789504 Blizzard has not yet responded to this development or lifted the ban. Hong Kong protests began in June and now the tech industry has been caught in between the China HK political tussle. In August, Chinese state-run media agencies were caught buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong protestors and their pro-democracy demonstrations as violent. Post this revelation, Twitter banned 936 accounts managed by the Chinese state; Facebook removed seven Pages, three Groups and five Facebook accounts involved in coordinated inauthentic behavior; Google shutdown 210 YouTube channels. Most recently Apple, after pressure from the Chinese govt, banned a protest safety app that helps people track locations of the Hong Kong police which made people very angry. Amid the protests a day later, Apple again brought it back to the iOS Store. Yesterday, according to Quartz investigations editor John Keefe, Apple has reportedly removed the Quartz application from the App Store at the request of the Chinese government. Quartz has been covering the Hong Kong protests in detail and has been blocked across all of mainland China. Update as on Oct 11: After four days of mounting public pressure, Blizzard Entertainment published a press release partially relaxing the ban on the professional player who expressed support for the Hong Kong protestors during a competition live stream. The one year ban on Ng "blitzchung" has since been changed to a six-month suspension. Additionally, the two Chinese broadcasters who had been fired are now put on a six-month suspension from their jobs. Blizzard President J. Allen Brack wrote also clarified that they were not under the influence of China. "The specific views expressed by blitzchung were NOT a factor in the decision we made," Brack wrote. "I want to be clear: our relationships in China had no influence on our decision." Apple bans HKmap.live, a Hong Kong protest safety app from the iOS Store as it makes people ‘evade law enforcement’. Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests. Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests
Get Ready for Open Data Science Conference 2019 in Europe and California

Sugandha Lahoti
10 Oct 2019
3 min read
Get ready to learn and experience the very latest in data science and AI with expert-led trainings, workshops, and talks at ​ODSC West 2019 in San Francisco and ODSC Europe 2019 in London. ODSC events are built for the community and feature the most comprehensive breadth and depth of training opportunities available in data science, machine learning, and deep learning. They also provide numerous opportunities to connect, network, and exchange ideas with data science peers and experts from across the country and the world. What to expect at ODSC West 2019 ODSC West 2019 is scheduled to take place in San Francisco, California on Tuesday, Oct 29, 2019, 9:00 AM – Friday, Nov 1, 2019, 6:00 PM PDT. This year, ODSC West will host several networking events, including ODSC Networking Reception, Dinner and Drinks with Data Scientists, Meet the Speakers, Meet the Experts, and Book Signings Hallway Track. Core areas of focus include Open Data Science, Machine Learning & Deep Learning, Research frontiers, Data Science Kick-Start, AI for engineers, Data Visualization, Data Science for Good, and management & DataOps. Here are just a few of the experts who will be presenting at ODSC: Anna Veronika Dorogush, CatBoost Team Lead, Yandex Sarah Aerni, Ph.D., Director of Data Science and Engineering, Salesforce Brianna Schuyler, Ph.D., Data Scientist, Fenix International Katie Bauer, Senior Data Scientist, Reddit, Inc Jennifer Redmon, Chief Data Evangelist, Cisco Systems, Inc Sanjana Ramprasad, Machine Learning Engineer, Mya Systems Cassie Kozyrkov, Ph.D., Chief Decision Scientist, Google Rachel Thomas, Ph.D., Co-Founder, fast.ai Check out the conference’s more industry-leading speakers here. ODSC also conducts the Accelerate AI Business Summit, which brings together leading experts in AI and business to discuss three core topics: AI Innovation, Expertise, and Management. Don’t miss out on the event You can also use code ODSC_PACKT right now to exclusively save 30% before Friday on your ticket to ODSC West 2019. What to expect at ODSC Europe 2019 ODSC Europe 2019 is scheduled to take place in London, the UK on Tuesday, Nov 19, 2019 – Friday, Nov 22, 2019. Europe Talks/Workshops schedule includes Thursday, Nov 21st and Friday, Nov 22nd. It is available to Silver, Gold, Platinum, and Diamond pass holders. Europe Trainings schedule includes Tuesday, November 19th and Wednesday, November 20th. It is available to Training,  Gold ( Wed Nov 20th only), Platinum, and Diamond pass holders. Some talks scheduled to take place include ML for Social Good: Success Stories and Challenges, Machine Learning Interpretability Toolkit, Tools for High-Performance Python, The Soul of a New AI, Machine Learning for Continuous Integration, Practical, Rigorous Explainability in AI, and more. ODSC has released a preliminary schedule with information on attending speakers and their training, workshop, and talk topics. The full schedule is going to be available soon. They’ve also recently added several excellent speakers, including Manuela Veloso, Ph.D. | Head of AI Research, JP Morgan Dr. Wojciech Samek | Head of Machine Learning, Fraunhofer Heinrich Hertz Institute Samik Chandanara | Head of Analytics and Data Science, JP Morgan Tom Cronin | Head of Data Science & Data Engineering, Lloyds Banking Group Gideon Mann, Ph.D. | Head of Data Science, Bloomberg, LP There are more chances to learn, connect, and share ideas at this year’s event than ever before. Don’t miss out. 
Use code ODSC_PACKT right now to save 30% on your ticket to ODSC Europe 2019.
6 reasons why employers should pay for their developers' training and learning resources

Richard Gall
09 Oct 2019
7 min read
Developers today play a critical role in every business - the days of centralized IT being shut away providing clandestine support not only feels outdated, it's offensive too. But it's not enough for employers to talk about how important an ambitious and creative technical workforce is. They need to prove it by actively investing in the resources and training their employees need to stay relevant and up to date. That's not only the right thing to do, it's also a smart business decision. It attracts talent and ensures flexibility and adaptability. Not convinced? Here are 6 reasons why employees should pay for the resources their development and engineering teams need. Employers make money out of their developers' programming expertise Let’s start with the obvious - businesses make money from their developers' surplus labor value. That’s the value of everything developers do - everything that they develop and build - that exceeds their labor-cost (ie. their salaries). While this might be a good argument to join a union, at the very least it highlights that employers should invest in the skills of their workforce. True, we’re all responsible for our own career development, and we should all be curious and ambitious enough to explore new ideas, topics, and technologies, but it’s absurd to think that employers have no part to play in developing the skills of the people on which they depend for revenue and profit. Perhaps bringing up Karl Marx might not be the best way to make the case to your business if you’re looking for some investment in training. But framing this type of investment in terms of your future contribution to the business is a good way to ensure that you get the resources you need and deserve as a software developer. It levels the playing field: everyone should have access to the same resources Conventional wisdom says that it’s not hard to find free resources on technology topics. This might sound right, but it isn’t strictly true - knowing where to look, how to determine the quality and relevance of certain resources is a skill in itself. Indeed, it might be one that not all developers know, especially if they’re only starting out in their careers. Although employees that take personal learning seriously are valuable, employers need to ensure that everyone is on the same page when it comes to learning. Failure to do so can not only knock the confidence of more inexperienced members of the team, it can also entrench hierarchies. This can really damage the culture. It’s vital, as a leader, that you empower people to be the best they can be. This issue is only exacerbated when you bring premium resources into the mix. Some team members might be in a financial situation where they can afford one - maybe more - subscriptions to learning resources, as well as the newest books by experts in their field, while others might be finding it a little more difficult to justify spending money on learning materials. Ultimately, this isn’t really any of your business. If a team has a set of resources that they all depend upon, access becomes a non-issue. While some team members will always be happy to pay for training and resources, providing a base level of support ensures that everyone is on the same page (maybe even literally). Relying on free content is useful for problem solving - but it doesn’t help with long-term learning Whatever the challenges of relying on free resources it would be churlish and wrong-headed to pretend they aren’t a fixture of technology learning patterns. 
Every developer and engineer will grow to use free resources to a greater or lesser extent, and each one will find their favored sites and approaches to finding the information they need. That’s all well and good, but it’s important to recognise that for the most part free resources are well-suited to problem solving and short-term learning. That’s just one part of the learning-cycle. Long-term learning that is geared towards building individual skill sets and exploring new toolchains for different purposes requires more structure and support. Free resources - which may be opinion-based or unreliable - don’t offer the consistency needed for this kind of development. In some scenarios it might be appropriate to use online or in-person training courses - but these can be expensive and can even alienate some software developers. Indeed, sometimes they’re not even necessary - it’s much better to have an accessible platform or set of resources that people can return to and explore over a set period. The bonus aspect of this is that by investing in a single learning platform it becomes much easier for managers and team leads to get transparency on what individuals are learning. That can be useful in a number of different ways, from how much time people are spending learning to what types of things they’re interested in learning. It’s hard to hire new developer talent Hiring talented developers and engineers isn’t easy. But organizations that refuse to invest in their employees skills are going to have to spend more time - and money - trying to attract the skilled developers they need. That makes investing in a quality learning platform or set of resources an obvious choice. It’ll provide the foundations from which you can build a team that’s prepared for the future. But there’s another dimension that’s easy to ignore. When you do need to hire new developer talent, it does nothing for your brand as an employer if you can’t demonstrate that you support learning and personal development. Of course the best candidates will spend time on themselves, but it’s also a warning sign if I, as a potential employee, see a company that refuses to take skill development seriously. It tells me that not only do you not really care about me - it also indicates that you’re not thinking about the future, period. Read next: Why companies that don’t invest in technology training can’t compete Investing in learning makes employees more adaptable and flexible Change is the one constant in business. This is particularly true where technology is concerned. And while it’s easy to make flexibility a prerequisite for job candidates, the truth is that businesses need to take responsibility for the adaptability and flexibility of their employees. If you teach them that change is unimportant, and that learning should be low on their list of priorities, you’re soon going to find that they’re going to become the inflexible and uncurious employees that you wanted to avoid. It’s all well and good depending on them to inspire change. But by paying for their learning resources, employers are taking a decisive step. It’s almost as if you’re saying go ahead, explore, learn, improve. This business depends on it, so we’re going to give you exactly what you need. It’s cost effective Okay, this might not be immediately obvious - but if you’re a company that does provide an allowance to individual team members, things can quickly get costly without you realising. 
Say you have 4 or 5 developers that each decide how to spend a learning allowance. Yes, that gives them a certain degree of independence, but with the right learning platform that caters to multiple needs and preferences you can save a significant amount of money. Read next: 5 barriers to learning and technology training for small software development teams Conclusion: It's the right thing to do and it makes business sense There are a range of reasons why organizations need to invest in employee learning. But it boils down to two things: it's the right thing to do, and it makes business sense. It might be tempting to think that you can't afford to purchase training materials for your team. But the real question you should ask is can we afford not to? Learn more about Packt for Teams here.
What to expect at Cloud Data Summit 2019 - a summit hosted in the cloud

Sugandha Lahoti
09 Oct 2019
2 min read
2019's Cloud Data Summit is quickly approaching (scheduled to take place on October 16th-17th). This year it will be different: it will be hosted online as a 100% virtual summit. The event will feature industry-leading speakers and thought leaders talking about the hype of AI, big data, machine learning, PaaS, and IaaS technologies.

Although it will be a 100% virtual summit, the conference will have all the features of a standard conference - main stage discussions, speaker panels, peer networking sessions, roundtables, breakout sessions, and group lunches. All topics will be presented in a way that is comfortable for both technical and non-technically inclined attendees, and will cover real-world implementations, how-to's, best practices, potential pitfalls, and how to leverage the full potential of the cloud's data and processing power. Attendees of the Cloud Data Summit include Google, Spotify, IBM, SAP, Microsoft, Apple, and more.

Here's a list of featured speakers:

- Jay Natarajan, US AI Lead and Lead Architect, Microsoft
- Dan Linstedt, Inventor of the Data Vault Methodology, CEO of LearnDataVault.com, CTO and Co-Founder of Scalefree
- T. Scott Clendaniel, Co-founder & Consultant, Cottrell Consulting
- Barr Moses, Co-founder & CEO of Monte Carlo Data
- Dr. Joe Perez, Sr. Systems Analyst, Team Lead, NC Department of Health and Human Services
- Kurt Cagle, CEO of Semantical LLC, Contributor to Forbes and Managing Editor of Cognitive World
- Joshua Cottrell, Co-founder & Consultant, Cottrell Consulting
- Jawad Sartaj, Chief Analytics Officer, Somos Community Care
- Daniel O'Connor, Head of Product Data Practice, Aware Web Solutions Inc.
- Eric Axelrod, Founder of Cloud Data Summit, President & Chief Architect, DIGR, and Executive Advisor

Individuals or organizations interested in learning more about the Cloud Data Summit, or in registering to attend, can visit its official website. If you fall into any of these categories - Business Executives, Data and IT Executives, Data Managers, Data Scientists, Data Engineers, Data Warehouse Architects (so, anyone who is interested in learning about cloud migration and its consequences) - Cloud Data Summit is not to be missed. For current students and new graduates, tickets are up to 80% off via the special student registration form.
TensorFlow 2.0 released with tighter Keras integration, eager execution enabled by default, and more!

Bhagyashree R
03 Oct 2019
5 min read
After releasing the beta version of TensorFlow 2.0 in June, Google announced its final release on Monday. This release comes with tighter Keras integration, eager execution enabled by default, a promised three times faster training performance, a cleaned-up API, and more.

Key updates in TensorFlow 2.0

Tighter Keras integration for better developer productivity

One of the important updates in TensorFlow 2.0 is its tighter integration with Keras, a popular high-level API used for easy and fast prototyping, building, and training of deep learning models. This enables developers to easily leverage its various model-building APIs, including Sequential, Functional, and Subclassing. Explaining the motivation behind this change, the TensorFlow team wrote, "By establishing Keras as the high-level API for TensorFlow, we are making it easier for developers new to machine learning to get started with TensorFlow. A single high-level API reduces confusion and enables us to focus on providing advanced capabilities for researchers."

Eager execution enabled by default

In TensorFlow 1.x, developers were required to define an abstract data structure called a Graph, and to run that graph they needed an encapsulation called a Session. TensorFlow 2.0 has eager execution enabled by default, so code runs "eagerly", like normal Python code. Eager execution enables fast iteration and intuitive debugging without building a graph, and makes creating and experimenting with models much easier. It can be especially useful when using the tf.keras model subclassing API.

Also read: Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support, is now out.

Distribution Strategy API

The Distribution Strategy API in TensorFlow 2.0 allows machine learning researchers to distribute training across a wide variety of compute configurations, letting them "attain great out-of-the-box performance" with minimal code changes. This release also allows distributed training with Keras' model.fit as well as custom training loops.

Performance improvements on GPUs

TensorFlow 2.0 includes multi-GPU support and experimental support for multi-worker training and Cloud TPUs. The release also brings a number of performance improvements on GPUs: it promises three times faster training when using mixed precision on NVIDIA's Volta and Turing GPUs, and includes tight integration with NVIDIA TensorRT, a platform for high-performance deep learning inference.

The standardized SavedModel file format

The SavedModel API allows you to save a trained ML model in a language-neutral format. With TensorFlow 2.0, all TensorFlow ecosystem projects, including TensorFlow Lite, TensorFlow.js, TensorFlow Serving, and TensorFlow Hub, support SavedModels. Standardizing the SavedModel file format enables developers to run their models on a variety of runtimes, including the cloud, web, browser, Node.js, mobile, and embedded systems. "This allows you to run your models with TensorFlow, deploy them with TensorFlow Serving, use them on mobile and embedded systems with TensorFlow Lite, and train and run in the browser or Node.js with TensorFlow.js," the team writes.

API simplification

TensorFlow 2.0 includes a number of API updates. Many API symbols are removed or renamed for better consistency and clarity. Also, the tf.app, tf.flags, and tf.logging APIs are removed in favor of abseil-py.

Because of the huge number of API changes, developers in a discussion on Hacker News said that transitioning from TensorFlow 1.x to TensorFlow 2.0 is quite complicated, and some mentioned switching to PyTorch instead. One user commented, "As someone who uses TensorFlow a lot, I predict an enormous clusterfuck of a transition. Tensorflow has turned into a multiheaded monster, supporting many things and approaches but none of them very well...In my opinion, there are some architectural problems with TF, which have not been addressed in this update...If you need to transition from TF1 to TF2, consider doing the TF1 to PyTorch transition instead."

Others, though, were happy with the recommended Keras API and eager execution. "I don't know if I'm the only one, but I actually love the changes they've made since v1. Eager execution and tf.function are fantastic, and the built-in Keras is even better than the standalone version. A big improvement compared to TF from last year," a user commented on Reddit. Another user added, "The most important change in terms of usability, IMO, is the use of tf.keras as the recommended interface to TensorFlow. There hasn't been a case yet where I've needed to dip outside of Keras into raw TensorFlow, but the option is there and is easy to do. That said, TF 2.0 changes a lot. Many repos might break, so expect to see lots of tensorflow==1.14 in requirement.txt files from now on."

These were some of the updates in TensorFlow 2.0. Check out the official announcement and release notes to know more in detail.

Transformers 2.0: NLP library with deep interoperability between TensorFlow 2.0 and PyTorch, and 32+ pretrained models in 100+ languages
TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf function and more
Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more
Train a convolutional neural network in Keras and improve it with data augmentation [Tutorial]
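To make the tighter Keras integration, default eager execution, and tf.function points above concrete, here is a short illustrative sketch. The model, toy data, and layer sizes are arbitrary values chosen for this example; the APIs used (tf.keras.Sequential, model.fit, @tf.function) are the publicly documented TensorFlow 2.0 interfaces rather than code taken from Google's announcement.

```python
import numpy as np
import tensorflow as tf

# Eager execution is on by default in TF 2.0: ops run immediately, no Session needed.
x = tf.constant([[1.0, 2.0]])
print(tf.square(x).numpy())  # [[1. 4.]]

# tf.keras is the recommended high-level API; Sequential is one of its model-building styles.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Toy data, just to show the fit flow.
data = np.random.rand(64, 4).astype("float32")
labels = np.random.rand(64, 1).astype("float32")
model.fit(data, labels, epochs=2, verbose=0)

# tf.function traces Python code into a graph for performance, while keeping eager-style syntax.
@tf.function
def predict_sum(batch):
    return tf.reduce_sum(model(batch), axis=-1)

print(predict_sum(data[:2]))
```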
How Chaos Engineering can help predict and prevent cyber-attacks preemptively

Sugandha Lahoti
02 Oct 2019
7 min read
It's no surprise that cybersecurity has become a major priority for global businesses of all sizes, often employing a dedicated IT team to focus on thwarting attacks. Huge budgets are spent on acquiring and integrating security solutions, but one of the most effective tools might be hiding in plain sight. Encouraging your own engineers and developers to purposely break systems through a process known as chaos engineering can drive a huge return on investment by identifying unknown weaknesses in your digital architecture. As modern networks become more complex with a vastly larger threat surface, stopping break-ins before they happen takes on even greater importance. Evolution of Cyberattacks Hackers typically have a background in software engineering and actually run their criminal enterprises in cycles that are similar to what happens in the development world. That means these criminals focus on iterations and agile changes to keep themselves one step ahead of security tools. Most hackers are focused on financial gains and look to steal data from enterprise or government systems for the purpose of selling it on the Dark Web. However, there is a demographic of cybercriminals focused on simply causing the most damage to certain organizations that they have targeted. In both scenarios, the hacker will first need to find a way to enter the perimeter of the network, either physically or digitally. Many cybercrimes now start with what's known as social engineering, where a hacker convinces an internal employee to divulge information that allows unauthorized access to take place. Principles of Chaos Engineering So how can you predict cyberattacks and stop hackers before they infiltrate your systems? That's where chaos engineering can help. The term first gained popularity a decade ago when Netflix created a tool called Chaos Monkey that would randomly take a node of their network offline to force teams to react accordingly. This proved effective because Netflix learned how to better keep their streaming service online and reduce dependencies between cloud servers. The Chaos Monkey tool can be seen as an example of a much broader practice: Chaos Engineering. The central insight of this form of testing is that, regardless of how all-encompassing your test suite is, as soon as your code is running on enough machines, errors are going to occur. In large, complex systems, it is essentially impossible to predict where these points of failure are going to occur. Rather than viewing this as a problem, chaos engineering sees it as an opportunity. Since failure is unavoidable, why not deliberately introduce it, and then attempt to solve randomly generated errors, in order to ensure your systems and processes can deal with the failure? For example, Netflix runs on AWS, but in response to frequent regional failures have changed their systems to become region agnostic. They have tested that this works using chaos engineering: they regularly take down important services in separate regions via a “chaos monkey” which randomly selects servers to take down, and challenges engineers to work around these failures. Though Netflix popularized the concept, chaos engineering, and testing is now used by many other companies, including Microsoft. The rise of cloud computing has meant that a high proportion of the systems running today have a similar level of complexity to Netflix’s server architecture. 
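To illustrate the "chaos monkey" idea described above in concrete terms, here is a minimal, hypothetical sketch of a failure-injection script. The host list, the terminate_instance helper, and the health endpoint are all placeholders invented for this example - this is not Netflix's Chaos Monkey or any vendor's API - but the structure (pick a random target, kill it, verify the system still meets its service-level hypothesis) captures the essence of the practice.

```python
import random
import subprocess
import urllib.request

# Hypothetical inventory of hosts in a staging environment.
STAGING_HOSTS = ["app-01.staging.internal", "app-02.staging.internal", "app-03.staging.internal"]
HEALTH_ENDPOINT = "http://staging-gateway.internal/healthz"  # placeholder URL


def terminate_instance(host: str) -> None:
    """Placeholder failure injection: stop the app service on one host via SSH."""
    subprocess.run(["ssh", host, "sudo", "systemctl", "stop", "app.service"], check=True)


def steady_state_ok() -> bool:
    """Hypothesis check: the gateway still answers while one node is down."""
    try:
        with urllib.request.urlopen(HEALTH_ENDPOINT, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


if __name__ == "__main__":
    victim = random.choice(STAGING_HOSTS)  # the "chaos monkey" picks a random node
    print(f"Injecting failure on {victim}")
    terminate_instance(victim)

    if steady_state_ok():
        print("Hypothesis held: traffic was re-routed around the failed node.")
    else:
        print("Hypothesis falsified: investigate and fix before widening the blast radius.")
```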
The cloud computing model has added a high level of complexity to online systems, and the practice of chaos engineering will help to manage that over time, especially when it comes to cybersecurity. It's impossible to predict how and when a hacker will execute an attack, but chaos tests can help you be more proactive in patching your internal vulnerabilities.

At first, your developers may be reluctant to jump into chaos activities since they will think their normal process works. However, there's a good chance that, over time, they will grow to love chaos testing. You might have to do some convincing in the beginning, though, because this new approach likely goes against everything they've been taught about securing a network.

Designing a Chaos Test

Chaos tests need to be run in a controlled manner in order to be effective, and in many ways the process lines up with the scientific method: to run a successful chaos test, you first need to identify a well-formed and refutable hypothesis. A test can then be designed that will prove (or, more likely, falsify) your hypothesis.

For example, you might want to know how your network will react if one DNS server drops offline during a distributed denial of service (DDoS) attack by a hacker. This tactic of masking one's IP address and overloading a website's server can be executed through a VPN. Your hypothesis, in this case, is that you will be able to re-route traffic around the affected server.

From there, you can begin designing and scheduling the experiment. If you are new to chaos engineering, you will want to restrict the test to a testing or staging environment to minimize the impact on live systems. Then make sure you have one person responsible for documenting the outcomes as the experiment begins. If you identify an issue during the first run of the chaos test, you can pause that effort and focus on coming up with a solution plan. If not, expand the radius of the experiment until it produces worthwhile results. Running through this type of fire drill can be positive for various groups within a software organization, as it will provide practice for real incidents.

(Image source: Medium)

How Chaos Engineering Fits Into Quality Assurance

A lot of companies hear about chaos engineering and jump into it full speed, but then over time they lose their enthusiasm for the practice or get distracted by other projects. So how should chaos tests be run, and how do you ensure that they remain consistent and valuable?

First, define the role of chaos engineering within your larger quality assurance efforts. In fact, your QA team may want to lead all the chaos tests that are run. One important clarification is to distinguish between penetration tests and chaos tests: both share the goal of proactively finding system issues, but a penetration test is a specific event with a finite focus, while chaos testing must be more open-ended.

When teaching your teams about chaos engineering, it's vital to frame it as a practice and not a one-time activity. Return on investment will be hard to find if you do not systematically follow up after chaos tests are run. The aim should be continuous improvement, to ensure your systems are prepared for any type of cyberattack.

Final Thoughts

These days, security vendors offer a wide range of tools designed to help companies protect their people, data, and infrastructure. Solutions like firewalls and virus scanners are certainly deployed to great success as part of most cybersecurity strategies, but they should never be treated as foolproof. Hackers are notorious for finding ways to get past these types of tools and exploit companies that are not prepared at a deeper level. The most mature organizations go one step further and find ways to proactively locate their own weaknesses before an outsider can expose them. Chaos engineering is a great way to accomplish this, as it encourages developers to look for gaps and bugs that they might not stumble upon normally. No matter how much planning goes into a system's architecture, unforeseen issues still come up, and hackers are a very dangerous variable in that equation.

Author Bio

Sam Bocetta is a freelance journalist specializing in U.S. diplomacy and national security, with emphasis on technology trends in cyberwarfare, cyberdefense, and cryptography.