
How-To Tutorials

3 programming languages some people think are dead but definitely aren’t

Richard Gall
24 Oct 2019
11 min read
Recently I looked closely at what it really means when a certain programming language, tool, or trend is declared to be 'dead'. It seems, I argued, that talking about death in respect of different aspects of the tech industry is as much a signal about one's identity and values as a developer as it is an accurate description of a particular 'thing's' reality.

To focus on how these debates and conversations play out in practice, I decided to take a look at 3 programming languages, each of which has been described as dead or dying at some point. What I found might not surprise you, but it nevertheless highlights that the different opinions a certain person or community has about a language reflect their needs and challenges as software engineers.

Is Java dead?

One of the biggest areas of debate in terms of living, thriving or dying is Java. There are a number of reasons for this. The biggest is the simple fact that it's so widely used. With so many developers using the language for a huge range of reasons, it's not surprising to find such a diversity of opinion across its developer community.

Another reason is that Java is so well-established as a programming language. Although it's a matter of debate whether it's declining or dying, it certainly can't be said to be emerging or growing at any significant pace. Java is part of the industry mainstream now. You'd think that might mean it's holding up. But when you consider that this is an industry that doesn't just embrace change and innovation, but one that depends on it for its value, you can begin to see that Java has occupied a slightly odd space for some time.

Why do people think Java is dead?

Java has been on the decline for a number of years. If you look at the TIOBE index from the mid to late part of this decade, it has been losing percentage points. From May 2016 to May 2017, for example, the language declined 6% - an indication that it's losing mindshare to other languages.

A further reason for its decline is the rise of Kotlin. Although Java has long been the defining language of Android development, in recent years its reputation has taken a hit as Kotlin has become more widely adopted. As this Medium article from 2018 argues, it's not necessarily a great idea to start a new Android project with Java. And the threat to Java isn't only coming from Kotlin - it's coming from Scala too. Scala is another language based on the JVM (Java Virtual Machine). It supports both object-oriented and functional programming, offers many performance advantages over Java, and is being used for a wide range of use cases - from machine learning to application development.

Reasons why Java isn't dead

Although the TIOBE index has shown Java to be a language in decline, it nevertheless remains comfortably at the top of the table. It might have dropped significantly between 2016 and 2017, but more recently its decline has slowed: it dropped only 0.92% between October 2018 and October 2019. From this perspective, it's simply bizarre to suggest that Java is 'dead' or 'dying': it is de facto the most widely used programming language on the planet. Factor in everything else that entails - a massive community that means more support, and an extensive ecosystem of frameworks, libraries and other tools (note Spring Boot's growth as a response to the microservice revolution) - and the case for its death looks even weaker. So, while Java's age might seem like a mark against it, it's also a reason why there's still a lot of life in it.

At a more basic level, Java is ubiquitous; it's used inside a massive range of applications. Insofar as it's inside live apps, it's alive. That means Java developers will be in demand for a long time yet.

The verdict: is Java dead or alive?

Java is very much alive and well. But there are caveats: ultimately, it's not a language that's going to help you solve problems in creative or innovative ways. It will allow you to build things and get projects off the ground, but it's arguably a solid foundation on which you will need to build more niche expertise and specialisation to be a really successful engineer.

Is JavaScript dead?

Although Java might be the most widely used programming language in the world, JavaScript is another ubiquitous language that incites a diverse range of opinions and debate. One of the reasons for this is that some people seriously hate JavaScript. The consensus on Java is a low-level murmur of 'it's fine', but with JavaScript things are far more erratic.

This is largely because of JavaScript's evolution. For a long time it was playing second fiddle to PHP in the web development arena because it was so unstable - it was treated with a kind of stigma, as if it weren't a 'real language.' Over time that changed, thanks largely to HTML5 and improved ES6 standards, but there are still many quirks that developers don't like. In particular, JavaScript isn't a nice thing to grapple with if you're used to, say, Java or C. Unlike those languages, it's an interpreted rather than a compiled programming language. So, why do people think it's dead?

Why do people think JavaScript is dead?

There are a number of very different reasons why people argue that JavaScript is dead. On the one hand, the rise of templates and out-of-the-box CMS and eCommerce solutions means the use of JavaScript for 'traditional' web development will become less important. Essentially, the thinking goes, the barrier to entry is lower, which means there will be fewer people using JavaScript for web development.

On the other hand, people look at the emergence of WebAssembly as the death knell for JavaScript. WebAssembly (or Wasm) is "a binary instruction format for a stack-based virtual machine" (that's from the project's website), which means that code can be compiled into a binary format that can be read by a browser. This means you can bring high-level languages such as Rust to the browser. To a certain extent, then, you'd think that WebAssembly would lead to the growth of languages that at the moment feel quite niche.

Read next: Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust

Reasons why JavaScript isn't dead

First, let's counter the arguments above. In the first instance, out-of-the-box solutions are never going to replace web developers. Someone needs to build those products, and even if organizations choose to use them, JavaScript is still a valuable language for customizing and reshaping purpose-built solutions. While the barrier to entry for getting a web project up and running might be getting lower, that's certainly not going to kill JavaScript. Indeed, you could even argue that the pool is growing, as more people start to pick up some of the basic elements of the web.

On the WebAssembly issue: this is a slightly more serious threat to JavaScript, but it's important to remember that WebAssembly was never designed to simply ape the existing JavaScript use case.

As this useful article explains: "...They solve two different issues: JavaScript adds basic interactivity to the web and DOM while WebAssembly adds the ability to have a robust graphical engine on the web. WebAssembly doesn't solve the same issues that JavaScript does because it has no knowledge of the DOM. Until it does, there's no way it could replace JavaScript."

WebAssembly might even renew faith in JavaScript. By tackling some of the problems that many developers complain about, it means the language can be used for the problems it is better suited to solve. But aside from all that, there is a wealth of other reasons that JavaScript is far from dead. React continues to grow in popularity, as does Node.js - the latter in particular is influential in how it has expanded what's possible with the language, moving it from the browser to the server.

The verdict: is JavaScript dead or alive?

JavaScript is very much alive and well, however much people hate it. With such a wide ecosystem of tools surrounding it, the way that it's used might change, but the language is here to stay and has a bright future.

Is C dead?

C is one of the oldest programming languages around (it's approaching its 50th birthday). It's a language that has helped build the foundations of the software world as we know it today, including just about every operating system. But although it's a fundamental part of the technology landscape, there are murmurs that it's just not up to the job any more...

Why do people think that C is dead?

If you want to get a sense of the division of opinion around C, you could do a lot worse than this article on TechCrunch. "C is no longer suitable for this world which C has built," explains engineer Jon Evans. "C has become a monster. It gives its users far too much artillery with which to shoot their feet off. Copious experience has taught us all, the hard way, that it is very difficult, verging on 'basically impossible,' to write extensive amounts of C code that is not riddled with security holes."

The security concerns are reflected elsewhere, with one writer arguing that "no one is creating new unsafe languages. It's not plausible to say that this is because C and C++ are perfect; even the staunchest proponent knows that they have many flaws. The reason that people are not creating new unsafe languages is that there is no demand. The future is safe languages." Added to these concerns is the rise of Rust - it could, some argue, be an alternative to C (and C++) for lower-level systems programming that is more modern, safer and easier to use.

Reasons why C isn't dead

Perhaps the most obvious reason why C isn't dead is the fact that it's so integral to so much of the software we use today. We're not just talking about your standard legacy systems; C is inside the operating systems that allow us to interface with software and machines. One of the arguments often made against C is that 'the web is taking over', as if software in general were moving up levels of abstraction that make languages at the machine level all but redundant. Aside from that argument being plain stupid (i.e. what's the web built on?), with IoT and embedded computing growing at a rapid rate, these trends are only going to make C more important.

To return to our good friend the TIOBE index: C is in second place, the same position it held in October 2018. Like Java, then, it's holding its own in spite of the rumors. Unlike Java, moreover, C's rating has actually increased over the course of the year.
Not a massive amount, admittedly - 0.82% - but a solid performance that suggests it's a long way from dead.

Read next: Why does the C programming language refuse to die?

The verdict: is C dead or alive?

C is very much alive and well. It's old, sure, but it's buried inside too much of our existing software infrastructure for it to simply be cast aside. This isn't to say it's without flaws. From a security and accessibility perspective, we're likely to see languages like Rust gradually grow in popularity to tackle some of the challenges that C poses. But an equally important point to consider is just how fundamental C is for people who want to really understand programming in depth. Even if it doesn't necessarily have a wide range of use cases, the fact that it can give developers and engineers an insight into how code works at various levels of the software stack means it will always remain a language that demands attention.

Conclusion: listen to multiple perspectives on programming languages before making a judgement

The obvious conclusion to draw from all this is that people should just stop being so damn opinionated. But I don't actually think that's correct: people should keep being opinionated and argumentative. There's no place for snobbery or exclusion, but anyone who has a view on something's value should certainly express it. It helps other people understand the language in a way that's not possible through documentation or more typical learning content. What's important is that we read opinions with a critical eye: what's this person's agenda? What's their background? What are they trying to do? After all, there are things far more important than whether something is dead or alive - building great software we can be proud of being one of them.

Firefox 70 released with better security, CSS, and JavaScript improvements

Savia Lobo
23 Oct 2019
6 min read
The Mozilla team announced the much-awaited release of Firefox 70 yesterday, with new features like secure password generation with Lockwise and the new Firefox Privacy Protection Report. Firefox 70 also includes a plethora of additions for developers, such as DOM mutation breakpoints and inactive CSS rule indicators in the DevTools, several new CSS text properties, two-value display syntax, JS numeric separators, and much more.

Firefox 70 centers around enhanced privacy and security

The new Firefox 70 includes Enhanced Tracking Protection (ETP) and a Firefox Privacy Protection Report that gives additional details and more visibility into how you're being tracked online, so you can better combat it. Enhanced Tracking Protection was turned on by default in the browser in September this year. The report highlights how ETP prevents third-party trackers from building a user's profile based on their online activity, and it includes the number of cross-site and social media trackers, fingerprinters, and cryptominers Mozilla blocked.

The report also helps users keep themselves updated with Firefox Monitor and Firefox Lockwise. Firefox Monitor gives users a summary of the number of unsafe passwords that may have been used in a breach, so that they can take action to update and change those passwords. Firefox Lockwise helps users manage passwords across different synced devices. It includes a button where users can click to view their logins and updates, and they can also quickly view and manage how many devices they are syncing and sharing passwords with. To know more about security in Firefox 70, read Mozilla's blog.

What's new in Firefox 70

Updated HTML forms and secure passwords

To generate secure passwords, the team has updated HTML input elements: any input element of type password will have an option to generate a secure password available in the context menu, which can then be stored in Lockwise. In addition, any type="password" field with autocomplete="new-password" set on it will have an autocomplete UI to generate a new password in-context.

New CSS improvements

Firefox 70 includes some CSS improvements, like new options for styling underlines and a new set of two-keyword display values.

The options for styling underlines include three new properties for text-decoration (underline):

text-decoration-thickness: sets the thickness of lines added via text-decoration.
text-underline-offset: sets the distance between a text-decoration and the text it is set on. Bear in mind that this only works on underlines.
text-decoration-skip-ink: sets whether underlines and overlines are drawn if they cross descenders and ascenders. The default value, auto, causes them to only be drawn where they do not cross over a glyph. To allow underlines to cross glyphs, set the value to none.
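
To make the underline properties concrete, here is a short sketch of how they combine; the selector and values are illustrative, not from Mozilla's post:

    /* A thicker, offset underline that is allowed to cross descenders */
    .fancy-link {
      text-decoration: underline;
      text-decoration-thickness: 2px;  /* thickness of the underline */
      text-underline-offset: 4px;      /* gap between the text and the line */
      text-decoration-skip-ink: none;  /* draw the line even across glyphs */
    }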

Two-keyword display values

Until now, the display property has taken a single value. However, as the team explains, "the boxes on a page have an outer display type, which determines how the box is laid out in relation to other boxes on the page, and an inner display type, which determines how the box's children will behave." The two-keyword values allow you to explicitly specify the outer and inner display values. In supporting browsers (which currently includes only Firefox), the single keyword values map to new two-keyword values, for example:

display: flex; is equivalent to display: block flex;
display: inline-flex; is equivalent to display: inline flex;

JavaScript improvements

Firefox 70 now supports numeric separators for JavaScript: underscores can now be used as separators in large numbers to make them more readable. Other improvements in JavaScript include:

Intl improvements

Firefox 70 includes improved JavaScript i18n (internationalization), starting with the implementation of the Intl.RelativeTimeFormat.formatToParts() method. This is a special version of Intl.RelativeTimeFormat.format() that returns an array of objects, each one representing a part of the value, rather than returning a string of the localized time value. Also, Intl.NumberFormat.format() and Intl.NumberFormat.formatToParts() now accept BigInt values.

Performance improvements

The inclusion of the new baseline interpreter has sped up JavaScript. The code for the new interpreter includes shared code from the existing Baseline JIT. You can read more about the interpreter in The Baseline Interpreter: a faster JS interpreter in Firefox 70.
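
Here is a quick sketch of the numeric separators and the Intl additions described above; the values and locales are illustrative:

    // Numeric separators: underscores make large literals readable.
    const budget = 1_000_000_000;
    console.log(budget === 1000000000);  // true

    // formatToParts() returns the localized value as an array of parts.
    const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
    console.log(rtf.formatToParts(-1, 'day'));
    // [{ type: 'literal', value: 'yesterday' }]

    // Intl.NumberFormat.format() now accepts BigInt values.
    const nf = new Intl.NumberFormat('en-US');
    console.log(nf.format(123_456_789_123_456_789n));  // "123,456,789,123,456,789"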

New developer tools

The Developer Tools Accessibility panel now includes an audit for keyboard accessibility and a color deficiency simulator for systems with WebRender enabled.

Pause option for DOM mutation in the Debugger: DOM Mutation Breakpoints (aka DOM Change Breakpoints) let you pause scripts that add, remove, or change specific elements. Once a DOM mutation breakpoint is set, you'll see it listed under "DOM Mutation Breakpoints" in the right-hand pane of the Debugger; this is also where you'll see breaks reported. (Source: Mozilla Hacks)

Color contrast information in the color picker: in the CSS Rules view, you can click foreground colors with the color picker to determine if their contrast with the background color meets accessibility guidelines.

Accessibility inspector keyboard checks: the Accessibility inspector's "Check for issues" dropdown now includes keyboard accessibility checks. Selecting this option causes Firefox to go through each node in the accessibility tree and highlight all that have a keyboard accessibility issue. Hovering over or clicking each one will reveal information about what the issue is, along with a "Learn more" link for more details on how to fix it.

WebSocket inspector: in Firefox DevEdition, the Network monitor now has a new "Messages" panel, which appears when you are monitoring a WebSocket connection (i.e. a 101 response). It can be used to inspect WebSocket frames sent and received through the connection. This functionality was originally supposed to be in the Firefox 70 general release, but the team had a few more bugs to resolve, so expect it in Firefox 71! For now, users can explore it in the DevEdition.

Fixed issues in Firefox 70

Built-in Firefox pages now follow the system dark mode preference.
Aliased theme properties have been removed, which may affect some themes.
Passwords can now be imported from Chrome on macOS, in addition to existing support for Windows.
Readability is now greatly improved for underlined or overlined text, including links: the lines are now interrupted instead of crossing over a glyph.

Improved privacy and security indicators

A new crossed-out lock icon indicates sites delivered via insecure HTTP.
The formerly green lock icon is now grey.
The Extended Validation (EV) indicator has been moved to the identity popup that appears when clicking the lock icon.

To know more about the other improvements and bug fixes in Firefox 70 in detail, read Mozilla's official blog.

Read next:
Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Mozilla Thunderbird 78 will include OpenPGP support, expected to be released by Summer 2020

Node.js 13 releases with an upgraded V8, full ICU support, stable Worker Threads API and more

Fatema Patrawala
23 Oct 2019
4 min read
Yesterday was a super exciting day for Node.js developers, as the Node.js foundation announced the transition of Node.js 12 to Long Term Support (LTS) alongside the release of Node.js 13. As per the team, Node.js 12 becomes the newest LTS release, along with versions 10 and 8. This release marks the transition of Node.js 12.x into LTS with the codename 'Erbium'. The 12.x release line now moves into "Active LTS" and will remain so until October 2020; it will then move into "Maintenance" until its end of life in April 2022.

The new Node.js 13 release will deliver faster startup and better default heap limits. It includes updates to V8, TLS, and llhttp, and new features like a diagnostic report, bundled heap dump capability, and updates to Worker Threads, N-API, and more.

Key features in Node.js 13

V8 gets an upgrade to V8 7.8

This release is compatible with the new version V8 7.8. This new version of the V8 JavaScript engine brings performance tweaks and improvements to keep Node.js up with the ongoing improvements in the language and runtime.

Full ICU enabled by default in Node.js 13

As of Node.js 13, full-icu is now available as default, which means hundreds of other local languages are now supported out of the box. This will simplify the development and deployment of applications for non-English deployments.

Stable Worker Threads API

The Worker Threads API is now a stable feature in both Node.js 12 and Node.js 13. While Node.js already performs well with its single-threaded event loop, there are some use cases where additional threads can be leveraged for better results.

New compiler and platform support

Node.js and V8 continue to embrace newer C++ features and take advantage of newer compiler optimizations and security enhancements. With the release of Node.js 13, the codebase now requires a minimum of version 10 of the OS X development tools and version 7.2 of the AIX operating system. In addition, there has been progress on supporting Python 3 for building Node.js applications: systems that have Python 2 and Python 3 installed will still be able to use Python 2, while systems with only Python 3 should now be able to build using Python 3.

Developers discuss pain points in Node.js 13

On Hacker News, users discussed various pain points in Node.js 13 and some of the functionality missing from this release. One of the users commented, "To save you the clicks: Node.js 13 doesn't support top-level await. Node includes V8 7.8, released Sep 27. Top-level await merged into V8 on Sep 24, but didn't make it in time for the 7.8 release." A response came in from the V8 team: "TLA is only in modules. Once node supports modules, it will also have TLA. We're also pushing out a version with 7.9 fairly soonish."

Other users discussed how Node.js performs with TypeScript: "I've been using node with typescript and it's amazing. VERY productive. The key thing is you can do a large refactoring without breaking anything. The biggest challenge I have right now is actually the tooling. Intellij tends to break sometimes. I'm using lerna for a monorepo with sub-modules and it's buggy with regular npm. For example 'npm audit' doesn't work. I might have to migrate to yarn…"

If you are interested to know more about this release, check out the official Node.js blog post as well as the GitHub page for release notes.
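
Since the Worker Threads API is now stable, here is a minimal sketch of the pattern it enables; the file name and the work being done are illustrative:

    // worker-demo.js
    const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

    if (isMainThread) {
      // Main thread: spawn a worker running this same file and hand it data.
      const worker = new Worker(__filename, { workerData: { n: 21 } });
      worker.on('message', (result) => console.log(`worker replied: ${result}`));
      worker.on('error', (err) => console.error(err));
    } else {
      // Worker thread: do CPU-bound work off the main event loop.
      parentPort.postMessage(workerData.n * 2);
    }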

Read next:
The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
Google is planning to bring Node.js support to Fuchsia

What do we really mean when we say that software is ‘dead’ or ‘dying’?

Richard Gall
22 Oct 2019
12 min read
“The report of my death was an exaggeration,” Mark Twain once wrote in a letter to the journalist Frank Marshall White. Twain's quip is a fitting refrain for much of the software industry. Year after year there is a new wave of opinion from experts declaring this or that software or trend to be dead or, if it's lucky, merely dying.

I was inclined to think this was a relatively new phenomenon, but the topic comes up on Jeff Atwood's blog Coding Horror as far back as 2009. Atwood quotes from an article by influential software engineer Tom DeMarco in which DeMarco writes that "software engineering is an idea whose time has come and gone." (The hyperlink that points to DeMarco's piece is, ironically, now dead.) So, it's clearly not something new to the software industry.

In fact, this rhetorical trope can tell us a lot about identity, change, and power in tech. Declaring something to be dead is an expression of insecurity, curiosity and sometimes plain obnoxiousness. Consider these questions from Quora - all of them appear to reflect an odd combination of status anxiety and technical interest:

Is DevOps dead?
Is React.js dead?
Is Linux dead? Why? (yes, really)
Is web development a dying career? (It's also something I wrote about last year.)

These questions don't come out of a vacuum. They're responses to existing opinions and discussions that are ongoing in different areas. To a certain extent they're valuable: asking questions like those above and using these sorts of metaphors are ways of assessing the value and relevance of different technologies. That being said, they should probably be taken with a pinch of salt. Although they can be an indicator of community feeling towards a particular approach or tool (the plural of anecdote isn't exactly data, but it's not as far off as many people pretend it is), they often tell you as much about the person doing the talking as the thing they're talking about.

To be explicit: saying something is dead or dying is very often a mark of perspective or even identity. This might be frustrating, but it nevertheless provides an insight into how different technologies are valued or being used at any given time. What's important, then, is to be mindful about why someone would describe this or that technology as dead. What might they really be trying to say?

"X software is dead because companies just aren't hiring for it"

One of the reasons you might hear people proclaim a certain technology to be dead is that companies are, apparently, no longer hiring for those skills. It stops appearing on job boards; it's no longer 'in demand'. While this undoubtedly makes sense, and it's true that what's in demand will shift and change over time, all too often assertions that 'no one is hiring x developers any more' are anecdotal.

For example, although there have been many attempts to start rumors that JavaScript is 'dead' or that its time has come - like this article from a few years back in which the writer claims that JavaScript developers have been "mind-fucked into thinking that JavaScript is a good programming language" - research done earlier this year shows that 70% of companies are searching for JavaScript developers. So, far from dead. In the same survey, Java came out as another programming language that is eagerly sought out by companies: 48% were on the lookout for Java developers.
And while this percentage has almost certainly decreased over the last decade (admittedly, I couldn't find any research to back this up), that's still a significant chunk to debunk the notion that Java is 'dead' or 'dying.'

Software ecosystems don't develop chronologically

This write-up of the research by Free Code Camp argues that it shows that variation can be found "not between tech stacks but within them." This suggests that the tech world isn't quite as Darwinian as it's often made out to be. It's not a case of life and death, but instead of different ecosystems all evolving in different ways and at different times. So, yes, maybe there is some death (after all, there aren't as many people using Ember or Backbone as there were in 2014), but, as with many things, it's actually a little more complicated...

"X software is dead because no one's learning it"

If no one's learning something, it's presumably a good sign that X technology is dead or dying, right? Well, to a certain extent. It might give you an indication of how wider trends are evolving, and even of the types of problems that engineers and their employers are trying to solve, but again, it's important to be cautious. It sounds obvious, but just because no one seems to be learning something, it doesn't mean that people aren't. So yes, in certain developer communities it might seem weird to think of people learning Java. But with such significant employer demand there are undoubtedly thousands of people trying to get to grips with it. Indeed, they might well be starting out in their careers - but you can't overlook the importance of established languages as a stepping stone into more niche and specialised roles and into more 'exclusive' communities.

That said, what people are learning can be instructive in the context of the argument made in the Free Code Camp article mentioned above. If variation inside tech stacks is where change and fluctuation are actually happening, then the fact that people are learning a given library or framework will give a clear indication as to how that specific ecosystem is evolving.

Software only really dies when its use cases do

But even then, it's still a leap to say that something's dead. It's also somewhat misleading and unhelpful. So, although it might be the case that more people are learning React or Kotlin than other related technologies, that doesn't cancel out the fact that those other technologies may still have a part to play in a particular use case.

Another important aspect to consider when thinking about what people are learning is that it's often part of the whole economics of the hype cycle. The more something gets talked about, the more individual developers might be intrigued about how it actually works. This doesn't, of course, mean they would necessarily start using it for professional projects, or that we'd start to see adoption across large enterprises. There is a mesh of different forces at play when thinking about tech life cycles - sometimes our metaphors don't really capture what's going on.

"It's dead because there are better options out there"

There's one thing I haven't really touched on that's important when thinking about death and decline in the context of software: the fact that there are always options. You use one tool, language, framework, library, whatever, because it's the best suited to your needs. If one option comes to supersede another, for whatever reason, it's only natural to regard what came before as obsolete in some way.

You can see this way of thinking on a large scale in the context of infrastructure - from virtual machines, to containers, to serverless, the technologies that enable each of those various phases might be considered 'dead' as we move from one to the other. Except that just isn't the case. While containerized solutions might be more popular than virtual machines, and while serverless hints at an alternative to containers, each of these different approaches is still very much in play. Indeed, you might even see these various approaches inside the same software architecture - virtual machines might make sense here, but for that part of the application over there serverless functions are the best option for the job.

With this in mind, throwing around the D word is - as mentioned above - misleading. In truth it's really just a way for the speaker to signal that X just doesn't work for them anymore and that Y is a much better option for what they're trying to do.

The vanity of performed expertise

And that's fine - learning from other people's experiences is arguably the best way to learn when it comes to technology (far better than, say, a one-dimensional manual or bone-dry documentation). But when we use the word 'dead' we hide what might actually still be interesting or valuable about a given technology. In our bid to signal our own expertise and knowledge, we close down avenues of exploration. And vanity only feeds the potentially damaging circus of hype cycles and burnout even more.

So, if Kotlin really is a better option for you then that's great. But it doesn't mean Java is dead or dying. Indeed, it's more likely that what we're seeing are use cases growing and proliferating, with engineering teams and organizations requiring a more diverse set of tools for an increasingly diverse and multi-faceted range of problems. If software does indeed die, then it's not really a linear process. Various use cases will all evolve, and over time they will start to impact one another. Maybe eventually we'll see Kotlin replace Java as the language evolves to tackle a wider range of use cases.

"X software pays more money, so Y software must be dying"

The idea that certain technologies are worth more than others feeds into the narrative that certain technologies and tools are dying. But it's just a myth - and a dangerous one at that. Although there is some research on which technologies are the most high-paying, much of it lacks context. So, although this piece on Forbes might look insightful (wow, HBase engineers earn more than $120K!), it doesn't really give you the wider picture of why these technologies command certain salaries. And, more to the point, it ignores the fact that these technologies are just tools used by people in certain job roles. Indeed, it's more accurate to say that big data engineers and architects command high salaries than to think anything as trite as 'Kafka developers are really well-respected by their employers'!

Talent gaps and industry needs

It's probably more useful to look at variation within specific job roles. By this I mean looking at what tools the highest-earning full-stack developers or architects are using. At least that would be a little more interesting and instructive. But even then it wouldn't necessarily tell you whether something has 'died'. It would simply hint at two things: where the talent gaps are, and what organizations are trying to do.
That might give you a flavor of how something is evolving - indeed, it might be useful if you're a developer or engineer looking for a new job. However, it doesn't mean that something is dead. Java developers might not be paid a great deal, but that doesn't mean the language is dead. If anything, the opposite is true: it's alive and well, with a massive pool of programmers from which employers can choose. The hype cycle might give us an indication of new opportunities and new solutions, but it doesn't necessarily offer the solutions we need right now.

But what about versioning? And end-of-life software?

Okay, these are important issues. In reality, yes, these are examples of when software really is dead. Well, not quite - there are still complications. For example, even when a new version of a particular technology is released, it still takes time for individual projects and wider communities to make the move. Even end-of-life software that's no longer supported by vendors or maintainers can still have an afterlife in poorly managed projects (the existence of articles like this suggests that this is more common than you'd hope). In a sense this is zombie software that keeps on living years after debates about whether it's alive or dead have ceased.

In theory, versioning should be the formal way through which we manage death in the software industry. But the fact that even then our nicely ordered systems still fail to properly bury and retire editions of software packages, languages, and tools highlights that in reality it's actually really hard to properly kill off software. For all the communities that want to kill software, there are always other groups, whether through force of will or plain old laziness, that want to keep it alive.

Conclusion: software is hard to kill

Perhaps that's why we like to say that certain technologies are dead: not only do such proclamations help to signify how we identify (i.e. the type of developer we are), it's also a rhetorical trick that banishes nuance and complexity. If we are managing complexity and solving tricky and nuanced problems every day, the idea that we can simplify our own area of expertise into something that is digestible - quotable, even - is a way of establishing some sort of control in a field where it feels we have anything but.

So, if you're ever tempted to say that a piece of software is 'dead,' ask yourself what you really mean. And if you overhear someone obnoxiously proclaiming a framework, library or language to be dying, consider what they're trying to say. Are they just trying to make a point about themselves? What's the other side of the story? And even if they're almost correct, to what extent aren't they correct?

Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices

Vincy Davis
17 Oct 2019
5 min read
Yesterday, Microsoft announced the launch of two new open-source projects: the Open Application Model (OAM) and Dapr. OAM, developed by Microsoft and Alibaba Cloud under the Open Web Foundation, is a specification that enables developers to define a coherent model to represent an application. The Dapr project, on the other hand, allows developers to build portable microservice applications using any language and framework, for new or existing code.

Open Application Model (OAM)

In OAM, an application is made of many components, like a MySQL database or a replicated PHP server with a corresponding load balancer. These components are used to build an application, enabling platform architects to utilize reusable components for the easy building of reliable applications. OAM also empowers application developers to separate the application description from the application deployment details, allowing them to focus on the key elements of their application instead of its operational details.

Microsoft also asserted that OAM has unique characteristics, like being platform agnostic. The official blog states, "While our initial open implementation of OAM, named Rudr, is built on top of Kubernetes, the Open Application Model itself is not tightly bound to Kubernetes. It is possible to develop implementations for numerous other environments including small-device form factors, like edge deployments and elsewhere, where Kubernetes may not be the right choice. Or serverless environments where users don't want or need the complexity of Kubernetes."

Another important feature of OAM is its design extensibility. OAM also enables platform providers to expose the unique characteristics of their platform through the trait system, which will help them build cross-platform apps wherever the necessary traits are supported.

In an interview with TechCrunch, Microsoft Azure CTO Mark Russinovich said that currently Kubernetes is "infrastructure-focused" and does not provide any resource to build a relationship between the objects of an application. Russinovich believes that OAM will solve a problem that many developers and ops teams are facing today. Commenting on the cooperation with Alibaba Cloud on this specification, Russinovich observed that both companies encountered the same problems when they talked to their customers and internal teams. He further said that over time Alibaba Cloud will launch a managed service based on OAM, and chances are that Microsoft will do the same.
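
To give a flavor of what an OAM component looks like, here is a sketch of a component manifest in the style of the early v1alpha1 spec that Rudr implements; the names, image, and values are invented for illustration and are not from Microsoft's announcement:

    apiVersion: core.oam.dev/v1alpha1
    kind: ComponentSchematic
    metadata:
      name: hello-server                 # hypothetical component
    spec:
      workloadType: core.oam.dev/v1alpha1.Server
      containers:
        - name: web
          image: example/hello-server:v1   # hypothetical image
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP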

The Dapr project for building microservice applications

This is an alpha release of Dapr, an event-driven runtime that helps developers build resilient, stateless and stateful microservice applications for the cloud and edge. It allows applications to be built using any programming language and developer framework. "In addition, through the open source project, we welcome the community to add new building blocks and contribute new components into existing ones. Dapr is completely platform agnostic, meaning you can run your applications locally, on any Kubernetes cluster, and other hosting environments that Dapr integrates with. This enables developers to build microservice applications that can run on both the cloud and edge with no code changes," stated the official blog. (Image source: Microsoft)

APIs in Dapr are exposed as a sidecar architecture (either as a container or as a process) and do not require the application code to include any Dapr runtime code. This simplifies Dapr integration from other runtimes and keeps application logic separate, for improved supportability. (Image source: Microsoft)

Building blocks of Dapr

Resilient service-to-service invocation: enables method calls, including retries, on remote services, wherever they are running in the supported hosting environment.
State management for key/value pairs: allows long-running, highly available, stateful services to be written easily, alongside stateless services in the same application (see the sketch below).
Publish and subscribe messaging between services: enables event-driven architectures that simplify horizontal scalability and makes them resilient to failure.
Event-driven resource bindings: helps in building event-driven architectures for scale and resiliency by receiving and sending events to and from any external resources, such as databases, queues, file systems, blob stores, webhooks, etc.
Virtual actors: a pattern for stateless and stateful objects that makes concurrency simple, with method and state encapsulation. Dapr also provides state and life-cycle management for actor activation/deactivation, plus timers and reminders to wake up actors.
Distributed tracing between services: enables easy diagnosis of inter-service calls in production using the W3C Trace Context standard. It also allows pushing events to tracing and monitoring systems.

Users have liked both the open-source projects, especially Dapr. A user on Hacker News comments, "I'm excited by Dapr! If I understand it correctly, it will make it easier for me to build applications by separating the "plumbing" (stateful & handled by Dapr) from my business logic (stateless, speaks to Dapr over gRPC). If I build using event-driven patterns, my business logic can be called in response to state changes in the system as a whole. I think an example of stateful "plumbing" is a non-functional concern such as retrying a service call or a write to a queue if the initial attempt fails. Since Dapr runs next to my application as a sidecar, it's unlikely that communication failures will occur within the local node."
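
As a rough illustration of the sidecar model, here is how an application might call the state-management building block over HTTP. This sketch follows the shape of Dapr's documented HTTP API, but the default port (3500), the route, and the store name "statestore" are assumptions that may differ in the alpha release:

    # Save state through the Dapr sidecar running next to the app
    curl -X POST http://localhost:3500/v1.0/state/statestore \
      -H "Content-Type: application/json" \
      -d '[{ "key": "order-1", "value": { "status": "paid" } }]'

    # Read it back; the sidecar talks to the actual state store
    curl http://localhost:3500/v1.0/state/statestore/order-1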

Read next:
The new WebSocket Inspector will be released in Firefox 71
Made by Google 2019: Google's hardware event unveils Pixel 4 and announces the launch date of Google Stadia
What to expect from D programming language in the near future
An unpatched security issue in the Kubernetes API is vulnerable to a "billion laughs" attack
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements

The new WebSocket Inspector will be released in Firefox 71

Fatema Patrawala
17 Oct 2019
4 min read
On Tuesday, the Firefox DevTools team announced that the new WebSocket (WS) inspector will be available in Firefox 71. It is currently ready for developers to use in Firefox Developer Edition.

The WebSocket API is used to create a persistent connection between a client and a server. Because the API sends and receives data at any time, it is used mainly in applications requiring real-time communication. Although it is possible to work directly with the WS API, some existing libraries come in handy and help save time. These libraries can help with connection failures, proxies, authentication and authorization, scalability, and much more. The WS inspector in Firefox DevTools currently supports Socket.IO and SockJS; support for more libraries is still a work in progress.

Key features in the Firefox WebSocket inspector

The WebSocket inspector is part of the existing Network panel UI in DevTools. It was already possible to filter the content for opened WS connections in the panel, but now you can see the actual data transferred through WS frames.
The WS UI now offers a fresh new Messages panel that can be used to inspect WS frames sent and received through the selected WS connection.
There are Data and Time columns visible by default, and you can customize the interface to see more columns by right-clicking on the header.
The WS inspector currently supports the following WS protocols: plain JSON, Socket.IO, and SockJS. SignalR and WAMP will be supported soon.
You can use the pause/resume button in the Network panel toolbar to stop intercepting WS traffic.

The Firefox team is still working on a few things for this release: for example, a binary payload viewer, indicating closed connections, more protocols like SignalR and WAMP, exporting WS frames, and more.

For developers this is a major improvement, and the community is really happy with the news. One of them comments on Reddit, "Finally! Have been stuck rolling with Chrome whenever I'm debugging websocket issues until now, because it's just so damn useful to see the exact messages sent and received."

Another user commented, "This came at the most perfect time... trying to interface with a Socket.IO server from a Flutter app is difficult without tools to really look at the internals and see what's going on"

Some also feel that with such improvements Firefox will start to challenge the current Chromium dominance. The comment reads, "I hope that in improving its dev tooling with things like WS inspection, Firefox starts to turn the tide from the Chromium's current dominance. Pleasing webdevs seems to be the key to winning browser wars. The general pattern is, the devs switch to their preferred browser. When building sites, they do all their build testing against their favourite browser, and only make sure it functions on other browsers (however poorly) as an afterthought. Then everyone else switches to suit, because it's a better experience. It happened when IE was dominant (partly becuse of dodgy business practices, but also partly because ActiveX was more powerful than early JS). But then Firefox was faster and had [better] devtools and add-ons, so the devs switched to Firefox and everyone followed suit. Then Chrome came onto the scene as a faster browser with even better devtools, and now Chromium+Forks is over three quarters of the browser market share. A browser monopoly is bad for the web ecosystem, no matter what browser happens to be dominant."

To know more about this news, check out the official announcement on the Firefox blog.
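
To have something for the inspector to show, a page needs to open a WebSocket connection. A minimal sketch, with a hypothetical endpoint URL:

    // Every frame sent or received here shows up in the new Messages panel.
    const socket = new WebSocket('wss://echo.example.com');  // hypothetical endpoint

    socket.addEventListener('open', () => {
      socket.send(JSON.stringify({ type: 'ping', sentAt: Date.now() }));
    });

    socket.addEventListener('message', (event) => {
      console.log('frame received:', event.data);
    });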

Read next:
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit
Mozilla brings back Firefox's Test Pilot Program with the introduction of Firefox Private Network Beta

Made by Google 2019: Google’s hardware event unveils Pixel 4 and announces the launch date of Google Stadia

Sandesh Deshpande
17 Oct 2019
5 min read
After Apple, Microsoft, and OnePlus, tech giant Google is the next to showcase its new range of products in Techtober. At its annual hardware event, 'Made by Google', in New York City, Google launched a variety of new gear. Though the focus of the event was on Google's flagship phones, the Pixel 4 and Pixel 4 XL, the tech giant also announced the launch date of its highly anticipated gaming subscription service, Google Stadia.

The theme of the event was ambient computing and how it combines with artificial intelligence, wireless tech, and cloud computing. The theme was quite evident in all the products Google showcased at the event. "Our vision for ambient computing is to create a single, consistent experience at home, at work or on the go, whenever you need it," said Rick Osterloh, Google's head of hardware. "Your devices and services work together," Osterloh said. "And it's fluid so it disappears into the background."

Let's look at the various products announced at 'Made by Google'.

Google Stadia to release on November 19

The event kicked off with the release date (November 19) of Google's game streaming service, Stadia. With Google Stadia you can play games from your desktop, Google's Chrome web browser, smartphones, smart televisions, and tablets, or via Chromecast. Stadia is a cloud-based platform that streams games directly instead of rendering them on a console or a powerful local PC. Google also shared the inspiration behind the design of the Stadia controller.

https://youtube.com/watch?v=Pwb6d2wK3Qw

Pixel 4 and Pixel 4 XL now available

After much anticipation, Google finally launched the next phones in its flagship Pixel series in two variants: the Pixel 4 and the Pixel 4 XL. The Pixel 4 has a 5.7″ screen and a 2,800 mAh battery, while the Pixel 4 XL comes in at 6.3″ with a 3,700 mAh battery. Both run on the Snapdragon 855 chipset with 6GB of RAM. Both phones come with dual rear cameras, and their displays have a 90 Hz refresh rate. They also have "Project Soli" radar powering face unlock and gesture recognition, allowing you to switch songs, snooze alarms or silence calls with just a wave of your hand. The Pixel 4 also comes with crash detection (US only for now), which can recognize if you've been in an automobile accident and automatically call 911 for you.

https://youtube.com/watch?v=_F7YRde8DuE

The Pixel 4 also has an advanced recording app that can record and transcribe audio at the same time. It can also identify specific words and sounds, allowing you to search specific parts of a recording with an inbuilt search function. This processing happens on your local device.

One cannot talk about Pixel phones without mentioning camera features. Both the Pixel 4 and Pixel 4 XL come with three cameras: a 12.2-megapixel f/1.7 main camera, a 16-megapixel f/2.4 telephoto rear camera, and an 8-megapixel selfie camera. For video, these cameras can record 4K at 30 fps and 1080p at 120 fps.

The Google Pixel 4 will be released on October 24 but is available for pre-order today. The 64GB version of the Pixel 4 will start at $799, with a 128GB option available for $899. The 64GB Pixel 4 XL will start at $899, with a 128GB option for $999.

New wireless headphones: Pixel Buds 2

Google also announced its next-generation wireless headphones, the Pixel Buds 2.
Now you can have direct access to the Google Assistant with the 'Hey Google' command, and the Pixel Buds also support long-range Bluetooth connectivity (according to Google, the Pixel Buds will remain connected to your phone within a range of three rooms, or a football field's length). The Pixel Buds will be available in Spring 2020 for around $179.

https://youtube.com/watch?v=2MmAbDJK8YY

New Chrome OS laptop: Pixelbook Go

Google has refreshed its Chromebook series and launched a new product, the Pixelbook Go. "We wanted to create a thin and light laptop that was really fast, and also have it last all day. And of course, we wanted it to look and feel beautiful," said Ivy Ross, Google's VP and head of hardware design, at the event. Weighing only two pounds and measuring 13 mm thin, the Pixelbook Go is portable while also being loaded with 16GB of RAM and up to 256GB of storage. The company has promised around 12 hours of battery life. The base model starts at $649. (Source: Google Blog)

Google Home Mini is now 'Nest Mini'

Google's Home Mini assistant smart speaker has been renamed the 'Nest Mini'. It is now made of recycled plastic bottles, is wall-mountable, and includes a machine learning chip for faster response times. The smart speaker also has additional microphones suitable for louder environments. The Nest Mini will be available from October 22 for $49. (Source: Google Blog)

Google also launched Nest WiFi, a combination router and smart speaker that is faster and features 25% better coverage compared to its predecessor, Google WiFi. The routers come in a 2-pack for $269 or a 3-pack for $349 and go on sale on November 4.

You can watch the event on YouTube.

Read next:
Bazel 1.0, Google's polyglot build system switches to semantic versioning for better stability
Google Project Zero discloses a zero-day Android exploit in Pixel, Huawei, Xiaomi and Samsung devices
Google Chrome Keystone update can render your Mac system unbootable

Python 3.8 is now available with walrus operator, positional-only parameters support for Vectorcall, and more

Sugandha Lahoti
15 Oct 2019
6 min read
Yesterday, the latest version of the Python programming language, Python 3.8, was made available with multiple new improvements and features. Features include the new walrus operator and positional-only parameters, runtime audit hooks, Vectorcall (a fast calling protocol for CPython), and more. Earlier this month, the team behind Python announced the release of Python 3.8b2, the second of four planned beta releases.

What's new in Python 3.8

PEP 572: New walrus operator in assignment expressions

Python 3.8 has a new walrus operator := that assigns values to variables as part of a larger expression. It is useful when matching regular expressions where match objects are needed twice. It can also be used with while-loops that compute a value to test loop termination and then need that same value again in the body of the loop, and in list comprehensions where a value computed in a filtering condition is also needed in the expression body.

The walrus operator was proposed in PEP 572 (Assignment Expressions) by Chris Angelico, Tim Peters, and Guido van Rossum last year. Since then it has been heavily discussed in the Python community, with many questioning whether it is a needed improvement. Others are excited, as the operator does make code more readable. One user commented on HN, "The "walrus operator" will occasionally be useful, but I doubt I will find many effective uses for it. Same with the forced positional/keyword arguments and the "self-documenting" f-string expressions. Even when they have a use, it's usually just to save one line of code or a few extra characters."

PEP 570: New function parameter syntax for positional-only parameters

Python 3.8 has a new function parameter syntax / to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments. This notation allows pure Python functions to fully emulate the behaviors of existing C-coded functions. It can be used to preclude keyword arguments when the parameter name is not helpful, and it allows the parameter name to be changed in the future without the risk of breaking client code.

As with PEP 572, this proposal got mixed reactions from Python developers. In support, one developer said, "Position-only parameters already exist in cpython builtins like range and min. Making their support at the language level would make their existence less confusing and documented." Others think that this will allow authors to "dictate" how their methods can be used. "Not the biggest fan of this one because it allows library authors to overly dictate how their functions can be used, as in, mark an argument as positional merely because they want to. But cool all the same," a Redditor commented.
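
Here is a short sketch of the two syntax additions described above; the function and variable names are illustrative:

    import re

    # Walrus operator: bind the match object and test it in one expression.
    line = "error: disk full"
    if (match := re.match(r"error: (.*)", line)):
        print(match.group(1))  # disk full

    # Positional-only parameters: everything before the "/" must be
    # passed positionally; everything after the "*" must be a keyword.
    def power(base, exponent, /, *, rounded=False):
        result = base ** exponent
        return round(result) if rounded else result

    power(2, 10)                  # fine
    # power(base=2, exponent=10)  # TypeError: positional-only arguments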
PEP 570: New function parameter syntax for positional-only parameters

Python 3.8 has a new function parameter syntax, /, to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments. This notation allows pure Python functions to fully emulate the behavior of existing C-coded functions. It can be used to preclude keyword arguments when the parameter name is not helpful, and it allows the parameter name to be changed in the future without the risk of breaking client code.

As with PEP 572, this proposal got mixed reactions from Python developers. In support, one developer said, "Position-only parameters already exist in cpython builtins like range and min. Making their support at the language level would make their existence less confusing and documented." Others think that this will allow authors to "dictate" how their methods can be used. "Not the biggest fan of this one because it allows library authors to overly dictate how their functions can be used, as in, mark an argument as positional merely because they want to. But cool all the same," a Redditor commented.

PEP 578: Python audit hooks and verified open hook

Python 3.8 now has an audit hook and a verified open hook. These hooks allow applications and frameworks written in pure Python code to take advantage of extra notifications, and allow embedders or system administrators to deploy builds of Python where auditing is always enabled. They are available from both Python and native code.

PEP 587: New C API to configure the Python initialization

Though Python is highly configurable, its configuration is scattered all around the code. Python 3.8 adds a new C API to configure the Python initialization, providing finer control over the whole configuration and better error reporting. This PEP also adds _PyRuntimeState.preconfig (PyPreConfig type) and PyInterpreterState.config (PyConfig type) fields to internal structures. PyInterpreterState.config becomes the new reference configuration, replacing global configuration variables and other private variables.

PEP 590: Provisional support for Vectorcall, a fast calling protocol for CPython

A currently provisional Vectorcall protocol is added to the Python/C API. It is meant to formalize existing optimizations that were already done for various classes. Any extension type implementing a callable can use this protocol. It will be made fully public in Python 3.9.

PEP 574: Pickle protocol 5 supports out-of-band data buffers

Pickle protocol 5 introduces support for out-of-band buffers. This means PEP 3118-compatible data can be transmitted separately from the main pickle stream, at the discretion of the communication layer.

Parallel filesystem cache for compiled bytecode files

There is a new PYTHONPYCACHEPREFIX setting that configures the implicit bytecode cache to use a separate parallel filesystem tree, rather than the default __pycache__ subdirectories within each source directory.

Python uses the same ABI whether it's built in release or debug mode

With Python 3.8, Python uses the same ABI whether it's built in release or debug mode. On Unix, when Python is built in debug mode, it is now possible to load C extensions built in release mode and C extensions built using the stable ABI; import now looks for both. Also, on Unix, C extensions are no longer linked to libpython except on Android and Cygwin.

f-strings now have a = specifier

Formatted strings (f-strings) were introduced in Python 3.6 with PEP 498. They enable you to evaluate an expression as part of a string, inserting the result of function calls and so on. Python 3.8 adds a = specifier to f-strings for self-documenting expressions and debugging. An f-string such as f'{expr=}' will expand to the text of the expression, an equal sign, then the representation of the evaluated expression.

One developer expressed their delight on Hacker News: "F strings are pretty awesome. I'm coming from JavaScript and partially java background. JavaScript's String concatenation can become too complex and I have difficulty with large strings." Another developer said, "The expansion of f-strings is a welcome addition. The more I use them, the happier I am that they exist." Someone added, "This makes clean string interpolation so much easier to do, especially for print statements. It's almost hard to use python < 3.6 now because of them."
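A brief illustrative sketch pulling together the / marker and the = specifier (the function and its arguments are invented for the example):

    # PEP 570: parameters before "/" are positional-only
    def clamp(value, low, high, /):
        return max(low, min(high, value))

    print(clamp(12, 0, 10))  # 10
    # clamp(value=12, low=0, high=10) would raise a TypeError,
    # because positional-only parameters cannot be passed by keyword

    # f-string "=" specifier: expands to the expression text and its value
    x = 7
    print(f"{x=}")                 # x=7
    print(f"{clamp(12, 0, 10)=}")  # clamp(12, 0, 10)=10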
New metadata module

Python 3.8 has a new importlib.metadata module that provides (provisional) support for reading metadata from third-party packages. It can, for instance, extract an installed package's version number, list of entry points, and more.
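For example, a short sketch of the new module (pip is used here only as an example of a package likely to be installed):

    from importlib import metadata

    # Read an installed package's version number
    print(metadata.version("pip"))

    # List the console-script entry points of installed packages
    for ep in metadata.entry_points().get("console_scripts", []):
        print(ep.name)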
You can go through the other improved modules, language changes, build and C API changes, and API and feature removals in Python 3.8 on the Python docs. For full details, see the changelog.

Python 3.8b2 new features: the walrus operator, positional-only parameters, and much more
Python 3.8 beta 1 is now ready for you to test
Łukasz Langa at PyLondinium19: "If Python stays synonymous with CPython for too long, we'll be in big trouble."
PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3


Can DevOps promote empathy in software engineering?

Richard Gall
14 Oct 2019
8 min read
If DevOps is, at a really basic level, about getting different teams to work together, then you could say that DevOps is a discipline that promotes empathy. It's an interesting idea, and one that's explored in Viktor Farcic's book The DevOps Paradox. What's particularly significant about the notion of empathy existing inside DevOps is that it could help us to focus on exactly what we're trying to achieve by employing it. In turn, this could help us develop or evolve the way we actually do DevOps - so, instead of worrying about a new toolchain or new platform purchases, we could, perhaps, just explore ways of getting people to understand what their respective needs are.

The DevOps Paradox collects a number of different insights on empathy in DevOps and what it means, not just for the field itself, but also for its place in a modern business. Let's take a look at some DevOps experts' thoughts on empathy and DevOps.

"Empathy helps developers put the user at the center of what they do"

Jeff Sussna (@jeffsussna) is an IT consultant and coach who helps organizations design, build, and deliver products quickly.

"There's a lot of confusion and anxiety about [empathy's] meaning, and a lot of people tend to misunderstand it. Sometimes people think empathy means wallowing in someone else's pain. In fact, there's actually a philosopher from Yale University who is now putting out the idea that empathy is actually bad, and that it's the cause of all of the world's problems, and what we need instead is compassion.

"From my perspective, that represents a misunderstanding of both empathy and compassion, but my favorite is when people say things like, 'Sociopaths are really good at empathizing'. My answer to that is, if you have a sociopath in your organization, you have a much bigger problem, and DevOps isn't going to solve it. At that point, you have an HR problem. What you need to distinguish between is emotional empathy and cognitive empathy, and I use cognitive empathy in the context of DevOps in a very simple way, which is the ability to think about things as if from another's perspective.

"If you're a developer and you think, 'What is the experience of deploying and running my application going to be?' you're thinking about it from the perspective of the operations person.

"If you're an operations person and you're thinking in terms of, 'What is the experience going to be when you need to spin up a test server in a matter of hours in order to test a hotfix because all of your testing swim lanes are full of other things, and what does that mean for my process of provisioning servers?' then you're thinking about things from the tester's point of view.

"And so, to me, that's empathy, and that's empathizing, which is really at the heart of customer service. It's at the heart of design thinking, and it's at the heart of product development. What is it that our customers are trying to accomplish, what help do they need from us, and how can we help them?"

Listen: Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]

"As soon as you have empathy you can understand why you provide value"

Damien Duportal (@DamienDuportal) is a Developer Advocate at Traefik, Containous' cloud native edge router.

"If you have a tool that helps you to share empathy, then you have a great foundation for starting the conversation. Even if this seems boring to engineers, at least they'll start talking and listening to each other.
"I mean, once they've stopped debating tabs versus spaces or JavaScript versus Java - or whatever sterile debate it is - they'll have to focus on the value they're going to provide. So, this is really how I would sum up DevOps, which again is about how you bring empathy back and focus on the value creation and interaction side of IT.

"Empathy is one of the most advanced bricks you can have for building human interaction. If we are able to achieve so many different things - with different people, different opinions, and different cultures - it's because we, as humans, are capable of having high levels of empathy.

"As soon as you have empathy, you can understand why you provide value. If you don't, then what's the point of trying to create value? It will only be from your point of view, and there are over seven billion other people in the world. So, ultimately, we need empathy to understand what we are going to do with our tools."

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

"Let's not wait for culture to change: culture is in the rearview mirror"

As CSO of PraxisFlow, Kevin Behr spends his time working with clients who seek to develop their DevOps process. His 25 years of experience have been driven by a passion for engaging with the complex problems that large IT organizations face, and how we can use DevOps to solve them. You can follow Kevin on Twitter at @kevinbehr.

"What do we mean when we talk about empathy in DevOps? We're saying that we understand what it feels like to do what you're doing, and that I'll never do that to you again. So, let's build a system together that will allow us to never be there.

"DevOps to me has evolved into a lot of tools, because we're humans, and humans love tools of all kinds. As a species, we've defined ourselves by our tools and technologies. And, as a species, we also talk about culture a lot, but, to my mind, culture is a rearview mirror. Culture is just all the things that we've done: our organizational disposition. The way to change culture is to do things differently. Let's not wait for culture, because culture is in the rearview mirror: it's the past. If you're in a transition, then what are you transitioning toward, and what does that mean about how you need to act?

"The very interesting thing about DevOps is that while, frequently, its mission is to create a change in the culture of an organization, this change requires far more than coordination: it also requires pure collaboration, and co-laboring. These can be particularly awkward to achieve given the likelihood that we haven't worked with the people in an organization before. And it can become intensely awkward when those people may have already made villains out of each other because they couldn't get what they wanted. The goal of the DevOps process is to create a new culture, despite these challenges.

"When you manage to introduce empathy to a team, the development and the operations people seem finally to come together. You suddenly hear someone in operations say, 'Oh, can we do that differently? When you threw that thing at me last time, it gave me a black eye and I had to stay up for four days straight!' And the developer is like, 'It did? How did it do that? Next time, if something happens, please call me, I want to come help.' That empathy of figuring out what went wrong, and working together, is what builds trust."
"The CFO doesn't give a shit about empathy"

Chris Riley (@HoardingInfo) is a self-proclaimed bad coder turned editor of Sweetcode.io at fixate.io, a content marketing firm for those who sell to technical audiences. Through this, he's involved with DevOps, SecOps, big data, machine learning, and blockchain. He's a member of the DevOps Institute Board of Regents, a position he's held for over four years.

"...The CFO doesn't give a shit about empathy, and the person with the money may not care about that at all. The HR department might, but that's the problem with selling anything. You have to speak their language, and the CFO is going to respond to money. Either you're saving us money, or you're making us more money, and I think DevOps is doing both, which is cool. I think what's nice about that explanation is the fact it doesn't seem insurmountable. It's kind of like how Pixar was structured.

"After Steve Jobs started at Pixar, he structured all of the work environments where the idea was to create chance encounters among the employees, so that the graphic designer of one movie would talk to the application developer of another, even when they don't have any real reason to interact with each other. The way they did it at Pixar was that, as everybody has to go to the bathroom, they put the bathrooms in a large communal area where these people are going to run into each other - that's what created that empathy. They understand what each other's job is. They're excited about each other's movies. They're excited about what they're working on, and they're aware of that in everything they do. It's a really good explanation."

What do you think? Can DevOps help promote empathy inside engineering teams and across wider businesses? Or is there anything else we should be doing?


Modern web development: what makes it ‘modern’?

Richard Gall
14 Oct 2019
10 min read
The phrase 'modern web development' is one that I have clung to during years writing copy for Packt books. But what does it really mean? I know it means something because it sounds right - but there's still a part of me that feels it's a bit vague and empty.

Although it might sound somewhat fluffy, the truth is that there probably is such a thing as modern web development. Insofar as the field has changed significantly in a matter of years, and things are different now from how they were in, say, 2013, modern web development can be characterized as all the things being done in web development in 2019 that are different from 5-10 years ago. By this I don't just mean trends like artificial intelligence and mobile development (although those are both important). I'm also talking about the more specific ways in which we actually build web projects.

So, let's take a look at how we got to where we are and the various ways in which 'modern web development' is, well, modern.

The story of modern web development: how we got here

It sounds obvious, but the catalyst for the changes we currently see in web development is the rise of mobile.

Mobile and the rise of web applications

There are a few different parts to this that have got us to where we are today. In the first instance, the growth of mobile in the middle part of this decade (around 2013 or 2014) initiated the trend of mobile-first or responsive web design. Those terms might sound a bit old-hat. If they do, it's a mark of how quickly the web development world has changed. Primarily, though, this was about appearance and UI - making web properties easy to use and navigate on mobile devices, rather than just desktop. Tools like Bootstrap grew quickly, providing an easy and templated way to build mobile-first and responsive websites.

But what began as a trend concerned primarily with appearance later shifted as mobile usage grew. This called for a more sophisticated approach as mobile users came to expect richer and faster web experiences, and businesses a new way to monetize these significant changes in user behavior.

Explore Packt's Bootstrap titles here.

Lightweight apps for data-intensive user experiences

This is where concepts like the single page web app came to the fore. Lightweight and dynamic, and capable of handling data-intensive tasks and changes in state, single page web apps were unique in that they handled logic in the browser rather than on the server. This was arguably a watershed in changing how we think about web development. It was instrumental in collapsing the well-established distinction between back end and front end.

Behind this trend we saw a shift towards new technologies. Node.js quietly emerged on the scene (arguably it's only in the last couple of years that its popularity has really exploded), and frameworks like Angular were at the height of their popularity.

Find a massive range of Node.js eBooks and videos on the Packt store.

Full-stack web development

It's around this time that full-stack development started to accelerate as a trend. Look at Google Trends: you can see how searches for the phrase have grown since the beginning of 2012, and if you look closely, it's around 2015 that the term undergoes a step change in the level of interest. Undoubtedly one of the reasons for this is that the relationship between client and server was starting to shift. This meant the web developer skill set was starting to change as well.
As a web developer, you weren't only concerned with how to build the front end, but also with how that front end managed dynamic content and different states.

The rise and fall of Angular

A good corollary to this tale is the fate of AngularJS. While it rose to the top amidst the chaos and confusion of the mid-teens framework bubble, as the mobile revolution matured in a way that gave way to more sophisticated web applications, the framework soon became too cumbersome. And while Google - the framework's creator - aimed to keep it up to date with Angular 2 and subsequent versions, delays and missteps meant the project lost ground to React.

Indeed, this isn't to say that Angular is dead and buried. There are plenty of reasons to use Angular over React and other JavaScript tools if the use case is right. But it is nevertheless the case that the Angular project no longer defines web development to the extent that it used to.

Explore Packt's Angular eBooks and videos.

The fact that Ionic, the JavaScript mobile framework, is now backed by Web Components rather than Angular is an important indicator of what modern web development actually looks like - and how it contrasts with what we were doing just a few years ago.

The core elements of modern web development in 2019

So, there are a number of core components to modern web development that have evolved out of industry changes over the last decade. Some of these are tools, some are ideas and approaches. All of them are based on the need to balance complex requirements with performance and simplicity.

Web Components

Web Components are the most important element if we're trying to characterize 'modern' web development. The principle is straightforward: Web Components provide a set of reusable custom elements. This makes it easier to build web pages and applications without writing additional lines of code that add complexity to your codebase. The main thing to keep in mind here is that Web Components improve encapsulation. This concept, which is really about building in a more modular and loosely coupled manner, is crucial when thinking about what makes modern web development modern.

There are three main elements to Web Components:

- Custom elements: a set of JavaScript APIs that you can call and define however you need them to work.
- The shadow DOM: a DOM that's attached to individual elements on your page. It isolates the resources that different elements and components need to work, which makes them easier to manage from a development perspective and can unlock better performance for users.
- HTML templates: bits of HTML that can be reused and called upon only when needed.

Together, these elements paint a picture of modern web development: one in which developers are trying to handle more complexity and sophistication while improving their productivity and efficiency.

Want to get started with Web Components? You can! Read Getting Started with Web Components.

React.js

One of the reasons that React managed to usurp Angular is the fact that it does many of the things that Google wanted Angular to do. Perhaps the most significant difference between React and Angular is that React tackles some of the scalability issues presented by Angular's two-way data binding (which was, for a while, incredibly exciting and innovative) with unidirectional data flow. There's a lot of discussion around this, but by moving towards a singular model of data flow, applications can handle data on a much larger scale without running into problems.

Elsewhere, concepts like the virtual DOM (which is distinct from a shadow DOM - more on that here) help to improve encapsulation for developers. Indeed, flexibility is one of the biggest draws of React. To use Angular you need to know TypeScript, for example. And although you can use TypeScript when working with React, it isn't essential. You have options.

Explore Packt's React.js eBooks and videos.

Redux, Flux, and how we think about application state

The growth of React has got web developers thinking more and more about application state. While this isn't new, as applications have become more interactive and complex it has become more important for developers to take the issue of 'statefulness' seriously. Consequently, libraries such as Flux and Redux have emerged on the scene, which act as objects in which all the values that comprise an application's state can be stored.

This article on egghead.io explains why state is important in a clear and concise way: "For me, the key to understanding state management was when I realised that there is always state... users perform actions, and things change in response to those actions. State management makes the state of your app tangible in the form of a data structure that you can read from and write to. It makes your 'invisible' state clearly visible for you to work with."
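Redux itself is a JavaScript library, but the pattern it implements - a single store, actions that describe changes, and a reducer that computes the next state - is language-agnostic. Here is a minimal, illustrative sketch of that pattern in Python, the language used for examples throughout this document (all names are invented for the example):

    # A reducer is a pure function: (state, action) -> new state
    def reducer(state, action):
        if action["type"] == "ADD_TODO":
            return {**state, "todos": state["todos"] + [action["text"]]}
        return state

    class Store:
        """A single object holding the whole application state."""
        def __init__(self, reducer, initial_state):
            self._reducer = reducer
            self.state = initial_state

        def dispatch(self, action):
            # Dispatching an action is the only way the state changes
            self.state = self._reducer(self.state, action)

    store = Store(reducer, {"todos": []})
    store.dispatch({"type": "ADD_TODO", "text": "ship the release"})
    print(store.state)  # {'todos': ['ship the release']}

The 'invisible' state the quote describes becomes a concrete data structure you can read from and write to.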
Find Redux eBooks and videos from Packt. Or, check out a wide range of Flux titles.

APIs and microservices

One software trend we haven't mentioned yet, but which nevertheless remains important when thinking about modern web development, is the rise of APIs and microservices. For web developers, the trend is reinforcing the importance of the encapsulation and modularity that things like Web Components and React are designed to help with. Insofar as microservices are simplifying the development process but adding architectural complexity, it's not hard to see that web developers are having to think in a more holistic manner about how their applications interact with a variety of services and data sources.

Indeed, you could even say that this trend is only extending the growth of the full-stack developer as a job role. If development today is more about pulling together multiple different services and components, rather than different roles building different parts of a monolithic app, it makes sense that the demand for full-stack developers is growing. But there's another, more specific, way in which the microservices trend is influencing modern web development: micro frontends.

Micro frontends

Micro frontends take the concept of microservices and apply it to the front end. Rather than just building an application in which the front end is powered by microservices (in a way that's common today), you also treat individual constituent parts of the front end as microservices. In turn, you build teams around each of these parts. So, perhaps one works on search, another on checkout, another on user accounts. This is more of an organizational shift than a technological one. But it again feeds into the idea that modern web development is something modular, broken up and - usually - full-stack.

Conclusion: modern web development is both a set of tools and a way of thinking

The web development toolset has been evolving for more than a decade.
Mobile was the catalyst for significant change, and has helped get us to a world that is modular, lightweight, and highly flexible. Heavyweight frameworks like AngularJS paved the way, but it appears an alternative has found real purchase with the wider development community.

Of course, it won't always be this way. And although React has dominated developer mindshare for a good three years or so (quite a while in the engineering world), something will certainly replace it at some point. But however the toolchain evolves, the basic idea that we build better applications and websites when we break things apart will likely stick. Complexity won't decrease. Even if writing code gets easier, understanding how the component parts of an application fit together - from front end elements to API integration - will become crucial. Even if it starts to bend your mind, and pose new problems you hadn't even thought of, it's clear that things are going to remain interesting as far as the future of web development is concerned.

Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more

Bhagyashree R
11 Oct 2019
5 min read
Yesterday, at the PyTorch Developer Conference, Facebook announced the release of PyTorch 1.3. This release comes with three experimental features: named tensors, 8-bit model quantization, and PyTorch Mobile. Along with these exciting features, Facebook also announced the general availability of Google Cloud TPU support and a newly launched integration with Alibaba Cloud.

Key updates in PyTorch 1.3

Named tensors for more readable and maintainable code

Though tensors are the building blocks of modern machine learning, researchers have argued that they are "broken." Tensors have their own share of shortcomings: they expose private dimensions, broadcast based on absolute position, and keep type information in the documentation. PyTorch 1.3 tries to solve this problem by introducing experimental support for named tensors, which was proposed by Sasha Rush, an Associate Professor at Cornell Tech. He has built a library called NamedTensor, which serves as a "thin wrapper" on the Torch tensor.

This update introduces a few changes to the API: dimension access and reduction now use a 'dim' argument instead of an index, constructing and adding dimensions requires a 'name' argument, and functions now broadcast based on set operations, not through heuristic ordering rules.

8-bit model quantization for mobile-optimized AI

Quantization in deep learning is the method of approximating a neural network that uses 32-bit floating-point numbers with a neural network that uses a lower-precision numerical format. It is used to reduce the bandwidth and compute requirements of deep learning models, which is essential for on-device applications that have limited memory and computation budgets. PyTorch 1.3 brings experimental support for 8-bit model quantization with an eager mode Python API for efficient deployment on servers and edge devices. This feature includes techniques like post-training quantization, dynamic quantization, and quantization-aware training. Moving from 32 bits to 8 bits can result in two to four times faster computations with one-quarter the memory usage.
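As a rough sketch of the experimental named tensor API (assuming the behavior described in the release notes), dimensions can be named at construction and then referred to by name:

    import torch

    # Name dimensions at construction time instead of tracking positions
    imgs = torch.randn(2, 3, 32, 32, names=("N", "C", "H", "W"))
    print(imgs.names)  # ('N', 'C', 'H', 'W')

    # Reduce over dimensions by name rather than by integer index
    per_channel = imgs.sum("H").sum("W")
    print(per_channel.names)  # ('N', 'C')

And dynamic quantization of an existing model is exposed through the eager mode API; here is a minimal sketch (the toy model is invented for illustration):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

    # Post-training dynamic quantization: weights are stored as int8,
    # activations are quantized on the fly at inference time
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )
    print(quantized)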
PyTorch Mobile for more efficient on-device machine learning

Running machine learning models directly on edge devices is of great importance as it reduces latency. This is why PyTorch 1.3 introduces PyTorch Mobile, which enables "an end-to-end workflow from Python to deployment on iOS and Android." The current release is experimental. In future releases, we can expect PyTorch Mobile to come with build-level optimization, selective compilation, support for the QNNPACK quantized kernel libraries and ARM CPUs, further performance improvements, and more.

Model interpretability and privacy tools in PyTorch 1.3

Captum and Captum Insights

Captum is an easy-to-use model interpretability library for PyTorch. It is backed by state-of-the-art interpretability algorithms such as Integrated Gradients, DeepLIFT, and Conductance to help developers improve and troubleshoot their models. Developers can identify the different features that contribute to a model's output and improve its design. Facebook has also released an early version of Captum Insights, an interpretability visualization widget built on top of Captum. It works across images, text, and other features to help users understand feature attribution. Check out Facebook's announcement to know more about Captum.

CrypTen

Machine learning via cloud-based platforms poses various security and privacy challenges. Facebook writes, "In particular, users of these platforms may not want or be able to share unencrypted data, which prevents them from taking full advantage of ML tools." PyTorch 1.3 comes with CrypTen, a framework for privacy-preserving machine learning. It aims to make secure computing techniques accessible to machine learning practitioners. You can find more about CrypTen on GitHub.

Libraries for multimodal AI systems

- Detectron2: an object detection library implemented in PyTorch. It features support for the latest models and tasks, increased flexibility to aid computer vision research, and improvements in maintainability and scalability to support production use cases.
- Fairseq gets speech extensions: with this release, Fairseq, a framework for sequence-to-sequence applications such as language translation, includes support for end-to-end learning for speech and audio recognition tasks.

The release of PyTorch 1.3 started a discussion on Hacker News, and naturally many developers compared it with TensorFlow 2.0. Here's what one user commented: "This is a common trend for being second in the market when we see Pytorch and TensorFlow 2.0, TF 2.0 was created to compete directly with Pytorch pythonic implementation (Keras based, Eager execution)." They further added, "Facebook at least on PyTorch has been delivering a quality product. Although for us running production pipelines TF is still ahead in many areas (GPU, TPU implementation, TensorRT, TFX and other pipeline tools) I can see Pytorch catching up on the next couple of years which by my prediction many companies will be running serious and advanced workflows and we may be able to see a winner there."

The named tensors implementation is being well received by the PyTorch community:

https://twitter.com/leopd/status/1182342855886376965
https://twitter.com/rasbt/status/1182647527906140161

These were some of the updates in PyTorch 1.3. Check out the official announcement by Facebook to know more.

PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more
PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Facebook open-sources PyText, a PyTorch based NLP modeling framework


Puppet announces the public beta of Project Nebula

Savia Lobo
10 Oct 2019
3 min read
Today, Puppet announced the public beta of Project Nebula at Puppetize PDX, a two-day event (October 9-10) featuring user-focused DevOps and infrastructure delivery talks and hands-on workshops.

Project Nebula is a simplified workflow automation tool for the continuous deployment of cloud-native applications and infrastructure. It is designed for teams that are adopting cloud-native and serverless technologies and need an end-to-end workflow management system.

Also read: Puppet's 2019 State of DevOps Report highlights that security integration into DevOps practices results in higher business outcomes

Why Project Nebula?

Puppet has worked closely with its private beta participants to understand their deployment workflows and pain points. On interviewing these participants, Puppet found that they want to adopt cloud-native technologies but face multiple challenges in adopting containers, serverless infrastructure, microservices, and observability for even simple cloud-native applications. One major roadblock is the lack of simple automation for composing multiple tools - for infrastructure provisioning, application deployment, and notifications - into an end-to-end deployment. Another is the lack of a cohesive platform that multiple teams can use to share workflows and best practices and build them into their own deployments.

"In-house efforts to build a deployment platform like this can take years, incur large maintenance and support costs, and often require specialized skill sets that many companies do not have today," the company states. Project Nebula tries to eliminate these roadblocks and gives teams a consistent, easy-to-use experience for deploying cloud-native apps in a safe, secure, and continuous manner.

Listen: Puppet's VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]

A few features in Project Nebula

With a focus on ease of use and improved productivity, Project Nebula provides a single place to build, provision, and deploy cloud-native applications. Other notable features include:

- Built-in example workflows that help users get started with their deployments, rather than starting from a blank slate.
- Support for 20+ of the most popular cloud-native deployment tools as configurable steps within your deployment, including Terraform, CloudFormation, Helm, Kubectl, Kustomize, and more.
- Intuitive visualization that provides a bird's-eye view of the entire deployment workflow.
- Easy-to-compose deployment workflows that are checked into a source control repository, eliminating the need to write messy, ad-hoc bash scripts.

Know more about Puppet's Project Nebula in detail on its official website.

Puppet launches Puppet Remediate, a vulnerability remediation solution for IT Ops
Puppet announces updates in a bid to help organizations manage their "automation footprint"
"This is John. He literally wrote the book on Puppet" - An Interview with John Arundel


Blizzard comes under fire after banning pro-player for expressing support for Hong Kong protests

Sugandha Lahoti
10 Oct 2019
6 min read
Update: The article has been updated to include Blizzard's press release about relaxing the ban on the pro player.

Blizzard has been under fire since last weekend after the game publisher issued a year-long ban to a Hearthstone player who expressed support for the Hong Kong protestors during a competition live stream. The incident occurred on Sunday, when Ng "Blitzchung" Wai Chung voiced support for the protesters in Hong Kong in a post-game interview, saying, "Liberate Hong Kong. Revolution of our age!"

The ban is effective from October 5th and forbids Blitzchung from participating in any tournaments for an entire year. Blizzard is also withholding any prize money he would have earned from competing in the tournament, and has terminated its contract with the two casters who were interviewing the competitor.

Explaining the reason behind the ban, Blizzard issued a statement: "Per the competition rule, players aren't allowed to do anything that brings [them] into public disrepute, offends a portion or group of the public, or otherwise damages [Blizzard's] image. While we stand by one's right to express individual thoughts and opinions, players and other participants that elect to participate in our esports competitions must abide by the official competition rules."

Game players, US politicians, and Blizzard employees are outraged

After the ban of the Hearthstone pro, Blizzard was on the receiving end of a major backlash from video game players, US politicians, and its own employees. On Tuesday, a small group of Blizzard employees walked out of work to protest the company's actions. The demonstration featured about 12-30 employees from multiple departments, who gathered around the Orc warrior statue in the center of the company's main campus in Irvine, California.

The Daily Beast spoke with a few employees. "The action Blizzard took against the player was pretty appalling but not surprising," said a longtime Blizzard employee. "Blizzard makes a lot of money in China, but now the company is in this awkward position where we can't abide by our values."

"I'm disappointed," another current Blizzard employee said. "We want people all over the world to play our games, but no action like this can be made with political neutrality."

US Senators Marco Rubio and Ron Wyden also chastised Blizzard's actions on Twitter. "Blizzard shows it is willing to humiliate itself to please the Chinese Communist Party," Senator Wyden tweeted. "No American company should censor calls for freedom to make a quick buck."

"Recognize what's happening here," Senator Rubio said on Twitter. "People who don't live in #China must either self-censor or face dismissal & suspensions. China using access to the market as leverage to crush free speech globally. Implications of this will be felt long after everyone in U.S. politics today is gone."

https://twitter.com/marcorubio/status/1181556058659135488

Blizzard's own forums and subreddits were also bombarded with angry messages denouncing the ban. The r/Blizzard subreddit went down for a few hours on Tuesday after the board was drowned in posts calling for players to boycott Blizzard and its games like World of Warcraft, Overwatch, and Hearthstone. On its Hearthstone board, a redditor, Hinz97, said in a post, "I play [Hearthstone] everyday, I climbed to Legend several times. I spent more than $10k. As a [Hong Konger], I quit [Hearthstone] without consideration."

"I've been playing since beta. Good riddance," redditor UltimaterializerX said.
"Blizzard CLEARLY only cares about the Chinese market. The censorship of art was bad enough. The censorship of human life is indefensible. Finding videos of what's going on in Hong Kong is easy and I suggest everyone do so. It's Tiananmen Square all over again."

https://twitter.com/Espsilverfire2/status/1182001007976423424

Mark Kern, team lead for vanilla World of Warcraft, tweeted, "This hurts. But until Blizzard reverses their decision on @blitzchungHS, I am giving up playing Classic WoW, which I helped make and helped convince Blizzard to relaunch. There will be no Mark of Kern guild after all."

Fortnite creator Epic Games released a statement saying that it will not ban players or content creators for political speech: "Epic supports everyone's right to express their views on politics and human rights. We wouldn't ban or punish a Fortnite player or content creator for speaking on these topics."

https://twitter.com/TimSweeneyEpic/status/1181933071760789504

Hong Kong protests began in June, and the tech industry has since been caught in the middle of the China-Hong Kong political tussle. In August, Chinese state-run media agencies were caught buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong protestors and their pro-democracy demonstrations as violent. After this revelation, Twitter banned 936 accounts managed by the Chinese state; Facebook removed seven Pages, three Groups, and five Facebook accounts involved in coordinated inauthentic behavior; and Google shut down 210 YouTube channels. More recently, Apple, after pressure from the Chinese government, banned a protest safety app that helps people track locations of the Hong Kong police, which angered many; a day later, amid the protests, Apple brought it back to the iOS Store. Yesterday, according to Quartz investigations editor John Keefe, Apple reportedly removed the Quartz application from the App Store at the request of the Chinese government. Quartz has been covering the Hong Kong protests in detail and has been blocked across all of mainland China.

Update as of Oct 11: After four days of mounting public pressure, Blizzard Entertainment published a press release partially relaxing the ban on the professional player who expressed support for the Hong Kong protestors during a competition live stream. The one-year ban on Ng "Blitzchung" has been changed to a six-month suspension. Additionally, the two Chinese broadcasters who had been fired are now on a six-month suspension from their jobs. Blizzard President J. Allen Brack also clarified that the decision was not influenced by China: "The specific views expressed by blitzchung were NOT a factor in the decision we made," Brack wrote. "I want to be clear: our relationships in China had no influence on our decision."

Apple bans HKmap.live, a Hong Kong protest safety app, from the iOS Store as it makes people 'evade law enforcement'
Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests
Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests

Get Ready for Open Data Science Conference 2019 in Europe and California

Sugandha Lahoti
10 Oct 2019
3 min read
Get ready to learn and experience the very latest in data science and AI with expert-led trainings, workshops, and talks at ODSC West 2019 in San Francisco and ODSC Europe 2019 in London. ODSC events are built for the community and feature the most comprehensive breadth and depth of training opportunities available in data science, machine learning, and deep learning. They also provide numerous opportunities to connect, network, and exchange ideas with data science peers and experts from across the country and the world.

What to expect at ODSC West 2019

ODSC West 2019 is scheduled to take place in San Francisco, California, from Tuesday, Oct 29, 2019, 9:00 AM to Friday, Nov 1, 2019, 6:00 PM PDT. This year, ODSC West will host several networking events, including the ODSC Networking Reception, Dinner and Drinks with Data Scientists, Meet the Speakers, Meet the Experts, and the Book Signings Hallway Track. Core areas of focus include Open Data Science, Machine Learning & Deep Learning, Research Frontiers, Data Science Kick-Start, AI for Engineers, Data Visualization, Data Science for Good, and Management & DataOps.

Here are just a few of the experts who will be presenting at ODSC:

- Anna Veronika Dorogush, CatBoost Team Lead, Yandex
- Sarah Aerni, Ph.D., Director of Data Science and Engineering, Salesforce
- Brianna Schuyler, Ph.D., Data Scientist, Fenix International
- Katie Bauer, Senior Data Scientist, Reddit, Inc
- Jennifer Redmon, Chief Data Evangelist, Cisco Systems, Inc
- Sanjana Ramprasad, Machine Learning Engineer, Mya Systems
- Cassie Kozyrkov, Ph.D., Chief Decision Scientist, Google
- Rachel Thomas, Ph.D., Co-Founder, fast.ai

Check out more of the conference's industry-leading speakers here. ODSC also hosts the Accelerate AI Business Summit, which brings together leading experts in AI and business to discuss three core topics: AI innovation, expertise, and management.

Don't miss out on the event. You can use code ODSC_PACKT right now to exclusively save 30% before Friday on your ticket to ODSC West 2019.

What to expect at ODSC Europe 2019

ODSC Europe 2019 is scheduled to take place in London, UK, from Tuesday, Nov 19, 2019 to Friday, Nov 22, 2019. The Europe talks/workshops schedule covers Thursday, Nov 21st and Friday, Nov 22nd, and is available to Silver, Gold, Platinum, and Diamond pass holders. The Europe trainings schedule covers Tuesday, November 19th and Wednesday, November 20th, and is available to Training, Gold (Wed Nov 20th only), Platinum, and Diamond pass holders.

Some talks scheduled to take place include ML for Social Good: Success Stories and Challenges; Machine Learning Interpretability Toolkit; Tools for High-Performance Python; The Soul of a New AI; Machine Learning for Continuous Integration; Practical, Rigorous Explainability in AI; and more.

ODSC has released a preliminary schedule with information on attending speakers and their training, workshop, and talk topics; the full schedule will be available soon. They've also recently added several excellent speakers, including:

- Manuela Veloso, Ph.D. | Head of AI Research, JP Morgan
- Dr. Wojciech Samek | Head of Machine Learning, Fraunhofer Heinrich Hertz Institute
- Samik Chandanara | Head of Analytics and Data Science, JP Morgan
- Tom Cronin | Head of Data Science & Data Engineering, Lloyds Banking Group
- Gideon Mann, Ph.D. | Head of Data Science, Bloomberg, LP

There are more chances to learn, connect, and share ideas at this year's event than ever before. Don't miss out.
Use code ODSC_PACKT right now to save 30% on your ticket to ODSC Europe 2019.


6 reasons why employers should pay for their developers' training and learning resources

Richard Gall
09 Oct 2019
7 min read
Developers today play a critical role in every business - the days of centralized IT being shut away providing clandestine support not only feel outdated, they're offensive too. But it's not enough for employers to talk about how important an ambitious and creative technical workforce is. They need to prove it by actively investing in the resources and training their employees need to stay relevant and up to date. That's not only the right thing to do, it's also a smart business decision: it attracts talent and ensures flexibility and adaptability.

Not convinced? Here are 6 reasons why employers should pay for the resources their development and engineering teams need.

Employers make money out of their developers' programming expertise

Let's start with the obvious - businesses make money from their developers' surplus labor value. That's the value of everything developers do - everything that they develop and build - that exceeds their labor cost (i.e. their salaries). While this might be a good argument to join a union, at the very least it highlights that employers should invest in the skills of their workforce. True, we're all responsible for our own career development, and we should all be curious and ambitious enough to explore new ideas, topics, and technologies, but it's absurd to think that employers have no part to play in developing the skills of the people on which they depend for revenue and profit.

Perhaps bringing up Karl Marx might not be the best way to make the case to your business if you're looking for some investment in training. But framing this type of investment in terms of your future contribution to the business is a good way to ensure that you get the resources you need and deserve as a software developer.

It levels the playing field: everyone should have access to the same resources

Conventional wisdom says that it's not hard to find free resources on technology topics. This might sound right, but it isn't strictly true - knowing where to look, and how to determine the quality and relevance of certain resources, is a skill in itself. Indeed, it might be one that not all developers have, especially if they're only starting out in their careers. Although employees who take personal learning seriously are valuable, employers need to ensure that everyone is on the same page when it comes to learning. Failure to do so can not only knock the confidence of more inexperienced members of the team, it can also entrench hierarchies. This can really damage the culture. It's vital, as a leader, that you empower people to be the best they can be.

This issue is only exacerbated when you bring premium resources into the mix. Some team members might be in a financial situation where they can afford one - maybe more - subscriptions to learning resources, as well as the newest books by experts in their field, while others might find it a little more difficult to justify spending money on learning materials. Ultimately, this isn't really any of your business. If a team has a set of resources that they all depend upon, access becomes a non-issue. While some team members will always be happy to pay for training and resources, providing a base level of support ensures that everyone is on the same page (maybe even literally).

Relying on free content is useful for problem solving - but it doesn't help with long-term learning

Whatever the challenges of relying on free resources, it would be churlish and wrong-headed to pretend they aren't a fixture of technology learning patterns.
Every developer and engineer will grow to use free resources to a greater or lesser extent, and each one will find their favored sites and approaches to finding the information they need. That's all well and good, but it's important to recognise that, for the most part, free resources are well suited to problem solving and short-term learning. That's just one part of the learning cycle. Long-term learning that is geared towards building individual skill sets and exploring new toolchains for different purposes requires more structure and support. Free resources - which may be opinion-based or unreliable - don't offer the consistency needed for this kind of development.

In some scenarios it might be appropriate to use online or in-person training courses - but these can be expensive and can even alienate some software developers. Indeed, sometimes they're not even necessary - it's much better to have an accessible platform or set of resources that people can return to and explore over a set period. The bonus here is that by investing in a single learning platform, it becomes much easier for managers and team leads to get transparency on what individuals are learning. That can be useful in a number of different ways, from how much time people are spending learning to what types of things they're interested in learning.

It's hard to hire new developer talent

Hiring talented developers and engineers isn't easy. Organizations that refuse to invest in their employees' skills are going to have to spend more time - and money - trying to attract the skilled developers they need. That makes investing in a quality learning platform or set of resources an obvious choice: it provides the foundations from which you can build a team that's prepared for the future.

But there's another dimension that's easy to ignore. When you do need to hire new developer talent, it does nothing for your brand as an employer if you can't demonstrate that you support learning and personal development. Of course the best candidates will spend time on themselves, but it's also a warning sign if I, as a potential employee, see a company that refuses to take skill development seriously. It tells me that not only do you not really care about me - it also indicates that you're not thinking about the future, period.

Read next: Why companies that don't invest in technology training can't compete

Investing in learning makes employees more adaptable and flexible

Change is the one constant in business. This is particularly true where technology is concerned. And while it's easy to make flexibility a prerequisite for job candidates, the truth is that businesses need to take responsibility for the adaptability and flexibility of their employees. If you teach them that change is unimportant, and that learning should be low on their list of priorities, you're soon going to find that they become the inflexible and uncurious employees that you wanted to avoid. It's all well and good depending on them to inspire change. But by paying for their learning resources, employers are taking a decisive step. It's almost as if you're saying: go ahead, explore, learn, improve. This business depends on it, so we're going to give you exactly what you need.

It's cost effective

Okay, this might not be immediately obvious - but if you're a company that does provide an allowance to individual team members, things can quickly get costly without you realising.
Say you have 4 or 5 developers who each decide how to spend a learning allowance. Yes, that gives them a certain degree of independence, but with the right learning platform - one that caters to multiple needs and preferences - you can save a significant amount of money.

Read next: 5 barriers to learning and technology training for small software development teams

Conclusion: it's the right thing to do and it makes business sense

There are a range of reasons why organizations need to invest in employee learning. But it boils down to two things: it's the right thing to do, and it makes business sense. It might be tempting to think that you can't afford to purchase training materials for your team. But the real question you should ask is: can we afford not to?

Learn more about Packt for Teams here.