Tech Guides - Programming

81 Articles

Developer's guide to Software architecture patterns

Sugandha Lahoti
06 Aug 2018
11 min read
Patterns are simplified and smarter solutions to recurring concerns and challenges in any field of importance. In software engineering, there are primarily design, integration, and architecture patterns. In this article, we will cover the need for software patterns and describe the most prominent software architecture patterns. This article is an excerpt from Architectural Patterns by Pethuru Raj, Anupama Raman, and Harihara Subramanian.

Why software patterns?

There are noteworthy transformations happening across the IT space, especially in software engineering. The complexity of modern software solutions keeps climbing as business expectations evolve. With complex software, not only does development become very difficult, but maintenance and enhancement also become tedious and time-consuming. Software patterns come as a soothing factor for software architects, developers, and operators.

Types of software patterns

Several newer types of patterns are emerging to cater to different demands. This section throws some light on them.

An architecture pattern expresses a fundamental structural organization, or schema, for complex systems. It provides a set of predefined subsystems, specifies their unique responsibilities, and includes the decision-enabling rules and guidelines for organizing the relationships between them. The architecture pattern for a software system illustrates the macro-level structure of the whole solution.

A design pattern provides a scheme for refining the subsystems or components of a system, or the relationships between them. It describes a commonly recurring structure of communicating components that solves a general design problem within a particular context. The design pattern for a software system prescribes the ways and means of building its components.

There are other patterns, too. The dawn of the big data era mandates distributed computing, and the monolithic, massive nature of enterprise-scale applications demands microservices-centric applications, where application services need to be found and integrated to give an integrated result and view; thus, there are integration patterns. Similarly, there are patterns for simplifying software deployment and delivery, and other complex activities are addressed through the smart leverage of simple as well as composite patterns.

Software architecture patterns

Let's look at some of the most prominent and dominant software architecture patterns.

Object-oriented architecture (OOA)

Objects are the fundamental and foundational building blocks of all kinds of software applications, so the object-oriented architectural style has become the dominant one for producing object-oriented software. Ultimately, a software system is viewed as a dynamic collection of cooperating objects, instead of a set of routines or procedural instructions. There are proven object-oriented programming methods and enabling languages, such as C++ and Java. The properties of inheritance, polymorphism, encapsulation, and composition provided by OOA come in handy for producing highly modular (highly cohesive and loosely coupled), usable, and reusable software. The object-oriented style is suitable if we want to encapsulate logic and data together in reusable components.
Complex business logic that requires abstraction and dynamic behavior can also make effective use of OOA.

Component-based assembly (CBA) architecture

Monolithic and massive applications can be partitioned into multiple smaller, interactive components. When components are found, bound, and composed, we get full-fledged software applications. CBA does not focus on issues such as communication protocols and shared state. Components are reusable, replaceable, substitutable, extensible, independent, and so on. Design patterns such as the dependency injection (DI) pattern or the service locator pattern can be used to manage dependencies between components and promote loose coupling and reuse. Such patterns are often used to build composite applications that combine and reuse components across multiple applications.

Aspect-oriented programming (AOP) aspects are another popular application building block; by deft maneuvering of this unit of development, different applications can be built and deployed. The AOP style aims to increase modularity by allowing the separation of cross-cutting concerns, and it includes programming methods and tools that support the modularization of concerns at the level of the source code.

Agent-oriented software engineering (AOSE) is a programming paradigm in which the construction of the software is centered on the concept of software agents. In contrast to object-oriented programming, which has objects (providing methods with variable parameters) at its core, agent-oriented programming has externally specified agents with interfaces and messaging capabilities at its core. Agents can be thought of as abstractions of objects, and exchanged messages are interpreted by receiving agents in a way specific to their class of agents.

Domain-driven design (DDD) architecture

Domain-driven design is an object-oriented approach to designing software based on the business domain, its elements and behaviors, and the relationships between them. It aims to enable software systems that are a correct realization of the underlying business domain by defining a domain model expressed in the language of business domain experts. The domain model can be viewed as a framework from which solutions can then be readied and rationalized. DDD is a good fit if we have a complex domain and wish to improve communication and understanding within the development team. It can also be an ideal approach for large, complex enterprise data scenarios that are difficult to manage using existing techniques.

Client/server architecture

This pattern segregates the system into two main applications, where the client makes requests to the server. In many cases, the server is a database with application logic represented as stored procedures. The pattern helps in designing distributed systems that involve a client system, a server system, and a connecting network. The main benefits of the client/server architecture pattern are:

- Higher security: all data gets stored on the server, which generally offers greater control of security than client machines.
- Centralized data access: because data is stored only on the server, access and updates to the data are far easier to administer than in other architectural styles.
- Ease of maintenance: the server system can be a single machine or a cluster of multiple machines, and the server application and the database can run on a single machine or be replicated across multiple machines to ensure easy scalability and high availability.
However, the traditional two-tier client/server architecture pattern has numerous disadvantages. The tendency to keep both application and data on the server can negatively impact extensibility and scalability, the server can be a single point of failure, and reliability is the main worry. To address these issues, the client/server architecture has evolved into the more general three-tier (or N-tier) architecture, which not only surmounts these issues but also brings forth a set of new benefits.

Multi-tier distributed computing architecture

Because the two-tier architecture is neither flexible nor extensible, multi-tier distributed computing architecture has attracted a lot of attention. Application components can be deployed on multiple machines (co-located or geographically distributed) and integrated through messages or remote procedure calls (RPCs), remote method invocations (RMIs), Common Object Request Broker Architecture (CORBA), Enterprise JavaBeans (EJBs), and so on. The distributed deployment of application services ensures high availability, scalability, manageability, and so on. Web, cloud, mobile, and other customer-facing applications are deployed using this architecture. Based on the business requirements and the application complexity, IT teams can choose the simple two-tier client/server architecture or the advanced N-tier distributed architecture to deploy their applications. These patterns simplify the deployment and delivery of software applications to their subscribers and users.

Layered/tiered architecture

This pattern, an improvement over the client/server pattern, is the most commonly used architectural pattern. Typically, an enterprise software application comprises three or more layers: a presentation/user interface layer, a business logic layer, and a data persistence layer. The presentation layer serves user interface applications (thick clients) or web browsers (thin clients); with the fast proliferation of mobile devices, mobile browsers are also attached to the presentation layer. Such tiered segregation comes in handy in managing and maintaining each layer independently, and the power of plug and play gets realized with this approach: additional layers can be fitted in as needed. There are model-view-controller (MVC) pattern-compliant frameworks that hugely simplify enterprise-grade and web-scale applications; MVC is a web application architecture pattern. The main advantage of the layered architecture is the separation of concerns: each layer can focus solely on its own role and responsibility. The layered and tiered pattern makes the application:

- Maintainable
- Testable
- Easy to assign specific and separate roles
- Easy to update and enhance layer by layer

This architecture pattern is good for developing web-scale, production-grade, and cloud-hosted applications quickly and in a risk-free fashion. When business and technology changes arrive, the layered architecture comes in handy for embedding newer things to meet varying requirements.

Event-driven architecture (EDA)

The world is becoming event-driven. Applications have to be sensitive and responsive, proactively, pre-emptively, and precisely. Whenever an event happens, applications have to receive the event information and plunge into the necessary activities immediately.
The request/reply notion gives way to the fire-and-forget tenet: communication becomes asynchronous, and there is no need for the participating applications to be available online all the time. EDA is typically based on an asynchronous, message-driven communication model to propagate information throughout an enterprise. It supports a more natural alignment with an organization's operational model by describing business activities as a series of events, and it does not bind functionally disparate systems and teams into the same centralized management model. EDA ultimately leads to highly decoupled systems; the common issues introduced by system dependencies are eliminated through the adoption of this proven pattern.

We have seen various forms of events used in different areas. There are business and technical events. Systems emit events about their status and condition, which can be captured and subjected to a variety of investigations in order to precisely understand the prevailing situation. Submitting a web form or clicking a hyperlink generates events to be captured. Incremental database synchronization mechanisms, RFID readings, email messages, short message service (SMS), instant messaging, and so on are all events not to be taken lightly. There are event processing engines and message-oriented middleware (MoM) solutions, such as message queues and brokers, to collect and store event data and messages. Millions of events can be collected, parsed, and delivered through multiple topics via these MoM solutions. As event sources/producers publish notifications, event receivers can choose to listen to or filter out specific events and make proactive decisions in real time on what to do next.

The EDA style is built on the fundamentals of event notification to facilitate immediate information dissemination and reactive business process execution. In an EDA environment, information can be propagated to all the services and applications in real time, enabling highly reactive enterprise applications. Real-time analytics is the new normal with the surging popularity of the EDA pattern.

Service-oriented architecture (SOA)

With the arrival of service paradigms, software packages and libraries are developed as collections of services. Services can run independently of the underlying technology and can be implemented in any programming or scripting language. Services are self-defined, autonomous, interoperable, publicly discoverable, assessable, accessible, reusable, and composable, and they interact with one another through messaging. There are service providers/developers and consumers/clients. Every service has two parts: the interface and the implementation. The interface is the single point of contact for requesting a service, and it gives the required separation between services: all deficiencies and differences of the service implementation are hidden by the interface. Precisely speaking, SOA enables application functionality to be provided as a set of services, and the creation of personal as well as professional applications that make use of those services. In short, SOA is for service enablement and service-based integration of monolithic and massive applications; the complexity of enterprise process/application integration gets moderated through the smart leverage of the service paradigm.
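To see the fire-and-forget, publish/subscribe idea in code, here is a minimal in-process sketch in Python. The topic name and handlers are invented for the example, and a real EDA deployment would route events through message-oriented middleware such as a queue or broker rather than an in-process dictionary.

```python
# Minimal in-process publish/subscribe sketch of the event-driven style.
# Topic names and handlers are hypothetical; production systems would use
# a broker (message queue, event bus) so producers and consumers stay
# decoupled across processes and machines.
from collections import defaultdict
from typing import Callable

subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    """Register a handler to be notified when events arrive on a topic."""
    subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    """Fire-and-forget: the producer emits the event and moves on."""
    for handler in subscribers[topic]:
        handler(event)

# Usage: one event, several independent reactions.
subscribe("order.placed", lambda e: print(f"billing: invoice order {e['id']}"))
subscribe("order.placed", lambda e: print(f"shipping: pack order {e['id']}"))
publish("order.placed", {"id": 42})
```

The producer never waits for, or even knows about, its consumers, which is exactly the decoupling the EDA pattern promises.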
To summarize, we detailed the most prominent and dominant software architecture patterns and how they are used for producing and running enterprise-class, production-grade software applications. To learn more about patterns associated with object-oriented, component-based, client/server, and cloud architectures, grab the book Architectural Patterns.

Why we need Design Patterns?
Implementing 5 Common Design Patterns in JavaScript (ES8)
An Introduction to Node.js Design Patterns

Founder & CEO of Odoo, Fabien Pinckaers discusses the new Odoo 13 framework

Vincy Davis
04 Nov 2019
6 min read
Odoo, formerly known as OpenERP (Enterprise Resource Planning), is a popular open source business application development platform. It comes with many features, such as a powerful GUI, performance optimization, and integrated in-app purchase features, and companies use it to manage and organize their workloads, such as materials and warehouse management, human resources, finance, accounting, and sales. With a fast-growing community, Odoo is used by companies of all sizes.

At the Odoo Experience 2019 event conducted earlier this month, the Odoo team announced the release of Odoo 13, the latest version of its all-in-one business software. The release contains an abundance of major and minor improvements, including new features like a sales coupons and promotions module, MRP subcontracting, a website form builder, and a skill management module. At the event, founder and CEO of Odoo, Fabien Pinckaers, explained the many concepts behind the new Odoo framework, which he says is one of the best improvements in Odoo 13.

New to Odoo? If you are a beginner in Odoo, read our book Working with Odoo 12 - Fourth Edition, written by Greg Moss, to learn how to start a new company database in Odoo and understand the basics of Odoo sales management. You can also master customer relationship management in Odoo for setting up a modern business environment. The book also takes you through the OpenChatter feature, with notes and messages associated with Odoo documents, and shows how to use Odoo's API to integrate with other applications.

The Odoo 13 framework is also called an in-memory ORM because it keeps far more of its work in memory than earlier versions did. When employed for operational measures, it runs on average 4.5 times faster than earlier versions of Odoo.

Key features of the Odoo 13 framework

Simplified cache process

Pinckaers says that in the new framework the cache has been simplified, as stored fields now need only a single value, while a non-stored field's computed value depends on the keywords present in the context (e.g., translatable and context). In version 12, most fields did not need a cache, so there was only one global cache, with an exception for context-dependent fields. It also had a new attribute for a multi-line inventory where the projects depend on "way roads". The difficulty in that version was that when creating a field, users had to select the cache value, and if the context of the field changed, they had to specify the new cache value again. This step is made simpler in version 13, as the user now needs to specify the value of the cache only once. "It seems simple but actually in the business code we're passing it to all the fields at the same time," asserts Pinckaers. The simplified cache process also reduces the memory accesses made by the code.

In-memory updates

When specifying various test field values in the earlier versions, users had to update the validation value each time, making it a time-consuming process. To overcome this, the Odoo team has moved all data transactions into memory in the new version. Consequently, in Odoo 13, when a field value is assigned, it is put in the cache, and when a field value needs to be read, it is taken from the cache itself.
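A minimal sketch of what such an in-memory computed field looks like in Odoo's ORM follows. The model and field names are hypothetical, while the imports, decorators, and field types are Odoo's standard API.

```python
# Hypothetical Odoo 13 model: 'margin' is a stored computed field. Assigning
# to its dependencies only touches the in-memory cache; recomputation is
# delayed and written back in a batch when the transaction is flushed.
from odoo import api, fields, models

class SaleOrderLine(models.Model):
    _inherit = 'sale.order.line'

    margin = fields.Float(compute='_compute_margin', store=True)

    @api.depends('price_unit', 'product_id.standard_price')
    def _compute_margin(self):
        for line in self:
            line.margin = line.price_unit - line.product_id.standard_price
```

Reads are then served from the cache, and, as described next, pending recomputations can be pushed to the database in one batch with self.flush().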
To manage all the dependencies in Python, Pinckaers demonstrated that users should always:

- Use the inverse field instead of an SQL query
- Avoid using SELECT, as the implementation of the compute will read the same object
- On create(), set one2many fields to []

Delaying computed fields for faster transactions

To delay a computed field such as line.product_quantity or line.discount in the preceding Odoo versions, a user had to compute the dependency value for every "for line in order" command, and only once the transaction was completed were the values recomputed and written. This process is made easier in Odoo 13: the user can mark all the line commands for recomputation and use the self.flush() command to compute the values after the transaction is completed, so all the computation happens in memory. According to Pinckaers, this support will help users with more than 100 customers, as it makes the process much faster and simpler.

Optimized dependency tree to reduce Python and SQL computations

Pinckaers took the "change order" example to demonstrate how Odoo 13 has a clean dependency tree: if the pricelist of an order is changed, the total cost of the order also changes indirectly, thanks to the indirect dependency between the pricelist identity and the total cost field. In the earlier versions, due to the recursive nature of the dependencies, each order line entailed the order ID of the field, which sometimes required reading more than 100 lines of the list just to get the order ID. In Odoo 13, this prolonged process is replaced with a more optimized dependency tree, so the user can get the order ID directly from the dependency tree, without the Python and SQL computations.

Improvements in browse() optimization

The major improvement in Odoo 13's browse() optimization is a mechanism that avoids multiple cache-format conversions. In the previous versions, users had to read the SQL queries, convert them all to the cache format, and then put them in the cache: three commands just to read the data, making the process very tedious. With the latest version, the prefetch mechanism directly saves data that is already in a matching format to memory. "But if the format is different, then we have to apply everything a color conversion method. As Python is extremely slow," Pinckaers says, "applying a dictionary that we see from outside the cache" makes the process faster, because a C implementation can be used to convert the data directly into the cache format.

You can watch the full video to see Pinckaers' demonstration of code cleanup and Python optimization. If you want to use Odoo to build enterprise applications and set up the functional requirements for your business, read our book Working with Odoo 12 - Fourth Edition, written by Greg Moss, to learn how to use the MRP module to create, process, and schedule manufacturing and production orders. The book will also give you in-depth knowledge of the business intelligence required in Odoo and its architecture, and unveil how to customize Odoo to meet the specific needs of your business.
Creating views in Odoo 12 – List, Form, Search [Tutorial]
How to set up Odoo as a system service [Tutorial]
Handle Odoo application data with ORM API [Tutorial]
Implement an effective CRM system in Odoo 11 [Tutorial]
“Everybody can benefit from adopting Odoo, whether you’re a small start-up or a giant tech company” – An interview with Odoo community hero, Yenthe Van Ginneken

Mark Reinhold on the evolution of Java platform and OpenJDK

Sugandha Lahoti
02 Aug 2018
5 min read
Yesterday, Mark Reinhold, chief architect of the Java Platform Group and tech lead at OpenJDK, talked about both the short-term and long-term technical roadmap of Java and the JDK. He was speaking at the ongoing OpenJDK Committers' Workshop, which meets twice a year to discuss the state of the OpenJDK Community and the JDK technical roadmap.

After decades as one of the world's most popular programming languages, you'd be forgiven for thinking Java might be slowing down, especially with younger languages like Kotlin jostling for position in the popularity stakes. However, there's plenty of life in it yet. Mark explained what Java's future might look like and how developers can influence its growth for the better.

Who is in charge of the future of Java and OpenJDK?

Mark believes that the success of the Java platform depends on contributors focusing on the big picture. The leaders who guide the platform are not merely developers interested only in writing code or developing new features; the true leaders are what Mark likes to call "stewards". Stewards are people who assume responsibility for overseeing and protecting something considered worth caring for and preserving: they try to preserve the past while evolving into the future. A developer is considered a steward if they demonstrate three key qualities:

- Deep knowledge of at least one key area.
- Breadth of care across the platform: they think from time to time about the entire platform and how the whole thing fits together.
- Empathy: the ability to put themselves in the minds of ordinary developers who use the platform rather than work on it.

In the case of OpenJDK, stewards are effectively in charge of the development of the platform. They are led by Mark Reinhold, supported by John Rose for the Java Virtual Machine and Brian Goetz for the language and libraries. Beyond them, many other developers who show these three qualities contribute to stewardship as part of their day-to-day work. Every one of them has demonstrated a deep, long-term track record of expertise in at least one area, combined with breadth of care for the entire platform and the ability to empathize with ordinary developers.

Stewards ensure reliability and compatibility

The stewardship of the Java platform is guided by two key values: first, thinking about long-term goals and working to balance conservation with innovation; second, preserving the values of readability and compatibility.

Readability is essential to maintainability. You don't think about code from a short-term perspective; thinking about the long-term reliability of the code you're writing is vital, not least because it makes life easier for other people using the software in the future.

Compatibility is similar. It's about recognizing that software doesn't exist in a vacuum; it exists in an ecosystem of tools and developers. Several different kinds of compatibility show what this means in practice:

- Source: existing code continues to compile.
- Binary: existing code continues to link at run time.
- Behavioral: existing APIs continue to behave within the bounds of their specifications.
- Migration: a new feature can be adopted incrementally.
- Intellectual: new features are built on existing knowledge; selective features are added but look like they have been there all along.

In short, the platform's stewards strive to balance conservation and innovation.
It's only through balance that the project can maintain its core values of readability and compatibility.

How stewards guide the Java platform

As Mark pointed out, it can take considerable solitary thinking, maybe months or years, before an idea takes off. Even then, it needs to be discussed intensively with other stewards. The fruits of these discussions surface in two ways that ensure visibility and transparency:

- New JEPs in the JEP process
- New OpenJDK projects that explore a problem area in depth, eventually generating more JEPs that later wind up as features

Transparency is essential, and anyone is free to appeal a decision they don't like. In fact, if you don't agree with a decision the JDK project lead makes, you are free to appeal to the OpenJDK Governing Board.

How you can influence the evolution of Java

All developers, external contributors, and organizations have the opportunity to influence the direction of the Java platform. The degree of that influence is determined by the degree of contributions made in the JDK community on a meaningful and ongoing basis: detailed bug reports, constructive critiques, bug fixes, small enhancements, and entire non-trivial JEPs. If you only participate in order to serve your own or your employer's narrow technical interests, you are unlikely to gain much influence. However, if you deliver a strong track record of serious, consistent contributions over a long period of time, your influence will grow quite large, and you might even become a steward yourself.

The OpenJDK community has been going strong over the past years under the leadership of the Java stewards. You can watch the entire conference on YouTube for a review of life in the OpenJDK Community and a quick look at what's ahead for the Java platform.

Oracle announces a new pricing structure for Java
Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
5 Things you need to know about Java 10

The oldest programming languages in use today

Antonio Cucciniello
11 Jul 2017
5 min read
Today, we are going to discuss some of the oldest, most established programming languages still in use. Some developers may be surprised to learn that many of these languages surpass them in age, in a world where technology, especially in development, advances at such a rapid rate. But then, old is gold, after all. So, in age order, let's present the oldest programming languages in use today.

C

The C language was created in 1972 (it's not that old, okay). C is a lower-level, general-purpose language based on an earlier language called B (do you see a trend here?). It is a parent language from which many later programming languages derive, such as C#, Java, JavaScript, Perl, PHP, and Python. It is used in many applications that must interface with hardware or work directly with memory.

C++

Pronounced see-plus-plus, C++ was developed 11 years later, in 1983. It is very similar to C; in fact, it is often considered an extension of C. It added concepts such as classes, virtual functions, and templates. It is more of an intermediate-level language that can be used at a lower or higher level, depending on the application, and it is also known for its use in low-latency applications.

Objective-C

Around the same time C++ was released to the public, Objective-C was created. If you took an educated guess from the name and said it would be another extension of C, you'd be right: this version was meant to be an object-oriented version of C (there's a lot in a name, clearly). It is used, most famously, by Apple. If you are a Mac or iOS user, your iPhone or Mac applications were most likely developed with Objective-C (until the recent move to Swift).

Python

We are going to take a quick jump ahead to the '90s for this one. In 1991, the Python programming language was released, though it had been in development since the late '80s. It is a dynamically typed, object-oriented language often used for scripting and web applications, usually with one of its frameworks, like Django or Flask, on the backend. It is one of the most popular programming languages in use today.

Ruby

In 1993, Ruby was released. Today you have probably heard of Ruby on Rails, which is primarily used to create the backend of web applications in Ruby. Unlike the many languages derived from C, Ruby was influenced by older languages such as Perl and Lisp. The language was designed for productive and fun programming, by making it closer to human needs rather than machine needs.

Java

Two years later, in 1995, Java was developed. This is a high-level language derived from C. It is famous for its use in web applications and as the language for developing Android applications and the Android OS. It used to be the most popular language a few years ago, but its popularity and usage have declined somewhat.

PHP

In the same year Java was developed, PHP was born. It is an open source programming language developed for creating dynamic websites, and it is also used for server-side web development. Its usage is declining, but it is still in use today.

JavaScript

That same year (yup, '95 was a good year for programming, not so much for fans of Full House), JavaScript was brought to the world. Its purpose was to be a high-level language that added functionality to web pages.
Today, it is sometimes used as a scripting language, as well as on the backend of applications thanks to Node.js. It is one of the most popular and widely used programming languages today.

Conclusion

That was our brief history lesson on some still-in-use programming languages. Even though some of them are 20, 30, even over 40 years old, they are used by thousands of developers daily. They have a variety of uses, from lower level to higher level, from web applications to mobile applications. Do you feel there is a need for newer languages, or are you happy with what we have? If you have any favorites, let us know which one and why!

About the author

Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js) from New Jersey. His most recent project, Edit Docs, is an Amazon Echo skill that lets users edit Google Drive files using their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello and on GitHub here: https://github.com/acucciniello

What are APIs? Why should businesses invest in API development?

Packt Editorial Staff
25 Jul 2019
9 min read
Application Programming Interfaces (APIs) are like doors that provide access to information and functionality in other systems and applications. APIs share many characteristics with doors; for example, they can be as secure and closely monitored as required. APIs can add value to a business by allowing it to monetize information assets and comply with new regulations, and they enable innovation by simply providing access to business capabilities previously locked in old systems. This article is an excerpt from the book Enterprise API Management written by Luis Weir, which explores the architectural decisions, implementation patterns, and management practices for successful enterprise APIs. In this article, we'll define the concept of APIs and see what value APIs can add to a business.

APIs, however, are not new. In fact, the concept goes way back and has been present since the early days of distributed computing. The term as we know it today, though, refers to a much more modern type of API, known as REST or web APIs.

The concept of APIs

Modern APIs started to gain real popularity when, in the same year of their inception, eBay launched its first public API as part of its eBay Developers Program. eBay's view was that by making most of its website functionality and information accessible via a public API, it would not only attract but also encourage communities of developers worldwide to innovate by creating solutions using the API. From a business perspective, this meant that eBay became a platform for developers to innovate on, and in turn eBay would benefit from reaching new users it perhaps couldn't have reached before.

eBay was not wrong. In the years that followed, thousands of organizations worldwide, including well-known brands such as Salesforce.com, Google, Twitter, Facebook, Amazon, and Netflix, adopted similar strategies. In fact, according to programmableweb.com (a well-known public API catalogue), the number of publicly available APIs has been growing exponentially, reaching over 20,000 as of August 2018.

Figure 1: Public APIs as listed in programmableweb.com in August 2018

It may not sound like much, but considering that each listed API represents a door to an organization's digital offerings, we're talking about thousands of organizations worldwide that have already opened their doors to new digital ecosystems, where APIs have become the product these organizations sell and developers have become their buyers.

Figure: Digital ecosystems enabled by APIs

In such digital ecosystems, communities of internal, partner, or external developers can rapidly innovate by simply consuming these APIs to do all sorts of things: from offering hotel and flight booking services using the Expedia API, to providing educational solutions that make sense of the space data available through the NASA API. There are ecosystems where business partners can easily engage in business-to-business transactions, to resell or purchase goods electronically, without having to spend on Electronic Data Interchange (EDI) infrastructure, and ecosystems where an organization's internal digital teams can easily innovate because key enterprise information assets are already accessible.

So, why should businesses care about all this? There is, in fact, not one answer but several, as described in the following sections.

APIs as enablers for innovation and bimodal IT

What is innovation?
According to a common definition, innovation is the process of translating an idea or invention into a good or service that creates value, or for which customers will pay. In the context of businesses, according to an article by HBR, innovation manifests itself in two ways:

- Disruptive innovation: the process whereby a smaller company with fewer resources is able to successfully challenge established incumbent businesses.
- Sustaining innovation: when established businesses (incumbents) improve their goods and services in the eyes of existing customers. These improvements can be incremental advances or major breakthroughs, but they all enable firms to sell more products to their most profitable customers.

Why is this relevant? It is well known that established businesses struggle with disruptive innovation; the Netflix vs Blockbuster example reminds us of this fact. By the time disruptors catch up with an incumbent's portfolio of goods and services, they can do so with lower prices, better business models, lower operating costs, and far more agility and speed in introducing new or enhanced features. At that point, sustaining innovation is not good enough to respond to the challenge.

With all the recent advances in technology and the internet, the rate at which disruptive innovation challenges incumbents has only grown. Therefore, for established businesses to endure the challenge put upon them, they must somehow also become disruptors. The same HBR article describes how to achieve this from a business standpoint. From a technology standpoint, however, unless the several systems that underpin a business are enabled to deliver such disruption, the exercise will likely fail no matter what is done on the business side.

Perhaps by mere coincidence, or by true acknowledgment of this, Gartner introduced the concept of bimodal IT in December 2013, and the concept is now mainstream. Gartner defined bimodal IT as the following:

"The practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed."

Figure: Gartner's bimodal IT

According to Gartner, Mode 1 (or slow) IT organizations focus on delivering core IT services on top of more traditional, hard-to-change systems of record, which in principle are changed and improved in longer cycles and are usually managed with long-term, waterfall project mechanisms. For Mode 2 (or fast) IT organizations, the main focus is agility and speed; they act more like a startup (or digital disruptor, in HBR terms) inside the same enterprise.

What is often misunderstood, however, is how fast IT organizations can innovate disruptively when most of the information assets that bring context to any innovation reside in backend systems, and any access to them has to be delivered by the slower IT sibling. This dilemma means the speed of innovation is constrained by the speed at which access to core information assets can be delivered. As the saying goes, where there's a will there's a way: APIs can be implemented as the means for the fast IT to access core information assets and functionality without the intervention of the slow IT.
By using APIs to decouple the fast IT from the slow IT, innovation can occur more easily. However, as with everything, it is easier said than done. To achieve such organizational decoupling with APIs, organizations should first build an understanding of which information assets and business capabilities are to be exposed as APIs, so the fast IT can consume them as required. This understanding must also articulate when different assets are required and by whom, so the creation of APIs can be properly planned for and delivered. Luckily, organizations that already have a mature service-oriented architecture (SOA) will probably have some of this work in place; organizations without such luck should plan for this activity as a fundamental component of their digital strategy. The remaining question is: which team is responsible for defining and implementing such APIs, the fast IT or the slow IT? Although the long answer is addressed throughout the chapters of this book, the short answer is neither and both: it requires a multi-disciplinary team of people, with the right technology capabilities available to them, so they can incrementally API-enable the existing technology landscape based on business-driven priorities.

APIs to monetize information assets

Many experts in the industry concur that an organization's most important asset is its information. In fact, a recent study by the Massachusetts Institute of Technology (MIT) suggests that data is the single most important asset for organizations:

"Data is now a form of capital, on the same level as financial capital in terms of generating new digital products and services. This development has implications for every company's competitive strategy, as well as for the computing architecture that supports it."

If APIs act as doors to such assets, then APIs also give businesses an opportunity to monetize them. Some organizations are already doing so: according to another article by HBR, 50% of the revenue that Salesforce.com generates comes from APIs, while eBay generates about 60% of its revenue through its API. This is perhaps not a huge surprise, given that both organizations were pioneers of the API economy.

Figure: The API economy in numbers

Even more striking is the case of Expedia. According to the same article, 90% of Expedia's revenue is generated via APIs, which basically means that Expedia's main business is to indirectly sell electronic travel services via its public API.

However, it's not all that easy. According to the same MIT study, most CEOs of Fortune 500 companies don't yet fully acknowledge the value of APIs. An intrinsic reason could be the lack of understanding of, and visibility over, how data is currently being (or not being) used. Assets that sit hidden in systems of record, accessed only via traditional integration platforms, will in most cases give the business no insight into how information is being used and the value it adds. APIs, on the other hand, are better suited to providing insight into how, by whom, when, and why information is being accessed, giving the business the ability to make better use of its information, for example, to determine which assets have the best capital potential.

In this article we provided a short description of APIs and how they act as an enabler of digital strategies.
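As a small, concrete illustration of what consuming such a public web API looks like, the sketch below calls NASA's Astronomy Picture of the Day service, one of the public APIs mentioned above. The endpoint and DEMO_KEY come from NASA's public API documentation; everything else is an invented example.

```python
# Minimal sketch: consuming a public REST API (NASA's Astronomy Picture
# of the Day). Endpoint and DEMO_KEY are from NASA's public docs; the
# printed fields are part of the documented APOD response.
import requests

response = requests.get(
    "https://api.nasa.gov/planetary/apod",
    params={"api_key": "DEMO_KEY"},  # register a free key for real use
    timeout=10,
)
response.raise_for_status()

apod = response.json()
print(apod["date"], "-", apod["title"])
```

One HTTPS request, one JSON response: that is the entire "door" through which NASA exposes this particular information asset.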
Define the right organisation model for business-driven APIs with Luis Weir’s upcoming release Enterprise API Management.

To create effective API documentation, know how developers use it, says ACM
GraphQL API is now generally available
Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more

A five-level learning roadmap for Functional Programmers

Sugandha Lahoti
12 Apr 2019
4 min read
The following guide serves as an excellent learning roadmap for functional programming and can be used to track your level of knowledge. It was developed for the Fantasyland Institute of Learning for the LambdaConf conference, and it was designed for statically typed functional programming languages that implement category theory. This post is extracted from the book Hands-On Functional Programming with TypeScript by Remo H. Jansen, in which you will learn the pros, cons, and core principles of functional programming in TypeScript.

The roadmap describes five levels of difficulty: Beginner, Advanced Beginner, Intermediate, Proficient, and Expert. Languages such as Haskell support category theory natively, but we can take advantage of category theory in TypeScript by implementing it or by using third-party libraries. Not all the items in the list are 100% applicable to TypeScript due to language differences, but most of them are.

Beginner

To reach the beginner level, you will need to master the following concepts and skills.

Concepts:
- Immutable data
- Second-order functions
- Constructing and destructuring
- Function composition
- First-class functions and lambdas

Skills:
- Use second-order functions (map, filter, fold) on immutable data structures
- Destructure values to access their components
- Use data types to represent optionality
- Read basic type signatures
- Pass lambdas to second-order functions

Advanced beginner

To reach the advanced beginner level, you will need to master the following concepts and skills.

Concepts:
- Algebraic data types
- Pattern matching
- Parametric polymorphism
- General recursion
- Type classes, instances, and laws
- Lower-order abstractions (equal, semigroup, monoid, and so on)
- Referential transparency and totality
- Higher-order functions
- Partial application, currying, and point-free style

Skills:
- Solve problems without nulls, exceptions, or type casts
- Process and transform recursive data structures using recursion
- Use functional programming in the small
- Write basic monadic code for a concrete monad
- Create type class instances for custom data types
- Model a business domain with abstract data types (ADTs)
- Write functions that take and return functions
- Reliably identify and isolate pure code from impure code
- Avoid introducing unnecessary lambdas and named parameters

Intermediate

To reach the intermediate level, you will need to master the following concepts and skills.

Concepts:
- Generalized algebraic data types
- Higher-kinded types
- Rank-N types
- Folds and unfolds
- Higher-order abstractions (category, functor, monad)
- Basic optics
- Existential types
- Embedded DSLs using combinators

Skills:
- Implement efficient persistent data structures
- Implement large functional programming applications
- Test code using generators and properties
- Write imperative code in a purely functional way through monads
- Use popular purely functional libraries to solve business problems
- Separate decisions from effects
- Write a simple custom lawful monad
- Write production medium-sized projects
- Use lenses and prisms to manipulate data
- Simplify types by hiding irrelevant data with existentials

Proficient

To reach the proficient level, you will need to master the following concepts and skills.

Concepts:
- Codata
- (Co)recursion schemes
- Advanced optics
- Dual abstractions (comonad)
- Monad transformers
- Free monads and extensible effects
- Functional architecture
- Advanced functors (exponential, profunctors, contravariant)
- Embedded domain-specific languages (DSLs) using generalized algebraic datatypes (GADTs)
- Advanced monads (continuation, logic)
- Type families, functional dependencies (FDs)

Skills:
- Design a minimally powerful monad transformer stack
- Write concurrent and streaming programs
- Use purely functional mocking in tests
- Use type classes to modularly model different effects
- Recognize type patterns and abstract over them
- Use functional libraries in novel ways
- Use optics to manipulate state
- Write custom lawful monad transformers
- Use free monads/extensible effects to separate concerns
- Encode invariants at the type level
- Effectively use FDs/type families to create safer code

Expert

To reach the expert level, you will need to master the following concepts and skills.

Concepts:
- High performance
- Kind polymorphism
- Generic programming
- Type-level programming
- Dependent types, singleton types
- Category theory
- Graph reduction
- Higher-order abstract syntax
- Compiler design for functional languages
- Profunctor optics

Skills:
- Design a generic, lawful library with broad appeal
- Prove properties manually using equational reasoning
- Design and implement a new functional programming language
- Create novel abstractions with laws
- Write distributed systems with certain guarantees
- Use proof systems to formally prove properties of code
- Create libraries that do not permit invalid states
- Use dependent typing to prove more properties at compile time
- Understand deep relationships between different concepts
- Profile, debug, and optimize purely functional code with minimal sacrifices

Summary

This guide should be a good resource for your future functional programming learning efforts. Read more on this in our book Hands-On Functional Programming with TypeScript.

What makes functional programming a viable choice for artificial intelligence projects?
Why functional programming in Python matters: Interview with best selling author, Steven Lott
Introducing Coconut for making functional programming in Python simpler

Microsoft’s GitHub acquisition is good for the open source community

Pavan Ramchandani
19 Jul 2018
6 min read
Microsoft buying GitHub is "good news" for open source. - Jim Zemlin, Executive Director of the Linux Foundation

Unless you have been living under a rock, you will have heard about software giant Microsoft's acquisition of the open source platform giant GitHub for $7.5 billion. Since the announcement a few weeks ago, discussions in the open source community have heated up regarding the future of open source. The acquisition has seen a surge in developers migrating to rival version control platforms like BitBucket and, mostly, GitLab. This will affect GitHub's user base and, in turn, contribution to the platform, which is the primary source of funding that keeps any open source service alive.

This goes to show how difficult it is to create a great product for developers and still make money. Microsoft has created great products for enterprises and has made money in the process. As such, this acquisition is one worth watching as it transforms both entities. The common fear among developers is that Microsoft will exploit the limitations inherent to an open source platform and inject its subscription model into GitHub to make it profitable; the steep price Microsoft paid for GitHub does, after all, need to be recovered. However, it may not be that straightforward. Many believe it's not the platform's monetizing potential but its access to the user base that Microsoft is most interested in, and many also believe Microsoft has the potential to resurrect GitHub and revolutionize the open source movement. Let us explore some reasons why this acquisition could be fruitful for the developer community.

GitHub's losses have been significant

GitHub had reportedly been suffering losses, said to include a $66 million loss in 2016. The software industry is a fierce eat-or-get-eaten jungle, and losing out to giant companies or emerging startups is a common fear: there is always an alternative tool for every developer need, as the software market relentlessly works to make things cheaper while offering variety, and startups reach the deflection point sooner in their operation cycle. The GitHub community is the platform's greatest strength and the reason the platform has remained operational through difficult times, but there were regular internal frictions at the management level. The strife became apparent when reports emerged of developers feeling ignored by GitHub management, and founder Chris Wanstrath had to come out and address reports of a toxic environment last year. With Microsoft buying GitHub, there will be massive cash flow for all the projects in development, and management will be streamlined under Nat Friedman, announced as the head of GitHub operations. Nat's successful history of leading open source projects such as Xamarin gives many hope that, this time around, Microsoft really does mean well for GitHub.

The Azure cloud advantage for GitHub

One of the key challenges GitHub has faced lately is scaling its infrastructure smoothly without adversely impacting users; outages have become a common occurrence that most GitHub users are familiar with. Microsoft has a strong suite of cloud platform and services in the form of Azure. GitHub users can expect a native experience of the Azure stack as part of the integration with GitHub.
This integration will further enhance collaboration on the GitHub platform for developers and advance the GitHub ecosystem.

Microsoft can integrate GitHub into its enterprise offerings

GitHub has, in the last few years, attempted to extend its reach into the enterprise market with various business offerings, but these were limited to private repositories for a fee. Microsoft, on the other hand, has been a leader in providing enterprise tools and venturing into the subscription market. This acquisition will excite brand-loyal enterprises already using Microsoft suites: imagine the new clientele GitHub now has access to thanks to Microsoft. Just as Microsoft bundled Skype with its Office 365 suite, it is easy to imagine similar offerings being designed for enterprises with GitHub at their center. Like Excel, GitHub could end up as the default version control tool that enterprises use to build new projects and prototype ideas, open source or otherwise. In exchange, GitHub could be Microsoft's ace up its sleeve in strengthening its ties with the open source community, putting Microsoft in a position to inject innovative strategies into the community.

Microsoft's push toward open source projects

Microsoft has plunged head first into open sourcing projects in recent years. The push covers not only its experimental projects but also successful enterprise tools like .NET Core and Visual Studio Code. Historically, Microsoft took a lot of heat from the open source community for opposing the Linux model, but the recent paradigm shift at Microsoft, with a change in leadership and vision, is focused on working with the community while doing business with enterprises. At the end of last year, Microsoft joined the Linux Foundation and went platinum with the Open Source Initiative. TypeScript is a fully open source language that sees regular updates from Microsoft; it is now an established language for web development and is managed better than some other open source languages. TypeScript is also fully hosted on GitHub for developers to improve on. This indicates that Microsoft has been able to reach out to the community and can operate open source projects without necessarily commercializing them.

Conclusion

Microsoft buying GitHub is not necessarily bad. The tech giant has been one of the biggest contributors to GitHub, with projects like Visual Studio Code and TypeScript. While the panic is understandable, considering Microsoft's past strategies to counter the open source model in its early days, recent activities at Microsoft, especially under the leadership of Satya Nadella, suggest a paradigm shift in its approach to serving the IT market. You can dislike Microsoft for being a profit-driven company, but there is no denying that it was one of the pioneers of the modern software industry and, more importantly, may be the bitter pill GitHub needs to get out of an ever-growing loss-making sinkhole. Microsoft understands software and is capable of doing open source the right way, and with more efficiency. The acquisition was arguably inevitable to sustain the platform and scale it to serve the increasing demands of the developer market. What Microsoft must bear in mind while revamping GitHub's policies and business model is that its greatest challenge and its greatest asset is the paradox of this alliance itself.
As GitHub gets more profit-conscious, Microsoft must get more community-centric to ensure an equilibrium is reached where developers can thrive on a platform that provides a great development and community experience.

Read next:
The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab
GitHub for Unity 1.0 is here with Git LFS and file locking support
Microsoft releases Open Service Broker for Azure (OSBA) version 1.0

What are lightweight Architecture Decision Records?

Richard Gall
16 May 2018
4 min read
Architecture Decision Records (ADRs) document all the decisions made about software. Every change is recorded in a plain text file sitting inside a version control system (like GitHub). The record should be a complement to the information you can find in a version control system. The ADR provides context and information around every decision made about a piece of software.

Why are lightweight Architecture Decision Records needed?

We are always making decisions when we build software. Even the simplest piece of software will have required the engineer to take a number of different decisions. Often these decisions aren't obvious. If you've ever had to work with code written by someone else you're probably familiar with this sort of situation. You might have even found that when you come across someone else's code, you need to make a further decision. Either you can simply accept what has been written, and merely surmise and assume why it has been done in the way that it has, or you can decide to change it, based on your own judgement. Neither option is ideal. This is what Michael Nygard identified in this blog post in 2011, which is when the concept of Architecture Decision Records first emerged.

An ADR should prevent situations like this arising. That makes life easier for you. More importantly, it should mean that every decision is transparent to everyone involved. So, instead of blindly accepting something or immediately changing it, you can simply check the Architecture Decision Record. This will then inform how you proceed. Perhaps you need to make a change. But perhaps you also now understand the context of why something was built in the way it was. Any questions you might have should be explicitly answered in the architecture decision record. So, when you start asking yourself why has she done it like that?, instead of floundering helplessly, you can find the answer in the ADR.

Why lightweight Architecture Decision Records now?

Architecture Decision Records aren't a new thing. Nygard wrote his post all the way back in 2011, after all. But the fact remains that the context from which Nygard was writing in 2011 was very specific. Today it is mainstream. As we've moved away from monolithic architecture towards microservices or serverless, decision making has become more and more important in software engineering. This is a point that is well explained in a blog post here:

"The rise of lean development and microservices... complicates the ability to communicate architecture decisions. While these concepts are not inherently opposed to documentation, their processes often fail to effectively capture decision-making processes and reasoning. Another possible inefficiency when recording decisions is bad or out-of-date documentation. It's often a herculean effort to keep large, complex architecture documents current, making maintenance one of the most common barriers to entry."

ADRs are, then, a way of managing the complexity in modern software engineering. They are a response to a fundamental need to better communicate decisions. Most importantly, they codify decision-making within the development process. It is when they are lightweight and sit within the project itself that they are most effective.

Architecture Decision Record template

Architecture Decision Records must follow a template. Not only does that mean everyone is working off the same page, it also means people are actually more likely to document their decisions.
Think about it: if you're asked to note how you decide to do something without any guidelines, you're probably not going to do it at all. Below, you'll find an Architecture Decision Record example template (a filled-in sketch follows at the end of this piece). There are a number of different templates you can use, but it's probably best to sit down with your team and agree on what needs to be captured.

An Architecture Decision Record example:

- Date
- Decision makers [who was involved in the decision taken]
- Category [which part of the architecture does this decision pertain to]
- Contextual outline [explain why this decision was made; outline the key considerations and assumptions at play]
- Impact consequences [what does this decision mean for the project? What should someone reading this be aware of in terms of future decisions?]

As I've already noted, there are a huge number of ways you may want to approach this. Use this as a starting point.

Read next:
Enterprise Architecture Concepts
Reactive Programming and the Flux Architecture
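To make the template concrete, here's a filled-in sketch of a record based on it - the project, people, and decision below are invented purely for illustration:

ADR 7: Use PostgreSQL for the orders service

Date: 2018-05-16
Decision makers: Priya (tech lead), Sam (backend engineer)
Category: Data storage
Contextual outline: The orders service needs transactional guarantees across order and payment records, plus flexible JSON-style fields for vendor metadata. We assumed a single-region deployment for the first year.
Impact consequences: All order data lives in PostgreSQL from v1.0 onwards. Anyone adding reporting features should expect to query this database. Revisit this decision if multi-region writes become a requirement.

A record like this would sit as a plain text file (for example, docs/adr/0007-use-postgresql.md) inside the repository, so the decision travels with the code it describes.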

Elon Musk's tiny submarine is a lesson in how not to solve problems in tech

Richard Gall
11 Jul 2018
6 min read
Over the last couple of weeks the world has been watching on as rescuers attempted to find, and then save, a young football team from the Tham Luang caves in Thailand. Owing to a remarkable coordinated effort, and a lot of bravery from the rescue team (including one diver who died), all 12 boys were brought back to safety. Tech played a big part in the rescue mission too - from drones to subterranean radios. But it wanted to play a bigger role - or at least Elon Musk wanted it to.

Musk and his submarine have been a somewhat bizarre subplot to this story, and while you can't fault someone for offering to help out in a crisis, you might even say it was unnecessary. Put simply, Elon Musk's involvement in this story is a fable about the worst aspects of tech-solutionism. It offers an important lesson for anyone working in tech on how not to solve problems.

Bringing a tiny submarine to a complex rescue mission that requires coordination between a number of different agencies, often operating from different countries, is a bit like telling someone to use Angular to build their first eCommerce store. It's like building an operating system from scratch because your computer has crashed. Basically, you just don't need it. There are better and more appropriate solutions - like Shopify or WooCommerce, or maybe just rebooting your system.

Lesson 1: Don't insert yourself in problems if you're not needed

Elon Musk first offered his support to the rescue mission in Thailand on July 4. It was a response to one of his followers.

https://twitter.com/elonmusk/status/1014509856777293825

Musk's first instincts were measured, saying that he suspected 'the Thai government has got this under control', but it didn't take long for his mind to change. Without any specific invitation or coordination with the parties leading the rescue mission, Musk's instincts to innovate and create kicked in.

This sort of situation is probably familiar to anyone who works in tech - or, for that matter, anyone who has ever had a job. Perhaps you're the sort of person who hears about a problem and your immediate instinct is to fix it. Or perhaps you've been working on a project, someone hears about it, and immediately they're trying to solve all the problems you've been working on for weeks or months. Yes, sometimes it's appealing, but on the other hand it can be incredibly annoying and disruptive. This is particularly true in software engineering, where you're trying to solve problems at every level - from strategy to code. There's rarely a single solution. There's always going to be a difference of opinion. At some point we need to respect boundaries and allow the right people to get on with the job.

Lesson 2: Listen to the people involved and think carefully about the problem you're trying to solve

One of the biggest challenges in problem solving is properly understanding the problem. It's easy to think you've got a solution after a short conversation about a problem, but there may be nuances you've missed or complexities that aren't immediately clear. Humility can be a very valuable quality when problem solving. It allows everyone involved to think clearly about the task at hand; it opens up space for better solutions. As the old adage goes, when every problem looks like a nail, every solution looks like a hammer. For Musk, when a problem looks like kids stuck in an underwater cave, the solution looks like a kid-sized submarine. Never mind that experts in Thailand explained that the submarine would not be 'practical.'
For Musk, a solution is a solution. "Although his technology is good and sophisticated, it's not practical for this mission," said Narongsak Osatanakorn, one of the leaders of the rescue mission, speaking to the BBC and The Guardian.

https://twitter.com/elonmusk/status/1016110809662066688

Okay, so perhaps that's a bit of a facetious example - but it is a problem we can run into, especially if we work in software. Sometimes you don't need to build a shiny new SPA - your multi-page site might be just fine for its purpose. And maybe you don't need to deploy on containers - good old virtual machines might do the job for you. In these sorts of instances it's critical to think about the problem at hand. To do that well you also need to think about the wider context around it - what infrastructure is already there? If we change something, is that going to have a big impact on how it's maintained in the future? In many ways, the lesson here recalls the argument put forward by the Boring Software Manifesto in June. In it, the writer argued in favor of things that are 'simple and proven' over software that is 'hyped and volatile'.

Lesson 3: Don't take it personally if people decline your solutions

Problem solving is a collaborative effort, as we've seen. Offering up solutions is great - but it's not so great when you react badly to rejection.

https://twitter.com/elonmusk/status/1016731812159254529

Hopefully, this doesn't happen too much in the workplace - but when your job is to provide solutions, it doesn't help anyone to bring your ego into it. In fact, it indicates selfish motives behind your creative thinking. This link between talent, status, and ego has been developing for some time now in the tech world. Arguably Elon Musk is part of a trend of engineers - ninjas, gurus, wizards, whatever label you want to place on yourself - for whom problem-solving is as much an exercise in personal branding as it is actually about solving problems. This trend is damaging for everyone - it not only undermines people's ability to be creative, it transforms everyone's lives into a rat race for status and authority. That's not only sad, but also going to make it hard to solve real problems.

Lesson 4: Sometimes collaboration can be more inspiring than Elon Musk

Finally, let's think about the key takeaway here: everyone in that cave was saved. And this wasn't down to some miraculous invention. It was down to a combination of tools - some of them pretty old. It wasn't down to one genius piece of engineering, but instead a combination of creative thinking and coordinated problem solving that used the resources available to bring a shocking story to a positive conclusion. Working in tech isn't always going to be a matter of life and death - but a collaborative and open world is the one we want to work in, right?

Python Web Development Frameworks: Django or Flask?

Owen Roberts
22 Dec 2015
5 min read
I love Python; I've been using it for close to three years now, after a friend gave me a Raspberry Pi they had grown bored with. In the last year I've also started to seriously get into web development for my own personal projects, but juggling all these different languages can sometimes get a bit too much for me; so this New Year I've promised myself I'm going to get into the world of Python web development.

Python web dev has exploded in the last year. Django has been around for a decade now, but with long-term support and the wealth of improvements that we've seen to the framework in just the last year, it's really reaching new heights of popularity. And it's not just Django - Flask's rise to fame has meant that writing a web page doesn't have to involve reams and reams of code either! Both these frameworks are about cutting down on time spent coding without sacrificing quality, but which one do you go for? In this blog I'm going to show you the best bundles you need to get started with taking Python to the world of the web, with titles I've been recommended - and at only $5 per eBook, hopefully this little hamper list inspires you to give something new a try for 2016!

So, first of all, which do you start with, Django or Flask? Let's have a look at each and see what they can do for you.

Route #1: Django

So the first route into the world of Python web dev is Django, also touted as "the web framework for perfectionists with deadlines". Django is all about clean, pragmatic design and getting to your finished app in as little time as possible. Having been around the longest, it's also got a great amount of support, meaning it's perfect for larger, more professional projects.

The best way to get started is with our Django By Example or Learning Django Web Development titles. Both have everything you need to take the first steps in the world of web development in Python, taking what you already know and applying it in new ways. The By Example title is great as it works through 4 different applications to see how Django works in different situations, while the Learning title is a great supplement for learning the key features that need to be used in every application.

Now that the groundwork has been laid, we need to build upon that. With Django we've got to catch up with 10 years of experience and community secrets fast! Django Design Patterns and Best Practices is filled with some of the community's best hacks and cheats to get the most out of developing Django, so if you're a developer who likes to save time and avoid mistakes (and who doesn't?!) then this book is the perfect desk companion for any Django lover.

Finally, to top everything off and prepare us for the next steps in the world of Django, why not try a new paradigm with Test-Driven Development with Django? I'm honestly one of those developers that hates having to test right at the end, so being able to peel a complex critical task down into layers throughout just makes more sense to me.

Route #2: Flask

Flask has exploded in popularity in the last year and it's not hard to see why - with its focus on as little code as possible, Flask is perfect for developers who are looking to get a quick web page up, as well as those who just hate having to write mountains of code when a single line can do. As an added bonus, the creators of the framework looked at Django and took on board feedback from that community as well, so you get the combined force of two different frameworks at your fingertips.
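To give you a taste of just how minimal Flask can be, here's a complete, runnable application - this is the canonical hello-world sketch rather than anything taken from the titles below:

# A complete Flask app: one import, one app object, one route
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    # Whatever this function returns becomes the HTTP response body
    return 'Hello, World!'

if __name__ == '__main__':
    app.run()  # starts the dev server on http://127.0.0.1:5000 by default

Save that as app.py, run python app.py, and you have a working web server. Django, by contrast, starts you off with a whole project scaffold of settings, URL configuration, and apps - exactly the structure you want for larger projects, but overkill for a single page.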
Flask is easy to pick up, but difficult to master, so having a good selection of titles to help you along is the best way to get involved in this new world of Python web dev. Learning Flask Framework is the logical first step for getting into Flask. Released last month, it comes heartily recommended as the all-in-one first stop for getting the most out of Flask.

Want to try a different way to learn, though? Well, the Learning Flask video is a great supplement to the Learning title; it shows us everything we need to start building our first Flask sites in just under 2 hours - almost as quickly as it takes the average Flask developer to build their own sites.

The Flask Framework Cookbook is the next logical step as a desktop companion for someone just starting their own projects. Having over 80 different recipes to get the most out of the framework is essential for those dipping their feet into this new world without worrying about losing everything.

Finally, Flask Blueprints is something a little different, and is especially good for getting the most out of Flask. Now, if you're serious about learning Flask you're likely to pick up everything you need quickly, but the great thing about the framework is how you apply it. The different projects inside this title make sure you can make the most of Flask's best features for every project you might come across!

Want to explore more Python? Take a look at our dedicated Python page. You'll find our latest titles, as well as even more free content.

Essential Tools for Go Programming

Nicholas Maccharoli
14 Jan 2016
5 min read
Golang as a programming language is a pleasure to work with, but the reason for this also comes in large part from the great community around the language and its modern tool set, both from the standard distribution and from third-party tools.

The go command

On a system with go installed, type go with no arguments to see its quick help menu. Here, you will see the basic go commands, such as build, run, get, install, fmt, and so on. Go ahead and take a minute to run go help on some verbs that look interesting; I promise I'll be here when you get back.

Basic side options

The go build and go run commands do what you think they do, as is also the case with go test, which runs any test files in the directory it is passed. The go clean command wipes out all the compiled and executable files from the directory in which it is run. Run this command when you want to force a build to be made entirely from source again. The go version command prints out the version and build info, as you might expect. The go env command is very useful when you want to see exactly how your environment is set up. Running it will show where all your environment variables point and will also make you aware of which ones are still not properly set.

go doc: Which arguments did this take again?

Whenever in doubt, just give go doc a call. Running go doc [package name] will give you a high-level readout of the types, interfaces, and behavior defined in that package; that is, go doc net/http will give you all the function stubs and types defined. If you just need to check the order or types of arguments that a function takes, run go doc on the package and use a tool like grep to grab the relevant line:

go doc net/http | grep -i servecontent

This will produce just what we need:

func ServeContent(w ResponseWriter, req *Request, name string, modtime time.Time, content io.ReadSeeker)

If you need more detail on the function or type, just run the go doc command with the package and function name, and you will get a quick description of this function or type.

gofmt

This little tool is quite a time-saver. I mainly use it to ensure that my source files are stylistically correct, and I also use the -s flag to let gofmt simplify my code. Just run gofmt -w on a file or an entire directory to fix up the files in place. After running this command, you should see the proper use of white space, with indentation corrected to tabs (shown as eight spaces here). Here is a diff of a file with poor formatting that I ran through gofmt.

Original:

package main

import "fmt"

func main() {
hello_to := []string{"Dust", "Trees", "Plants", "Carnivorous plants"}
for _, value := range hello_to {
fmt.Printf("Hello %v!\n",value)
}
}

After running gofmt -w Hello.go:

package main

import "fmt"

func main() {
        hello_to := []string{"Dust", "Trees", "Plants", "Carnivorous plants"}
        for _, value := range hello_to {
                fmt.Printf("Hello %v!\n", value)
        }
}

As you can see, the indentation looks much better and reads way easier!

The magic of gofmt -s

The -s flag to gofmt helps clean up unnecessary code; so, the intentionally ignored value in the following code:

hello_to := []int{1, 2, 3, 4, 5, 6}
for count, _ := range hello_to {
        fmt.Printf("%v: Hello!\n", count)
}

would get converted to the following after running -s (the redundant ", _" is dropped):

hello_to := []int{1, 2, 3, 4, 5, 6}
for count := range hello_to {
        fmt.Printf("%v: Hello!\n", count)
}

The awesomeness of go get

One of the really cool features of the go command is that go get works seamlessly with code hosted on GitHub as well as repositories hosted elsewhere.
A note of warning

Make sure that $GOPATH is properly set (this is usually exported as a variable in your shell). You may have a line such as "export GOPATH=$HOME" in your shell's profile file.

Nabbing a library off of GitHub

Say we see this really neat library we want to use, called fasthttp. Using only the go tool, we can fetch the library and get it ready for use, all with just:

go get github.com/valyala/fasthttp

Now, all we have to do is import it with the exact same path, and we can start using the library right away! Just type this and it should do the trick:

import "github.com/valyala/fasthttp"

In the event that you want to have a look around in the library you just downloaded with go get, just cd into $GOPATH/src/[path that was provided to the get command] - in this case, $GOPATH/src/github.com/valyala/fasthttp - and feel free to inspect the source files.

I am also happy to inform you that you can use go doc with the libraries you download in exactly the same way as you use go doc when interacting with the standard library! Try it: type go doc fasthttp (you might want to tack on less, since the output is a little bit long: go doc fasthttp | less).

Those are only the stock features and options! The go tool is great and gets the job done, but there are also great alternatives to some of the go tool's features, such as the godep package manager. If you have some time, I think it's worth the investment to learn!

About the author

Nick Maccharoli is an iOS/backend developer and an open source enthusiast working at a start-up in Tokyo and enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma.

Why does more than half the IT industry suffer from Burnout?

Aaron Lazar
02 Jul 2018
7 min read
I remember when I was in college a few years ago, this was a question everyone was asking. People who were studying Computer Science were always scared of this happening. Although it's ironic, because knowing the above, they were still brave enough to get into Computer Science in the first place! Okay, on a serious note, this is a highly debated topic and the IT industry is labeled notorious for employee burnout.

The harsh reality

Honestly speaking, I have developer friends who earn pretty good salary packages, even those working at a junior level. However, just two in five of them are actually satisfied with their jobs. They seem to be heading towards burnout quite quickly - too quickly, in fact. I would understand if you told me that a middle-aged person, having certain health conditions et al, working in a tech company, was nearing burnout. Here I see people in their early 20s struggling to keep up, wishing for the weekend to come!

Facts and figures

Last month, a workplace app called Blind surveyed over 11K (11,487 to be precise) employees in the tech industry, and the responses weren't surprising! At least for me. The question posed to them was pretty simple: Are you currently suffering from job burnout?

[Survey chart - source: TeamBlind]

Oh yeah, that's a whopping 6,566 employees! Here are some more shocking stats:

- When narrowed down to 30 companies, 25 of them had an employee burnout rate of 50% or higher. Only 5 companies had an employee burnout rate below 50%.
- Moreover, 16 out of the 30 companies had an employee burnout rate that was higher than the survey average of 57.16%.
- While Netflix had the fewest employees facing burnout, companies like Credit Karma, Twitch and Nvidia recorded the highest.

I thought I'd analyse a bit and understand what some of the most common reasons causing burnout in the tech industry could be. So here they are:

#1 Unreasonable workload

Now I know this is true for a fact! I've been working closely with developers and architects for close to 5 years now and I'm aware of how unreasonable projects can get - especially their timelines. Customer expectation is something really hard to meet in the IT sector, mainly because the customer usually doesn't know much about tech. Still, deadlines are set extremely tight, like a noose around developers' necks, not giving them any space to maneuver whatsoever. Naturally, this will come down hard on them and they will surely experience burnout at some time, if not already.

#2 Unreasonable managers

In our recent Skill-Up survey, more than 60% of the respondents felt they knew more about tech than their managers did. More than 40% claimed that the biggest organisational barrier to their organisation's (and their own) goals was their manager's lack of tech knowledge. As with almost everyone, developers expect managers to be like mentors, able to guide them into taking the right decisions and making the right choices. Rather, with the lack of knowledge, managers are unable to relate to their team members, ultimately coming across as unreasonable to them. On the other side of town, IT management has been rated as one of the top 20 most stressful jobs in the world by careeraddict!

#3 Rapidly changing tech

The tech landscape is one that changes ever so fast, and developers tend to get caught up in the hustle to stay relevant. I honestly feel the quote "Time and tide wait for none" needs to be amended to "Time, tide and tech wait for none"!
The competition is so high that if they don't keep up, they're probably history in a couple of years or so. I remember in the beginning of 2016, there was huge hype around Data Science and AI - there was a predicted shortage of a million data scientists by 2018. Thousands of engineers all around the world started diving into their pockets to fund their Data Science master's degrees. All this can put a serious strain on their health, and they ultimately meet burnout.

#4 Disproportionate compensation

Tonnes of software developers feel they're underpaid, obviously leading them to lose interest in their work. Ever wonder why developers jump companies so many times in their careers? Now this stagnation is happening while, on the other hand, work responsibilities are rising. There's a huge imbalance that's throwing employees off track. Chris Bolte, CEO of Paysa, says that companies recruit employees at competitive rates. But once they're on board, the companies don't tend to pay much more than the standard yearly increase. This is obviously a bummer and a huge demotivation for the employees.

#5 Organisation culture

The culture prevailing in tech organisations has a lot to do with how fast employees reach burnout. No employee wants to feel like a tool, or perhaps a cog in a wheel. They want to feel a sense of empowerment, that they're making an impact and they have a say in the decisions that drive results. Without a culture of continuous learning and opportunities for professional and personal growth, employees are likely to be driven to burnout pretty quickly, either causing them to leave the organisation or, worse still, lose confidence in themselves.

#6 Work-life imbalance

This is a very tricky thing, especially if you're working long hours and you're mostly unhappy at work. Moreover, developers usually tend to take their work home so that they can complete projects on time, and that messes up everything. When there's no proper work-life balance, you're most probably going to run into a health problem, which will lead you to burnout, eventually.

#7 Peer pressure

This happens a lot, not just in the IT industry, but it's more common here owing to the immense competition and the fast pace of the industry itself. Developers will obviously want to put in more effort than they can sustain, simply because their team members are doing it already. This can go two ways: either their efforts still go unnoticed, or, although they're noticed, they've lost out on their health and other important aspects of life. By the time they think of actually doing something innovative and productive, they've crashed and burned.

If you ask me, burnout is part and parcel of every industry, and it majorly depends on mindset - the mindset of employees as well as the employer. Developers should try avoiding long work hours as far as possible, while trying to take their minds off work by picking up a nice hobby and exploring more ways to enrich their lives. On the other side of the equation, employers and managers should do better at understanding their team's limitations and problems, while also maintaining an unbiased approach towards the whole team. They should realize that a motivated and balanced team is great for their balance sheet in the long run. They must be serious enough to include employee morale and nurturing a great working environment among management's key performance indicators.
If the IT industry is to rise like a phoenix from the ashes, it will take more than a handful of people or organizations changing their ways. Change begins from within every individual, and at the top for every organization.

Read next:
Should software be more boring? The "Boring Software" manifesto thinks so
These 2 software skills subscription services will save you time - and cash
Don't call us ninjas or rockstars, say developers

Eight Things You Need To Learn with Python

Oli Huggins
02 Jun 2016
4 min read
We say it a lot, but Python really is a versatile language that can be applied to many different purposes. Web developers, data analysts, security pros - there's an impressive range of challenges that can be solved by Python. So, what exactly should you be learning to do with this great language to really get the most out of it?

Writing Python

What's the most important thing to learn with Python? How to write it. As Python becomes the popular language of choice for most developers, there is an increasing need to learn and adopt it in different environments for different purposes. The Beginning Python video course focuses on just that. Aimed at a complete novice with no previous programming experience in Python, this course will guide the readers every step of the way. Starting with the absolute basics like understanding variables, arrays, and strings, the course goes on to teach the intricacies of Python. It teaches how you can build your own functions making use of the existing functions in Python. By the end, the course ensures that you have a strong foundation in the programming concepts of Python.

Design Patterns

As Python matures from being used just as a scripting language into enterprise development and data science, the need for clean, reusable code becomes ever more vital. The modern Python developer cannot go astray with tried and true design patterns for Python when they want to write efficient, reliable Python code. The second edition of Learning Python Design Patterns is stuffed with rich examples of design pattern implementation. From OOP to more complex concepts, you'll find everything you need to improve your Python within.

Machine Learning Design

We all know how powerful Python is for machine learning - so why are your results proving sub-par and inaccurate? The issue is probably not your implementation, but rather your system design. Just knowing the relevant algorithms and tools is not enough for a really effective system - you need the right design. Designing Machine Learning Systems with Python covers various aspects of designing machine learning systems with the help of real-world data sets and examples, and will enable you to evaluate and decide on the right design for your needs.

Python for the Next Generation

Python was built to be simple, and it's the perfect language to get kids coding. With programmers getting younger and younger these days, get them learning with a language that will serve them well for life. In Python for Kids, kids will create two interesting game projects that they can play and show off to their friends and teachers, as well as learn Python syntax and how to do basic logic building.

Distributed Computing

What do you do when your Python application takes forever to give the output? Very heavy computing results in delayed responses or, sometimes, even failure. For special systems that deal with a lot of data and are mission critical, the response time becomes an important factor. In order to write highly available, reliable, and fault-tolerant programs, one needs the aid of distributed computing. Distributed Computing with Python will teach you how to manage your data-intensive and resource-hungry Python applications with the aid of parallel programming, synchronous and asynchronous programming, and many more effective techniques.

Deep Learning

Python is at the forefront of the deep learning revolution - the next stage of machine learning, and maybe even a step towards AI.
As machine learning becomes a mainstream practice, deep learning has taken a front seat among data scientists. The Deep Learning with Python video course is a great stepping stone into the world of deep learning with Python - learn the basics, clear up your concepts, and start implementing efficient deep learning to make better sense of data. Get all that it takes to understand and implement Python deep learning libraries from this insightful tutorial.

Predictive Analytics

With the power of Python and predictive analytics, you can turn your data into amazing predictions of the future. It's not sorcery, just good data science. Written by Ashish Kumar, a data scientist at Tiger Analytics, Learning Predictive Analytics with Python is a comprehensive, intermediate-level book on predictive analytics and Python for aspiring data scientists.

Internet of Things

Python's rich data analytics libraries, combined with its popularity for scripting boards such as the Raspberry Pi and Arduino, make it an exceptional choice for building IoT. Internet of Things with Python offers an exciting view of IoT from many angles, whether you're a newbie or a pro. Leverage your existing Python knowledge to build awesome IoT projects and enhance your IoT skills with this book.

6 new eBooks for programmers to watch out for in March

Richard Gall
20 Feb 2019
6 min read
The biggest challenge for anyone working in tech is that you need multiple sets of eyes. Yes, you need to commit to regular, almost continuous learning, but you also need to look forward to what's coming next. From slowly emerging trends that might not even come to fruition (we're looking at you, DataOps) to version updates and product releases, for tech professionals the horizon always looms and shapes the present.

But it's not just about the big trends or releases that get coverage - it's also about planning your next (career) move, or even your next mini-project. That could be learning a new language (not necessarily new, but one you haven't yet got round to learning), trying a new paradigm, exploring a new library, or getting to grips with cloud native approaches to software development. This sort of learning is easy to overlook but it is one that's vital to any developer's development.

While the Packt library has a wealth of content for you to dig your proverbial claws into, if you're looking forward, Packt has got some new titles available in pre-order that could help you plan your learning for the months to come. We've put together a list of some of our own top picks of our pre-order titles available this month, due to be released late February or March. Take a look and take some time to consider your next learning journey...

Hands-on Deep Learning with PyTorch

TensorFlow might have set the pace when it comes to artificial intelligence, but PyTorch is giving it a run for its money. It's impossible to describe one as 'better' than the other - ultimately they both have valid use cases, and can both help you do some pretty impressive things with data.

Read next: Can a production ready Pytorch 1.0 give TensorFlow a tough time?

The key difference is really in the level of abstraction and the learning curve - TensorFlow is more like a library, which gives you more control, but also makes things a little more difficult. PyTorch, then, is a great place to start if you already know some Python and want to try your hand at deep learning. Or, if you have already worked with TensorFlow and simply want to explore new options, PyTorch is the obvious next step.

Order Hands-on Deep Learning with PyTorch here.

Hands-on DevOps for Architects

Distributed systems have made the software architect role incredibly valuable. This person is not only responsible for deciding what should be developed and deployed, but also for the means through which it should be done and maintained. But distributed systems have also made the question of architecture relevant to just about everyone that builds and manages software. That's why Hands-on DevOps for Architects is such an important book for 2019. It isn't just for those who typically describe themselves as software architects - it's for anyone interested in infrastructure, in how things are put together, and in how they can be made more reliable, scalable and secure. With site reliability engineering finding increasing usage outside of Silicon Valley, this book could be an important piece in the next step in your career.

Order Hands-on DevOps for Architects here.

Hands-on Full Stack Development with Go

Go has been cursed with a hell of a lot of hype. This is a shame - it means it's easy to dismiss as a fad or fashion that will quickly disappear. In truth, Go's popularity is only going to grow as more people experience its speed and flexibility. Indeed, in today's full-stack, cloud native world, Go is only going to go from strength to strength.
In Hands-on Full Stack Development with Go you'll not only get to grips with the fundamentals of Go, you'll also learn how to build a complete full-stack application based on microservices, using tools such as Gin and ReactJS.

Order Hands-on Full Stack Development with Go here.

C++ Fundamentals

C++ is a language that often gets a bad rap. You don't have to search the internet that deeply to find someone telling you that there's no point learning C++ right now. And while it's true that C++ might not be as eye-catching as languages like, say, Go or Rust, it nevertheless still plays a very important role in the software engineering landscape. If you want to build performance-intensive apps for desktop, C++ is likely going to be your go-to language.

Read next: Will Rust replace C++?

One of the sticks that's often used to beat C++ is that it's a fairly complex language to learn. But rather than being a reason not to learn it, if anything the challenge it presents to even relatively experienced developers is one well worth taking on. At a time when many aspects of software development seem to be getting easier, as new layers of abstraction remove problems we previously might have had to contend with, C++ bucks that trend, forcing you to take a very different approach. And although this approach might not be one many developers want to face, if you want to strengthen your skillset, C++ could certainly be a valuable language to learn.

The stats don't lie - C++ is placed 4th on the TIOBE index (as of February 2019), beating JavaScript, and commands a considerably high salary - indeed.com data from 2018 suggests that C++ was the second highest earning programming language in the U.S., after Python, with a salary of $115K.

If you want to give C++ a serious go, then C++ Fundamentals could be a great place to begin.

Order C++ Fundamentals here.

Data Wrangling with Python & Data Visualization with Python

Finally, we're grouping two books together - Data Wrangling with Python and Data Visualization with Python. This is because they both help you to really dig deep into Python's power, and better understand how it has grown to become the definitive language of data. Of course, R might have something to say about this - but it's a fact that over the last 12-18 months Python has really grown in popularity in a way that R has been unable to match. So, if you're new to any aspect of the data science and analysis pipeline, or you've used R and you're now looking for a faster, more flexible alternative, both titles could offer you the insight and guidance you need.

Order Data Wrangling with Python here.

Order Data Visualization with Python here.

Tech hype cycles: do they deserve your attention?

Richard Gall
30 Apr 2018
6 min read
Hype cycles are an integral aspect of modern technology. They tell us the story of a specific technology and how it fits into a given context. This context is usually professional, but it is sometimes social and cultural. They are also able to show us how the use of something has changed. They illustrate when something was adopted, when it grew, and perhaps when it began to decline.

True, this might seem superfluous or superficial. But that explains why we often fail to pay that much attention to them. Instead of focusing on the cycle, and the wider context of how and why something is being used, we get distracted by the details of whatever is being hyped.

"Hype cycles allow us to see past hype."

But hype cycles, or hype curves, can help us to make better sense of the technology at our disposal. They allow you to see past the hype. That means rather than following the trends or buzzwords that fashion places on a pedestal at any given moment, you're always able to see those trends and buzzwords in context. For example, instead of simply moving from big data to AI, or from cloud to edge, you can see how different technologies and trends fit together. You can begin to observe how things are impacting one another. Hype cycles allow you to see how software changes trends, and then how trends change industries. It's not always easy to see how the code you're writing fits into the big picture - but hype cycles are a good way of allowing you to get a better sense of it.

The history of the tech hype cycle

According to this Wired article from 2012, the term 'hype cycle' has been around since 1995. But the idea of a hype cycle was taken up by research organization Gartner and became central to the way they presented changes across the tech landscape. The first Gartner hype cycle report was released in 1999. Written by Alexander Drobik, the report predicted the end of the dot com bubble at the beginning of the new millennium. However, it's important to note that Drobik hadn't simply predicted the end of a trend - what he had identified was a period of disillusionment within the 'hype cycle' of, well, the internet (perhaps the ultimate hype cycle). Let's look at what the cycle looks like in detail.

What does the tech hype cycle look like?

Of course, Gartner is the organization that popularized the concept of the hype cycle, but we've created our own example of what it looks like:

[Hype cycle diagram]

Let's break down each of these points in the hype cycle in a bit more detail.

Technology trigger

This is the initial breakthrough. It's an exciting time when researchers or engineers discover a new way of doing something. It's more the possibility of disruption than actual disruption. This is often the time when the press - and investors - get excited.

Peak of inflated expectations

This is when everyone gets really excited about the possibility of disruption. This period can be characterized by the sentence "This changes everything." It's the period when everyone talks about transformation but nothing has really transformed yet. True, the new technology might have worked somewhere, but there are lots of projects that don't even get off the ground, and a few that have simply failed.

Trough of disillusionment

This is the hangover everyone goes through after getting drunk on inflated expectations. It begins with 'Why X isn't working' pieces in the press, which gradually develop into silence. Technologies or trends seem to disappear into relative insignificance.
Slope of enlightenment

Now the hype has died down, technologies are applied with more serious consideration. Arguably the period of disillusionment is an important period of reflection about what works and what doesn't. This allows businesses and organizations to apply technologies in a more effective way during this 'enlightened' period. In essence, this time is about experimentation and learning. True, there might be some humility here, which is probably a good thing after the earlier inflated expectations.

Plateau of productivity

This is where enlightenment turns into stability. Ways of using a particular technology become established within an industry. It becomes mainstream. Perhaps the benefits to customers are now being felt more readily, which makes it easier to calculate just how valuable something might be.

The hype cycle is a framework that explains how technologies become popular and gradually more mainstream. Of course, there are some technologies that don't quite follow this trajectory - what happens, for example, when things simply never take off? Some technologies get stuck in the trough of disillusionment. If hype cycles can never really give us the full picture, are they actually nothing more than a load of hype?

Are hype cycles just a load of hype?

Although hype cycles are useful in outlining how technologies are adopted and mature, they do, of course, have some limitations. Gartner has some stake in actually selling the concept to you. Its business is based on being an authoritative and invaluable source of tech insight. This means Gartner needs you (or maybe your boss) to think that hype cycles are a recurring pattern of all technology. Similarly, the people who write about technology and sell it have a vested interest in hype cycles. They might not realize it, but the need to 'tell a story' about how or why something is important - why something is 'transformative' - feeds into the concept that Gartner has successfully monetized.

But that doesn't mean tech hype cycles should simply be ignored. They might well be artificial and lacking in any quantitative rigour, but we ignore the hype cycle at our peril. This is because the way we - the press, industry leaders, and tech communities - talk about technology plays an important part in how technologies and trends are adopted. We need to take a somewhat ironic approach to hype cycles. That means we need to recognise that while part of it is a bit of a charade, it's a charade that is pretty much inescapable. Trends and technology can't exist outside of these systems. Things only ever become popular when they're visible and when they're being talked about. Hype cycles give us a framework for understanding how technology is talked about.

Read next: What is AIOps and why is it going to be important?