
Tech Guides - Programming

81 Articles

ERP tool in focus: Odoo 11

Sugandha Lahoti
22 May 2018
3 min read
What is Odoo? Odoo is an all-in-one management software suite that offers a range of business applications, forming a complete set of enterprise management applications for companies of all sizes. It is versatile in the sense that it can be used across multiple categories, including CRM, website building, e-commerce, billing, accounting, manufacturing, warehouse management, project management, and inventory. The community version is free of charge and can be installed with ease. Odoo is one of the fastest-growing open source business application development products available. With the announcement of version 11, many new features have been added and the face of business application development with Odoo has changed. In Odoo 11, the online installation documentation continues to improve, and there are now options for Docker installations. In addition, Odoo 11 uses Python 3 instead of Python 2.7. This will not change the steps you take in installing Odoo, but it will change the specific libraries that are installed. While much of the process is the same as in previous versions of Odoo, there have been some pricing changes in Odoo 11. There are only two free users now, and you pay for additional users. There is one free application that you can install for an unlimited number of users, but as soon as you have more than one application, you must pay $25 for each user, including the first user. If you have thought about developing in Odoo, now is the best time to start. Before I convince you of why Odoo is great, let's take a step back and revisit our fundamentals. What is an ERP? ERP is an acronym for Enterprise Resource Planning. An ERP gives a global, real-time view of data that enables companies to address concerns and drive improvements. It automates core business operations such as the order-to-fulfillment and procure-to-pay processes.
It also improves risk management for companies and enhances customer service by providing a single source for billing and relationship tracking. Why Odoo? Odoo is extensible and easy to customize. Odoo's framework was built with extensibility in mind: extensions and modifications can be implemented as modules, applied on top of the module whose feature is being changed, without actually changing it. This yields clean, easy-to-control, customizable applications. You get integrated information. Instead of distributing data throughout several separate databases, Odoo maintains a single location for all data, and the data remains consistent and up to date. Single reporting system. Odoo has a unified, single reporting system for analyzing and tracking status, and users can run their own reports without any help from IT. Single reporting systems, such as the one provided by the Odoo ERP software, help make reporting easier and more customizable. Built around Python. Odoo is built using the Python programming language, one of the most popular languages among developers. Large community. The capability to combine several modules into feature-rich applications, along with the open source nature of Odoo, are probably the most important factors explaining the community that has grown around Odoo. In fact, there are thousands of community modules available for Odoo, covering virtually every topic, and the number of people getting involved has been growing steadily every year. Go through our video, Odoo 11 Development Essentials, to learn to scaffold a new module, create new models, and use the proper functions that make Odoo 11 the best ERP out there.
Top 5 free Business Intelligence tools
How to build a live interactive visual dashboard in Power BI with Azure Stream
Tableau 2018.1 brings new features to help organizations easily scale analytics
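The module mechanism described above can be sketched in a few lines. The following is a hypothetical Odoo 11 addon model plus a second module extending it (the model and field names are invented for illustration); it is a declaration that only makes sense inside an installed Odoo environment, so treat it as a sketch rather than a drop-in file.

```python
# models.py of a hypothetical addon -- requires an Odoo 11 installation.
from odoo import fields, models

class LibraryBook(models.Model):
    """A new model contributed by a base module."""
    _name = 'library.book'          # identifier in Odoo's model registry
    name = fields.Char('Title', required=True)
    date_published = fields.Date('Published')

class LibraryBookISBN(models.Model):
    """A second module extends the same model without touching its code:
    _inherit targets an existing model and layers new fields onto it."""
    _inherit = 'library.book'
    isbn = fields.Char('ISBN')
```

Because the extension lives in its own module, it can be installed, upgraded, or removed independently of the base module, which is exactly the "applied on top, without actually changing it" property described above.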


What is Quantum Entanglement?

Amarabha Banerjee
05 Aug 2018
3 min read
Einstein described it as "spooky action at a distance". Quantum entanglement is a phenomenon, observed in photons among other particles, in which particles share information about their state even when separated by a huge distance, and this state sharing happens almost instantaneously. Quantum particles can be in any possible state until their state is measured by an observer; the definite states a measurement can find are called eigenstates. In the case of quantum entanglement, two particles separated by many miles, when observed, collapse into correlated states. Quantum entanglement is hugely important for modern computation tasks. One reason is that the correlation between entangled photons appears to be established far faster than light could travel between them; some experiments put a lower bound of around 10,000 times the speed of light on any hypothetical signal. If harnessed in physical systems such as quantum computers, this could be a huge boost. One important concept for understanding this idea is the qubit. What is a qubit? It is the unit of information in quantum computing, like the bit in classical computers. A bit can be in one of two states, '0' or '1'. Qubits are also like bits, but they are governed by the stranger rules of quantum mechanics. Qubits don't just take the pure states |0> and |1>; they can also exist in superpositions of the two, such as (|0> + |1>)/√2. This style of writing particle states is called Dirac notation. Because of these superpositions of states, quantum particles can become entangled and share state-related information. A recent experiment by a Chinese group has claimed to pack 18 qubits of information into just 6 entangled photons. This is revolutionary: if one particle can carry three times the information it carries today, our computers could become correspondingly faster and smoother to work with.
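The state-vector arithmetic behind these claims can be sketched in plain Python. The example below builds the Bell state (|00> + |11>)/√2, a maximally entangled two-qubit state, and applies the Born rule (the probability of each measurement outcome is the squared magnitude of its amplitude) to show that the two qubits always agree when measured. This is an illustrative sketch, not code from the article.

```python
import math

# A 2-qubit state vector has four amplitudes, ordered |00>, |01>, |10>, |11>.
# The Bell state (|00> + |11>)/sqrt(2) is maximally entangled: measuring one
# qubit instantly fixes the outcome of the other.
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

def measurement_probabilities(state):
    """Born rule: the probability of each basis outcome is |amplitude|^2."""
    return [abs(a) ** 2 for a in state]

probs = measurement_probabilities(bell)
# The mixed outcomes |01> and |10> never occur: the two qubits always agree.
print(probs)  # ~[0.5, 0.0, 0.0, 0.5]
```

Note that the correlation shows up only in the statistics of joint outcomes; this is why entanglement, despite the "faster than light" framing above, cannot by itself transmit a usable message.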
The reasons that make this a great start toward faster, practical quantum computers are as follows: it is very difficult to entangle so many particles; there are instances of more than 18 qubits being packed into a larger number of photons, but the degree of entanglement has been much simpler; and entangling each new particle takes increasingly more computer simulation time, since introducing each new qubit creates a separate simulation that takes up more processing time. The likely reason this experiment worked can be credited to the multiple degrees of freedom that photons can have. The experiment was performed using photons in a networking system. The fact that such a system allows the photon multiple degrees of freedom means this result is specific to this particular quantum system, and it would be difficult to replicate in other systems such as a superconducting network. Still, this result means a great deal for the progress of quantum computing systems and how they can evolve into a practical solution rather than remaining theory forever.
Quantum Computing is poised to take a quantum leap with industries and governments on.
PyCon US 2018 Highlights: Quantum computing, blockchains and serverless rule!
Q# 101: Getting to know the basics of Microsoft's new quantum computing language


What makes Salesforce Lightning Platform a powerful, fast and intuitive user interface

Fatema Patrawala
05 Nov 2019
6 min read
Salesforce has always been proactive in developing and bringing to market new features and functionality across its products. Throughout the lifetime of the Salesforce CRM product, there have been several upgrades to the user interface. In 2015, Salesforce began promoting its new platform, Salesforce Lightning. Although long-time users and Salesforce developers may have grown accustomed to the classic user interface, Salesforce Lightning may just convert them: it brings a modern UI with new features, increased productivity, faster deployments, and a seamless transition across desktop and mobile environments. Recently, Salesforce has been actively encouraging its developers, admins, and users to migrate from the classic Salesforce user interface to the new Lightning Experience. Andrew Fawcett, currently VP of Product Management and a Salesforce Certified Platform Developer II at Salesforce, writes in his book, Salesforce Lightning Platform Enterprise Architecture, "One of the great things about developing applications on the Salesforce Lightning Platform is the support you get from the platform beyond the core engineering phase of the production process." The book is a comprehensive guide filled with best practices and tailor-made examples developed on Salesforce Lightning, and a must-read for all Lightning Platform architects! Why should you consider migrating to Salesforce Lightning? Earlier this year, Forrester Consulting published a study quantifying the total economic impact and benefits of Salesforce Lightning for Service Cloud. In the study, Forrester found that a composite service organization deploying Lightning Experience obtained a return on investment (ROI) of 475% over 3 years.
Among the other potential benefits, Forrester found that over 3 years, organizations using the Lightning platform: saved more than $2.5 million by reducing support handling time; saved $1.1 million by avoiding documentation time; and increased customer satisfaction by 8.5%. Apart from this, the Salesforce Lightning platform allows organizations to leverage the latest cloud-based features, including responsive and visually attractive user interfaces that are not available within the Classic themes. Salesforce Lightning provides stupendous business process improvements and new technological advances over Classic for Salesforce developers. What does the Salesforce Lightning architecture look like? While using the Salesforce Lightning platform, developers and users interact with a user interface backed by a robust application layer. This layer runs on the Lightning Component Framework, which comprises services such as navigation, Lightning Data Service, and the Lightning Design System. As part of this application layer, Base Components and Standard Components are the building blocks that enable Salesforce developers to configure their user interfaces via the App Builder and Community Builder. Standard Components are typically built up from one or more Base Components, which are also known as Lightning Components. Developers can build Lightning Components using two programming models: the Lightning Web Components model and the Aura Components model. The Lightning platform is critical for a range of services and experiences in Salesforce: Navigation Service: The navigation service is supported for Lightning Experience and the Salesforce app. Built with extensive routing, deep linking, and login redirection, Salesforce's navigation service powers app navigation, state changes, and refreshes.
Lightning Data Service: Lightning Data Service is built on top of the User Interface API. It enables developers to load, create, edit, or delete a record in a component without requiring Apex code, and it improves performance and data consistency across components. Lightning Design System: With the Lightning Design System, developers can easily build user interfaces using its component blueprints, markup, CSS, icons, and fonts. Base Lightning Components: Base Lightning Components are the building blocks for all UI across the platform. Components range from a simple button to a highly functional data table and can be written as an Aura component or a Lightning web component. Standard Components: Lightning pages are made up of Standard Components, which in turn are composed of Base Lightning Components. Salesforce developers or admins can drag and drop Standard Components in tools like Lightning App Builder and Community Builder. Lightning App Builder: Lightning App Builder lets developers build and customize interfaces for Lightning Experience, the Salesforce app, Outlook Integration, and Gmail Integration. Community Builder: For Communities, developers can use the Community Builder to build and customize communities easily. Apart from the above, there are other services available within the Salesforce Lightning platform, such as the Lightning security measures and record detail pages on the platform and Salesforce app. How to plan transitioning from Classic to Lightning Experience As Salesforce admins and developers prepare for the transition to Lightning Experience, they will need to evaluate three things: how the change benefits the company, what work is needed to prepare for it, and how much it will cost. This is the stage at which to make the case for moving to Lightning Experience by calculating the company's return on investment and defining what a Lightning Experience implementation will look like.
First, they will need to analyze how prepared the organization is for the transition to Lightning Experience. Salesforce admins and developers can use the Lightning Experience Readiness Check, a tool that produces a personalized Readiness Report showing which users will benefit right away and how to adjust the implementation for Lightning Experience. Next, Salesforce developers and admins can make the case to their leadership team by showing how migrating to Lightning Experience can realize business goals and improve the company's bottom line. Finally, using the results of the activities carried out to assess the impact of the migration, they can gauge the level of change required and decide on a suitable approach. If the changes required are relatively small, consider migrating all users and all areas of functionality at the same time. However, if the Salesforce environment is more complex and the amount of change is far greater, consider implementing the migration in phases, or starting with an initial pilot. Overall, the Salesforce Lightning Platform is being increasingly adopted by admins, business analysts, consultants, architects, and especially Salesforce developers. If you want to deliver packaged applications using Salesforce Lightning that cater to enterprise business needs, read the book Salesforce Lightning Platform Enterprise Architecture, written by Andrew Fawcett. It will take you through the architecture of building an application on the Lightning platform, help you understand its features and best practices, and help you ensure that your app keeps up with increasing customer and business requirements.
What are the challenges of adopting AI-powered tools in Sales?
How Salesforce can help
Salesforce open sources 'Lightning Web Components framework'
"Facebook is the new Cigarettes," says Marc Benioff, Salesforce Co-CEO
Build a custom Admin Home page in Salesforce CRM Lightning Experience
How to create and prepare your first dataset in Salesforce Einstein


Jakarta EE: Past, Present, and Future

David Heffelfinger
16 Aug 2018
10 min read
You may have heard some talk about a new Java technology called Jakarta EE. In this article we will cover what Jakarta EE actually is, how we got here, and what to expect when it's actually released. History and Background In September 2017, Oracle announced it was donating Java EE to the Eclipse Foundation. Isn't Eclipse a Java IDE? Most Java developers are familiar with the hugely popular Eclipse IDE, so for many, the word "Eclipse" brings the IDE to mind. Not everybody knows that the Eclipse IDE is developed by the Eclipse Foundation, an open source foundation similar to the Apache Foundation and the Linux Foundation. In addition to the Eclipse IDE, the Eclipse Foundation develops several other Java tools and APIs, such as Eclipse Vert.x, Eclipse Yasson, and EclipseLink. Java EE was the successor to J2EE, a wildly popular set of specifications for implementing enterprise software. In spite of its popularity, many J2EE APIs were cumbersome to use and required lots of boilerplate code. Sun Microsystems, together with the Java community as part of the Java Community Process (JCP), replaced J2EE with Java EE in 2006. Java EE introduced a much nicer, lightweight programming model, making enterprise Java development much easier than what could be accomplished with J2EE. J2EE was so popular that, to this day, it is incorrectly used as a generic term for all server-side Java technologies. Many still refer to Java EE as J2EE and incorrectly assume Java EE is a bloated, convoluted technology. In short, J2EE was so popular that even Java EE can't shake its predecessor's reputation as a "heavyweight" technology. In 2010, Oracle purchased Sun Microsystems and became the steward of Java technology, including Java EE.
Java EE 7 was released in 2013, after the Sun Microsystems acquisition by Oracle, simplifying enterprise software development even further and adding APIs to meet the new demands of enterprise software systems. Work on Java EE 8, the latest version of the Java EE specification, began shortly after Java EE 7 was released. In the beginning everything seemed to be going well; however, in early 2016, the Java EE community started noticing a lack of progress on Java EE 8, particularly in the Java Specification Requests (JSRs) led by Oracle. The perceived lack of progress became a big concern for many in the Java EE community, and since the specifications were owned by Oracle, there was no legal way for any other entity to continue the work on Java EE 8. In response, several Java EE vendors, including big names such as IBM and Red Hat, got together and started the MicroProfile initiative, which aimed to introduce new APIs to Java EE, with a focus on optimizing Java EE for systems based on a microservices architecture. The idea wasn't to compete with Java EE per se, but to develop new specifications in the hope that they would eventually be added to Java EE proper. In addition to the big vendors reacting to the perceived lack of progress, a grassroots organization called the Java EE Guardians was formed, led largely by prominent Java EE advocate Reza Rahman. The Java EE Guardians gave Java EE developers and advocates a united, collective voice that could urge Oracle either to keep working on Java EE 8 or to allow the community to continue the work themselves. Nobody can say for sure how much influence the MicroProfile initiative and the Java EE Guardians had, but many speculate that Java EE would never have been donated to the Eclipse Foundation had it not been for these two initiatives.
One Standard, Multiple Implementations It is worth mentioning that Java EE is not a framework per se, but a set of specifications for various APIs. Examples of Java EE specifications include the Java API for RESTful Web Services (JAX-RS), Contexts and Dependency Injection (CDI), and the Java Persistence API (JPA). There are several implementations of Java EE, commonly known as application servers or runtimes; examples include WebLogic, JBoss, WebSphere, Apache TomEE, GlassFish, and Payara. Since all of these implement the Java EE specifications, code written against one of these servers can easily be migrated to another, with minimal or no modifications, so coding against the Java EE standard provides protection against vendor lock-in. Once Jakarta EE is completely migrated to the Eclipse Foundation, it will continue being a specification with multiple implementations, keeping one of the biggest benefits of Java EE. To become Java EE certified, application server vendors had to pay Oracle a fee to obtain a Technology Compatibility Kit (TCK), a set of tests vendors can use to make sure their products comply 100% with the Java EE specification. The fact that the TCK is closed source and not publicly available has been a source of controversy in the Java EE community; it is expected that the TCK will be made publicly available once the transition to the Eclipse Foundation is complete. From Java EE to Jakarta EE Once the announcement of the donation was made, it became clear that for legal reasons Java EE would have to be renamed, as Oracle owns the "Java" trademark. The Eclipse Foundation requested input from the community, and hundreds of suggestions were submitted. The Foundation made it clear that naming such a big project is no easy task: there are several constraints that may not be obvious to the casual observer, such as that the name must not be trademarked in any country, it must be catchy, and it must not spell profanity in any language.
Out of hundreds of suggestions, the Eclipse Foundation narrowed the choices down to two, "Enterprise Profile" and "Jakarta EE", and had the community vote for their favorite. "Jakarta EE" won by a fairly large margin. It is worth mentioning that the name "Jakarta" carries a bit of history in the Java world, as it used to be an umbrella project under the Apache Foundation. Several very popular Java tools and libraries used to fall under the Jakarta umbrella, such as the Ant build tool, the Struts MVC framework, and many others. Where we are in the transition Ever since the announcement, the Eclipse Foundation, along with the Java EE community at large, has been furiously working on transitioning Java EE to the Eclipse Foundation. Transitioning such a huge and far-reaching project to an open source foundation is a massive undertaking, and as such it takes some time. Progress so far includes relicensing all Oracle-led Java EE technologies, including reference implementations (RIs), Technology Compatibility Kits (TCKs), and project documentation. 39 projects have been created under the Jakarta EE umbrella, corresponding to the 39 Java EE specifications being donated to the Eclipse Foundation. Reference Implementations Each Java EE specification must include a reference implementation, which proves that the requirements of the specification can be met by actual code. For example, the reference implementation for JSF is called Mojarra, the CDI reference implementation is called Weld, and the JPA reference implementation is called EclipseLink. Similarly, all other Java EE specifications have a corresponding reference implementation.
These 39 projects are in different stages of completion. A small minority are still in the proposal stage; some have provisioned committers and other resources, but their code and other artifacts haven't been transitioned yet; the majority have had the initial contribution (code and related content) committed to the Eclipse Foundation's Git repository; and a few have had their first Release Review, which is a formal announcement of the project's release to the Eclipse Foundation and a request for feedback. The current status of all 39 projects can be found at https://www.eclipse.org/ee4j/status.php. Additionally, the Jakarta EE working group was established, which includes Java EE implementation vendors, companies that either rely on Java EE or provide products or services complementary to it, and individuals interested in advancing Jakarta EE. It is worth noting that Pivotal, the company behind the popular Spring Framework, has joined the Jakarta EE Working Group. This is worth pointing out because the Spring Framework and Java EE have traditionally been perceived as competing technologies. With Pivotal joining the Jakarta EE Working Group, some speculate that "the feud may soon be over", with Jakarta EE and Spring cooperating with each other instead of competing. At the time of writing, it has been almost a year since the announcement that Java EE is moving to the Eclipse Foundation, and some may be wondering what is holding up the process. Transitioning a project of such massive scale as Java EE involves several tasks that may not be obvious to the casual observer, both tasks related to legal compliance and technical tasks. For example, each individual source code file needs to be inspected to make sure it has the correct license header, and the project dependencies for each API need to be analyzed.
For legal reasons, some of the Java EE technologies need to be renamed, and appropriate names need to be found. Additionally, build environments need to be created for each project under the Eclipse Foundation infrastructure. In short, there is more work than meets the eye. What to expect when the transition is complete The first release of Jakarta EE will be 100% compatible with Java EE; existing Java EE applications, application servers, and runtimes will also be Jakarta EE compliant. Sometime after the announcement, the Eclipse Foundation surveyed the Java EE community about the direction Jakarta EE should take under the Foundation's guidance. The community overwhelmingly stated that it wants better support for cloud deployment, as well as better support for microservices. As such, expect Jakarta EE to evolve to better support these technologies. Representatives from the Eclipse Foundation have stated that the release cadence for Jakarta EE will be more frequent than it was for Java EE under Oracle. In summary, the first version of Jakarta EE will be an open version of Java EE 8; after that, we can expect better support for cloud and microservices development, as well as a faster release cadence. Help Create the Future of Jakarta EE Anyone, from large corporations to individual contributors, can contribute to Jakarta EE. I would like to invite interested readers to contribute! Here are a few ways to do so: subscribe to the Jakarta EE community mailing list, jakarta.ee-community@eclipse.org, and contribute to EE4J projects at https://github.com/eclipse-ee4j. You can also keep up to date with the latest Jakarta EE happenings by following Jakarta EE on Twitter at @JakartaEE or by visiting the Jakarta EE web site at https://jakarta.ee. About the Author David R. Heffelfinger is an independent consultant based in the Washington D.C. area. He is a Java Champion, an Apache NetBeans committer, and a former member of the JavaOne content committee.
He has written several books on Java EE, application servers, NetBeans, and JasperReports. David is a frequent speaker at software development conferences such as JavaOne, Oracle Code and NetBeans Day. You can follow him on Twitter at @ensode.  


"Microservices require a high-level vision to shape the direction of the system in the long term," says Jaime Buelta

Bhagyashree R
25 Nov 2019
9 min read
Looking back four or five years, the sentiment around the microservices architecture has changed quite a bit. First came the hype phase: after seeing the success stories of companies like Netflix, Amazon, and Gilt.com, developers thought that microservices were the de facto standard for application development. Cut to now, and we have realized that microservices are yet another architectural style which, when applied to the right problem in the right way, works amazingly well but comes with its own pros and cons. To get an understanding of what exactly microservices are, when we should use them, and when not to, we sat down with Jaime Buelta, the author of Hands-On Docker for Microservices with Python. Along with explaining microservices and their benefits, Buelta shared some best practices developers should keep in mind if they decide to migrate their monoliths to microservices. Further Learning Before jumping into microservices, Buelta recommends building solid foundations in general software architecture and web services. "They'll be very useful when dealing with microservices and afterward," he says. Buelta's book, Hands-On Docker for Microservices with Python, aims to guide you on your journey of building microservices. In the book, you'll learn how to structure big systems, encapsulate them using Docker, and deploy them using Kubernetes. Microservices: The benefits and risks A traditional monolith application encloses all its capabilities in a single unit. By contrast, in the microservices architecture the application is divided into smaller standalone services that are independently deployable, upgradeable, and replaceable. Each microservice is built for a single business purpose and communicates with other microservices through lightweight mechanisms.
Buelta explains, "Microservice architecture is a way of structuring a system, where several independent services communicate with each other in a well-defined way (typically through web RESTful services). The key element is that each microservice can be updated and deployed independently." The microservices architecture not only dictates how you build your application but also how your team is organized. "Though [it] is normally described in terms of the involved technologies, it's also an organizational structure. Each independent team can take full ownership of a microservice. This allows organizations to grow without developers clashing with each other," he adds. One of the key benefits of microservices is that they enable innovation without much impact on the system as a whole. With microservices, you can scale horizontally, keep strong module boundaries, use diverse technologies, and develop in parallel. Coming to the risks associated with microservices, Buelta said, "The main risk in its adoption, especially when coming from a monolith, is to make a design where the services are not truly independent. This generates an overhead and complexity increase in inter-service communication." He adds, "Microservices require a high-level vision to shape the direction of the system in the long term. My recommendation to organizations moving towards this kind of structure is to put someone in charge of the 'big picture'. You don't want to lose sight of the forest for the trees." Migrating from monoliths to microservices Martin Fowler, a renowned author and software consultant, advises a "monolith-first" approach, because using the microservices architecture from the get-go can be risky; it is mostly suited to large systems and large teams. Buelta shared his perspective: "The main metric to start thinking about this kind of migration is raw team size.
For small teams, it is not worth it, as developers understand everything that is going on and can ask the person sitting right across the room any question. A monolith works great in these situations, and that's why virtually every system starts like this." This echoes Amazon's "two-pizza team" rule, which says that if the team responsible for one microservice couldn't be fed with two pizzas, it is too big. "As business and teams grow, they need better coordination. Developers start stepping on each other's toes often. Knowing the intent of a particular piece of code is trickier. Migrating then makes sense to give some separation of function and clarity. Each team can set its own objectives and work mostly on its own, presenting a clear external interface. But for this to make sense, there should be a critical mass of developers," he adds. Best practices to follow when migrating to microservices When asked what best practices developers should follow when migrating to microservices, Buelta said, "The key to a successful microservice architecture is that each service is as independent as possible." A question that arises here is: how can you make the services independent? "The best way to discover the interdependence of the system is to think in terms of new features: if there's a new feature, can it be implemented by changing a single service? What kind of features are the ones that will require coordination of several microservices? Are they common requests, or are they rare? No design will be perfect, but at least this will help make informed decisions," explains Buelta. Buelta advises doing it right instead of doing it twice: "Once the migration is done, making changes on the boundaries of the microservices is difficult. It's worth investing time in the initial phase of the project," he adds. Migrating from one architectural pattern to another is a big change.
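The "clear external interface" idea above can be sketched with nothing but the Python standard library. Below, a hypothetical inventory service (the service name, routes, and data are invented for illustration) is written as a plain WSGI application, so it could run under any WSGI server; other services, or tests, interact with it only through its small HTTP contract.

```python
import json

# Illustrative in-memory data for the hypothetical service.
STOCK = {"sku-1": 12, "sku-2": 0}

def inventory_service(environ, start_response):
    """A tiny WSGI app: GET /<sku> returns the stock level as JSON."""
    sku = environ.get("PATH_INFO", "").lstrip("/")
    if sku in STOCK:
        body = json.dumps({"sku": sku, "in_stock": STOCK[sku]}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = b'{"error": "unknown sku"}'
        start_response("404 Not Found", [("Content-Type", "application/json")])
    return [body]

def call(app, path):
    """Exercise the contract directly, the way a test client would."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    body = b"".join(app({"PATH_INFO": path}, start_response))
    return captured["status"], json.loads(body)

print(call(inventory_service, "/sku-1"))  # ('200 OK', {'sku': 'sku-1', 'in_stock': 12})
```

As long as the contract (paths and JSON shape) stays stable, the internals of the service can be rewritten, redeployed, or even reimplemented in another language without touching its consumers, which is exactly the independence Buelta stresses.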
We asked what challenges he and his team faced during the process, to which he said, [box type="shadow" align="" class="" width=""]"The most difficult challenge is actually people. They tend to be underestimated, but moving into microservices is actually changing the way people work. Not an easy task!"[/box] He adds, "I've faced some of these problems, like having to give enough training and support for developers, especially explaining the rationale behind some of the changes. This helps developers understand the whys of the change they find so frustrating. For example, a common complaint moving from a monolith is having to coordinate deployments that used to be a single monolith release. This needs more thought to ensure backward compatibility and minimize risk. This sometimes is not immediately obvious, and needs to be explained." On choosing Docker, Kubernetes, and Python as his technology stack We asked Buelta what technologies he prefers for implementing microservices. For language, his answer was simple: "Python is a very natural choice for me. It's my favorite programming language!" He adds, "It's very well suited for the task. Not only is it readable and easy to use, but it also has ample support for web development. On top of that, it has a vibrant ecosystem of third-party modules for any conceivable demand. These demands include connecting to other systems like databases, external APIs, etc." Docker is often touted as one of the most important tools when it comes to microservices. Buelta explained why: "Docker allows you to encapsulate and replicate the application in a convenient standard package. This reduces uncertainty and environment complexity. It greatly simplifies the move from development to production for applications. It also helps in reducing hardware utilization. You can fit multiple containers with different environments, even different operating systems, in the same physical box or virtual machine."
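The "well-defined external interface" Buelta keeps returning to can be sketched with nothing but Python's standard library. This is an illustrative sketch, not code from the interview; the service name, port, and `/v1/health` endpoint are my own assumptions:

```python
# A minimal microservice sketch: one small service, one versioned HTTP
# endpoint, using only the standard library. All names are illustrative.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The external interface: a single, versioned endpoint.
        if self.path == "/v1/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port):
    """Start the service on a background thread and return the server."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve(8123)
    with urlopen("http://127.0.0.1:8123/v1/health") as resp:
        print(resp.status, json.loads(resp.read()))
    server.shutdown()
```

Because the contract is just HTTP and JSON, the code behind the endpoint could later be rewritten, or even replaced by a service in another language, without callers noticing.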
For Kubernetes, he said, "Finally, Kubernetes allows us to deploy multiple Docker containers working in a coordinated fashion. It forces you to think in a clustered way, keeping the production environment in mind. It also allows us to define the cluster using code, so new deployments or configuration changes are defined in files. All this enables techniques like GitOps, which I described in the book, storing the full configuration in source control. This makes any change specific and reversible, as changes are regular git merges. It also makes recovering or duplicating infrastructure from scratch easy." "There is a bit of a learning curve involved in Docker and Kubernetes, but it's totally worth it. Both are very powerful tools. And they encourage you to work in a way that's suited for avoiding downfalls in production," he shared. On multilingual microservices Microservices allow you to use diverse technologies, as each microservice ideally is handled by an independent team. Buelta shared his opinion regarding multilingual microservices: "Multilingual microservices are great! That's one of their greatest advantages. A typical example of this is migrating legacy code written in one language to another. A microservice can replace another that exposes the same external interface, all while being completely different internally. I've done migrations from old PHP apps to replace them with Python apps, for example." He adds, "As an organization, working with two or more frameworks at the same time can help you understand both of them better, and when to use one or the other." Though using multilingual microservices is a great advantage, it can also increase the operational overhead. Buelta advises, "A balance needs to be struck, though. It doesn't make sense to use a different tool each time and not be able to share knowledge across teams.
The specific numbers may depend on company size, but in general, more than two or three should require a good explanation of why there’s a new tool that needs to be introduced in the stack. Keeping tools at a reasonable level also helps to share knowledge and how to use them most effectively." About the author Jaime Buelta has been a professional programmer and a full-time Python developer who has been exposed to a lot of different technologies over his career. He has developed software for a variety of fields and industries, including aerospace, networking and communications, industrial SCADA systems, video game online services, and financial services. As part of these companies, he worked closely with various functional areas, such as marketing, management, sales, and game design, helping the companies achieve their goals. He is a strong proponent of automating everything and making computers do most of the heavy lifting so users can focus on the important stuff. He is currently living in Dublin, Ireland, and has been a regular speaker at PyCon Ireland. Check out Buelta’s book, Hands-On Docker for Microservices with Python on PacktPub. In this book, you will learn how to build production-grade microservices as well as orchestrate a complex system of services using containers. Follow Jaime Buelta on Twitter: @jaimebuelta. Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview] Yuri Shkuro on Observability challenges in microservices and cloud-native applications
Packt Editorial Staff
17 Apr 2018
3 min read

What is the Reactive Manifesto?

The Reactive Manifesto is a document that defines the core principles of reactive programming. It was first released in 2013 by a group of developers led by Jonas Bonér (you can find him on Twitter: @jboner). Jonas wrote this in a blog post explaining the reasons behind the manifesto: "Application requirements have changed dramatically in recent years. Both from a runtime environment perspective, with multicore and cloud computing architectures nowadays being the norm, as well as from a user requirements perspective, with tighter SLAs in terms of lower latency, higher throughput, availability and close to linear scalability. This all demands writing applications in a fundamentally different way than what most programmers are used to." A number of high-profile programmers signed the Reactive Manifesto. Some of the names behind it include Erik Meijer, Martin Odersky, Greg Young, Martin Thompson, and Roland Kuhn. A second, updated version of the Reactive Manifesto was released in 2014 - to date more than 22,000 people have signed it.

The Reactive Manifesto underpins the principles of reactive programming

You can think of it as the map to the treasure of reactive programming, or like the bible for the programmers of the reactive programming religion. Everyone starting with reactive programming should have a read of the manifesto to understand what reactive programming is all about and what its principles are.

The 4 principles of the Reactive Manifesto

Reactive systems must be responsive: The system should respond in a timely manner. Responsive systems focus on providing rapid and consistent response times, so they deliver a consistent quality of service.

Reactive systems must be resilient: In case the system faces any failure, it should stay responsive. Resilience is achieved by replication, containment, isolation, and delegation.
Failures are contained within each component, isolating components from each other, so when a failure occurs in a component, it will not affect the other components or the system as a whole.

Reactive systems must be elastic: Reactive systems can react to changes and stay responsive under varying workload. They achieve elasticity in a cost-effective way on commodity hardware and software platforms.

Reactive systems must be message driven: In order to establish the resilience principle, reactive systems need to establish a boundary between components by relying on asynchronous message passing.

Those are the core principles behind reactive programming put forward by the manifesto. But there's something else that supports the thinking behind reactive programming: the standard specification on Reactive Streams.

Reactive Streams standard specifications

Everything in the reactive world is accomplished with the help of Reactive Streams. In 2013, Netflix, Pivotal, and Lightbend (previously known as Typesafe) felt the need for a standard specification for Reactive Streams, as reactive programming was beginning to spread and more frameworks for reactive programming were starting to emerge. So they started the initiative that resulted in the Reactive Streams standard specification, which is now being implemented across various frameworks and platforms. You can take a look at the Reactive Streams standard specification here. This post has been adapted from Reactive Programming in Kotlin. Find it on the Packt store here.
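The message-driven principle is the one that translates most directly into code. Here is a minimal sketch using Python's asyncio (my choice of language for illustration; the manifesto itself is language-neutral): the two components never call each other directly, they only exchange messages through a queue, which acts as the boundary the manifesto describes.

```python
# A minimal sketch of the message-driven principle: a producer and a
# consumer that share nothing but a queue of asynchronous messages.
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    for i in range(3):
        await queue.put(f"event-{i}")   # fire-and-forget message passing
    await queue.put(None)               # sentinel: no more messages

async def consumer(queue: asyncio.Queue) -> list:
    received = []
    while True:
        msg = await queue.get()
        if msg is None:
            break
        received.append(msg)
    return received

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    # Both components run concurrently; neither blocks on the other.
    _, received = await asyncio.gather(producer(queue), consumer(queue))
    return received

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Because the only coupling is the message format, either side can fail, be replaced, or be scaled out without the other needing to know, which is how message passing supports the resilience and elasticity principles as well.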
Richard Gall
11 Sep 2018
6 min read

What the EU Copyright Directive means for developers - and what you can do

Tomorrow, on Wednesday 12 September, the European Parliament will vote on amendments to the EU Copyright Bill, first proposed back in September 2016. This bill could have a huge impact on open source, software engineering, and even the future of the internet. Back in July, MEPs voted down a digital copyright bill that was incredibly restrictive. It asserted the rights of large media organizations to tightly control links to their stories and to impose copyright filters on user-generated content. https://twitter.com/EFF/status/1014815462155153408 The vote tomorrow is an opportunity to amend aspects of the directive - that means many of the elements that were rejected in July could still find their way through. What parts of the EU copyright directive are most important for software developers? There are some positive aspects of the directive. To a certain extent, it could be seen as evidence of the European Union continuing a broader project to protect citizens by updating digital legislation - a move that GDPR began back in May 2018. However, there are many unintended consequences of the legislation. It's unclear whether the negative impact is down to any level of malicious intent from lawmakers, or is simply reflective of a significant level of ignorance about how the web and software work. There are 3 articles within the directive that developers need to pay particular attention to. Article 13 of the EU copyright directive: copyright filters Article 13 of the directive has perhaps had the most attention. Essentially, it will require "information society service providers" - user-generated information and content platforms - to use "recognition technologies" to protect against copyright infringement. This could have a severe impact on sites like GitHub, and by extension, the very philosophy of open collaboration and sharing on which they're built.
It's for this reason that GitHub has played a big part in educating Brussels lawmakers about the possible consequences of the legislation. Last week, the platform hosted an event to discuss what can be done about tomorrow's vote. In it, Marten Mickos, CEO of cybersecurity company Hacker One, gave a keynote speech, saying that "Article 13 is just crap. It will benefit nobody but the richest, the wealthiest, the biggest - those that can spend tens of millions or hundreds of millions on building some amazing filters that will somehow know whether something is copyrighted or not." https://youtu.be/Sm_p3sf9kq4 A number of MEPs in Brussels have, fortunately, proposed changes that would exclude software development platforms and instead focus the legislation on sites where users upload music and video. However, for those that believe strongly in an open internet, even these amendments could be a small compromise that not only places an unnecessary burden on small sites that simply couldn't build functional copyright filters, but also opens a door to censorship online. A better alternative could be to ditch copyright filters and opt for licensing agreements instead. This is something put forward by German politician Julia Reda - if you're interested in policy amendments, you can read them in detail here. [caption id="attachment_22485" align="alignright" width="300"] Image via commons.wikimedia.org[/caption] Julia Reda is a member of the Pirate Party in Germany - she's a vocal advocate of internet freedoms and an important voice in the fight against much of the directive (she wants the directive to be dropped in its entirety). She's put together a complete list of amendments and alternatives here. Article 11 of the EU Copyright Directive: the "link tax" Article 11 follows the same spirit as Article 13 of the bill. It gives large press organizations more control over how their content is shared and linked to online.
It has been called the "link tax" - it could mean that you would need a license to link to content. According to news sites, this law would allow them to charge internet giants like Facebook and Google that link to their content. As Cory Doctorow points out in an article written for Motherboard in June, only smaller platforms would lose out - the likes of Facebook and Google could easily manage the cost. But there are other problems with Article 11. It could not only, as Doctorow also writes, "crush scholarly and encyclopedic projects like Wikipedia that only publish material that can be freely shared," but also "inhibit political discussions". This is because the 'link tax' will essentially allow large media organizations to fully control how and where their content is shared. "Links are facts," Doctorow argues, meaning that links are a vital component of public discourse, which allows the public to know who thinks what, and who said what. Article 3 of the EU Copyright Directive: restrictions on data mining Article 3 of the directive hasn't received as much attention as the two above, but it does nevertheless have important implications for the data mining and analytics landscape. Essentially, this portion of the directive was originally aimed at posing restrictions on the data that can be mined for insights, except in specific cases of scientific research. This was rejected by MEPs. However, it is still an area of fierce debate. Those that oppose it argue that restrictions on text and data mining could seriously hamper innovation and hold back many startups for whom data is central to the way they operate. However, given the relative success of GDPR in restoring some level of integrity to data (from a citizen's perspective), there are aspects of this article that might be worth building on as a basis for a compromise. With trust in the tech world at an all-time low, this could be a stepping stone to a more transparent and harmonious digital domain.
An open internet is worth fighting for - we all depend on it The difficulty in unpicking the directive is that it's not immediately clear whom it's defending. On the one hand, EU legislators will see this as something that defends citizens from everything that they think is wrong with the digital world (and, let's be honest, there are things that are wrong with it). Equally, those organizations lobbying for the change will, as already mentioned, want to present this as a chance to knock back tech corporations that have had it easy for too long. Ultimately, though, the intention doesn't really matter. What really matters are the consequences of this legislation, which could well be catastrophic. The important thing is that the conversation isn't owned by well-intentioned lawmakers who don't really understand what's at stake, or media conglomerates with their own interests in protecting their content from the perceived 'excesses' of a digital world whose creativity is mistaken for hostility. If you're an EU citizen, get in touch with your MEP today. Visit saveyourinternet.eu to help the campaign. Read next German OpenStreetMap protest against “Article 13” EU copyright reform making their map unusable YouTube’s CBO speaks out against Article 13 of EU’s controversial copyright law
Aarthi Kumaraswamy
03 Apr 2018
4 min read

Concurrency programming 101: Why do programmers hang by a thread?

A thread can be defined as an ordered stream of instructions that can be scheduled to run as such by operating systems. These threads typically live within processes, and consist of a program counter, a stack, and a set of registers, as well as an identifier. These threads are the smallest unit of execution to which a processor can allocate time. Threads are able to interact with shared resources, and communication is possible between multiple threads. They are also able to share memory, and read and write different memory addresses, but therein lies an issue. When two threads start sharing memory, and you have no way to guarantee the order of a thread's execution, you could start seeing issues or minor bugs that give you the wrong values or crash your system altogether. These issues are primarily caused by race conditions, an important topic for another post. The following figure shows how multiple threads can exist on multiple different CPUs: Types of threads Within a typical operating system, we typically have two distinct types of threads: User-level threads: Threads that we can actively create, run, and kill for all of our various tasks Kernel-level threads: Very low-level threads acting on behalf of the operating system Python works at the user level, and thus, everything we cover here will be primarily focused on these user-level threads. What is multithreading? When people talk about multithreaded processors, they are typically referring to a processor that can run multiple threads simultaneously, which they are able to do by utilizing a single core that is able to very quickly switch context between multiple threads. This context switching takes place in such a small amount of time that we could be forgiven for thinking that multiple threads are running in parallel when, in fact, they are not. When trying to understand multithreading, it's best if you think of a multithreaded program as an office.
In a single-threaded program, there would only be one person working in this office at all times, handling all of the work in a sequential manner. This would become an issue if we consider what happens when this solitary worker becomes bogged down with administrative paperwork, and is unable to move on to different work. They would be unable to cope, and wouldn't be able to deal with new incoming sales, thus costing our metaphorical business money. With multithreading, our single solitary worker becomes an excellent multi-tasker, and is able to work on multiple things at different times. They can make progress on some paperwork, and then switch context to a new task when something starts preventing them from doing further work on said paperwork. By being able to switch context when something is blocking them, they are able to do far more work in a shorter period of time, and thus make our business more money. In this example, it's important to note that we are still limited to only one worker or processing core. If we wanted to try and improve the amount of work that the business could do and complete work in parallel, then we would have to employ other workers or processes as we would call them in Python. Let's see a few advantages of threading: Multiple threads are excellent for speeding up blocking I/O bound programs They are lightweight in terms of memory footprint when compared to processes Threads share resources, and thus communication between them is easier There are some disadvantages too, which are as follows: CPython threads are hamstrung by the limitations of the global interpreter lock (GIL), about which we'll go into more depth in the next chapter. While communication between threads may be easier, you must be very careful not to implement code that is subject to race conditions It's computationally expensive to switch context between multiple threads. By adding multiple threads, you could see a degradation in your program's overall performance. 
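The office analogy maps directly onto Python's threading module. This minimal sketch shows the basic thread lifecycle (create, start, join) and uses a lock to avoid the race condition warned about above when threads share memory; the worker function and numbers are arbitrary:

```python
# Four user-level threads incrementing one shared counter. The lock
# guards the shared read-modify-write so the total is deterministic.
import threading

counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:           # without this, updates can be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                 # wait for all threads to finish

print(counter)               # 40000 with the lock; without it, often less
```

Note that because of the GIL mentioned above, these threads do not speed up CPU-bound loops like this one; the pattern pays off for blocking I/O, where one thread can make progress while another waits.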
This is an excerpt from the book, Learning Concurrency in Python by Elliot Forbes. To know how to deal with issues such as deadlocks and race conditions that go hand in hand with concurrent programming, be sure to check out the book.
Prasad Ramesh
31 Oct 2018
1 min read

WebAssembly - Trick or Treat?

WebAssembly is a low-level language that works in binary and sits close to machine code. It defines an AST that is represented in a binary format, and its code can also be created and debugged in a plain-text format. It made a popular appearance in many browsers last year and is catching on due to its ability to run heavier apps with speed in a browser window. There are tools and languages built for it. Why are developers excited about WebAssembly? Developers are excited because it can potentially run heavy desktop games and applications right inside your browser window. As Mozilla shares plans to bring more functionality to WebAssembly, modern day web browsing will become more robust. However, the WASM binary format poses some security threats, because WASM binary applications cannot easily be checked for tampering. Some features are even being held back from WebAssembly until it is more secure against attacks like Spectre and Meltdown.
Guest Contributor
14 Dec 2019
8 min read

How to become an exceptional Performance Engineer

Whenever I think of performance engineering, I am reminded of Amazon's CEO Jeff Bezos' statement, "Focusing on the customer makes a company more resilient." Any company that follows this consumer-focused approach has a performance engineering domain in it, though in varying capacity and form. The connection is simple. More and more businesses are becoming web-based, so they are interacting with their customers digitally. In such a scenario, if they have to provide an exceptional customer experience, they have to build resilient, stable, user-centric and high-performing web systems and applications. And to do that, they need performance engineering. What is Performance Engineering? Let me explain performance engineering with an example. Suppose your team is building an online shopping portal. The developers will build a system that allows people to access products and buy them. They will ensure that the entire transaction is smooth, uncomplicated for the user and can be done quickly. Now imagine that to promote the portal, you do a flash sale, and 1000 users come on the platform and start doing transactions simultaneously. And your system, under this load, starts performing slower, a lot of transactions fail and your users are dejected. This will directly affect your brand image, customer loyalty, and revenue. How about we fix this before such a situation occurs? That is exactly what performance engineering entails. A performance engineer would essentially take into account such scenarios and conduct load tests and check the system's performance in the development phase itself. Load tests check the behavior of your system in particular situations. A 'load' is a possible scenario that can affect the system, for instance, sale offers or peak times. If the system is able to handle the load, they will check whether it is scalable. If the system is unable to handle it, they will analyze the result, find the possible bottleneck by checking the code and try to rectify it.
So, for the above example, a performance engineer would have tested the system for 100 transactions at a time, then 500, and then 1000, and would even have gone up to one hundred thousand. Hence, performance engineering ensures crash-free operation of a system, software or application. Using processes and systematic techniques, practices, and activities, a performance engineer ensures that the performance requirements are met during the development cycle. However, this is not a blanket role. It would vary with your field of operation. The work of a performance engineer working on a web application will be a lot different than that of a database performance engineer or that of a streaming performance engineer. For each of these, your "load" would vary, but your goal is the same: ensuring that your system is resilient enough to shoulder that load. Before I dive deeper into the role of a performance engineer, I'd like to clarify the difference between a performance tester and a performance engineer. (Yes, they are not the same!) Performance Tester versus Performance Engineer Well, many people think that 2-3 years of experience as a performance tester can easily land you a performance engineering job. But no. It is a long journey, which requires much more knowledge than what a tester has. A performance tester would have testing knowledge and would know about performance analysis and performance monitoring concepts across different applications. They would essentially conduct a "load test" to check the performance, stability, and scalability of a system, and produce reports to share with the developer to work on. Their work ends here. But this is not the case for a performance engineer. A performance engineer will look for the root cause of the performance issue, work towards finding a possible solution for it, and then tune and optimize the system to sort out the issue until the performance parameters are met.
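As a rough sketch (my own illustration, not from the original article), a ramping load test like the one described can be driven with Python's concurrent.futures; `do_transaction` here is a sleep-based stand-in for a real request against the system under test, and the numbers are arbitrary:

```python
# A toy load test: fire N simulated transactions concurrently and report
# how many completed and a tail-latency figure.
import time
from concurrent.futures import ThreadPoolExecutor

def do_transaction(i: int) -> float:
    """Stand-in for one real transaction; returns its latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)                 # pretend this is a network round trip
    return time.perf_counter() - start

def load_test(n_transactions: int, concurrency: int) -> dict:
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(do_transaction, range(n_transactions)))
    latencies.sort()
    return {
        "count": len(latencies),
        "p95": latencies[int(0.95 * (len(latencies) - 1))],  # 95th percentile
    }

if __name__ == "__main__":
    # Ramp up the way the text suggests: 100 transactions, then 500.
    for load in (100, 500):
        stats = load_test(load, concurrency=50)
        print(load, stats["count"], round(stats["p95"], 3))
```

In a real load test you would replace `do_transaction` with an actual request, fail the run when the percentile exceeds your target, and keep ramping the load until the system misbehaves; that failure point is where the engineer's bottleneck analysis begins.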
Simply put, performance testing can be considered a part of performance engineering, but not the same thing. Roles and Responsibilities of a Performance Engineer Designing Effective Tests As a performance engineer, your first task is to design an effective test to check the system. I found this checklist on Dzone that is really helpful for designing tests:
Identify your goals, requirements, desires, workload model and your stakeholders.
Understand how to test concurrency, arrival rates, and scheduling.
Understand the roles of scalability, capacity, and reliability as quality attributes and requirements.
Understand how to set up/create test data and data management.
Scripting, Running Tests and Interpreting Results There are several performance testing tools available in the market, but you would have to work with different languages based on the tool you use. For instance, you'd build your tests in C and JavaScript while working with Micro Focus LoadRunner. Similarly, you'd script in Java and JavaScript for Apache JMeter. Once your test is ready, you'd run that test on your system. Make sure you use consistent metrics while running these tests, or else your results will be inaccurate. Finally, you will interpret those results. In this, you'd have to figure out what the bottlenecks are and where they are occurring. For that, you would have to read results and analyze graphs that your performance testing tool has produced and draw conclusions. Fine Tuning And Performance Optimisation Once you know what the bottleneck is and where it is occurring, you would have to find a solution to overcome it to enhance the performance of the system you are testing. (Something a performance tester won't do!) Your task is to ensure that the system/application is tuned to the level where it performs optimally at the maximum load possible. Of course, you can seek aid from a developer (backend, frontend or full-stack) working on the project to figure this out.
But as a performance engineer, you’d have to be involved actively in this fine-tuning and optimization process. There are four major skills/attributes that differentiate an exceptional performance engineer from an average one. Proves that their load results are scalable If you are a good performance engineer, you will not serve a half-cooked meal. First of all, take all possibilities into account. For instance, take the example of the same online shopping portal. If you are considering a load test for 1000 simultaneous transactions, consider it for both scenarios wherein the transactions are happening for different products or when it is happening for the same product. If your portal does a launch sale for an exclusive product that is available for a limited period, you may have too many people trying to buy it at the same time. Ask yourself if your system could withstand that load? Proves that their load results are sustainable Not just this, you should also consider whether your results are sustainable over a defined period of time. The system should operate without crashing. It is often recommended that a load test runs for 30 mins. While thirty minutes will be enough to detect most new performance changes as they are introduced, in order to make these tests legitimate, it is necessary to prove they can run for at least two hours at the same load. These time durations may vary for different programs/systems/applications. Uses Benchmarks A benchmark essentially is a point of reference based on which you can compare and assess the performance of your system. It is a set standard against which you can check the quality of your product/application/system. For some systems, like databases, standard benchmarks are readily available for you to test on. As a performance engineer, you must be aware of the performance benchmarks in your field/domain. For example, you’d find benchmarks for testing firewalls, databases, and end-to-end IT systems. 
The most commonly used benchmarking frameworks are Benchmark Framework 2.0 & TechEmpower. Understands User Behavior If you don't have an understanding of user reactions in different situations, you cannot design an effective load test. A good performance engineer knows their user demographics, understands their key behavior and knows how the user would interact with the system. While it is impossible to predict user behavior entirely (for instance, a sale may bring anywhere from 100,000 transactions per hour down to barely 100 per hour), you should check user statistics, analyze user activity, and prepare your system for optimum usage. All in all, besides strong technical skills, as a performance engineer, you must always be far-sighted. You must be able to see beyond what meets the eye and catch what others might miss. The role, invariably, requires a lot of technical expertise. But it also requires non-technical skills like problem-solving, attention to detail and insightfulness. About the Author Dr Sandeep Deshmukh is the founder and CEO at Workship. He holds a PhD from IIT Bombay, and has worked in Big Data, the Hadoop ecosystem, Distributed Systems, AI/ML, etc. for 12+ years. He has been an Engineering Manager at DataTorrent and Data Scientist with Reliance Industries.
Aaron Lazar
23 May 2018
7 min read

Abandoning Agile

"We're Agile". That's the kind of phrase I would expect from a football team, a troupe of ballet dancers or maybe a martial artist. Every time I hear it come from the mouth of a software professional, I go like "Oh boy, not again!". So here I am to talk about something that might touch a nerve or two of an Agile fan. I'm talking about whether you should be abandoning Agile once and for all! Okay, so what is Agile? Agile software development is an approach to software development where requirements and solutions evolve through a collaborative effort of self-organizing and cross-functional teams, as well as the end user. Agile advocates adaptive planning, evolutionary development, early delivery, and continuous improvement. It also encourages a rapid and flexible response to change. The Agile Manifesto was created by some of the top software gurus, the likes of Uncle Bob, Martin Fowler, et al. The values that it stands for are: Individuals and interactions over processes and tools Working software over comprehensive documentation Customer collaboration over contract negotiation Responding to change over following a plan Apart from these, it follows 12 principles, as given here, through which it aims to improve software development. At its heart, it is a mindset. So what's wrong? Honestly speaking, everything looks rosy from the outside until you've actually experienced it. Let me ask you at this point, and I'd love to hear your answers in the comments section below. Has there never been a time when you felt at least one of the 12 principles was a hindrance to your personal, as well as your team's, development process? Well, if yes, you're not alone. But before throwing the baby out with the bathwater, let's try and understand a bit and see if there's been some misinterpretation, which could be the actual culprit. Here are some common misinterpretations of what it is, what it can and cannot do.
I like to call them: The 7 Deadly Sins

#1 It changes processes
One of the main myths about Agile is that it changes processes. It doesn’t really change your processes; it changes your focus. If you’ve been having problems with your process and you feel Agile will be your knight in shining armor, think again. You need something more than just Agile and Lean. This is one of the primary reasons teams feel that Agile isn’t working for them: they’ve never understood whether they should have gone Agile at all. In other words, they don’t know why they went Agile in the first place!

#2 Agile doesn’t work for large, remote teams
The fourth principle of the Agile Manifesto states that “business people and developers must work together daily throughout the project”. Have you ever thought about how “awesome aka impractical” it is to coordinate with teams in India, all the way from the US, on a daily basis? The fact is that it’s not practically possible when teams are spread across time zones. What the principle intends is to have the entire team communicating with each other daily, and there’s always the possibility of a single point of contact who passes information on to the other team members. So no matter how large the team, if implemented the right way, Agile works. Strong communication and documentation help a great deal here.

#3 Prefer the “move fast and break things” approach
Well, personally I prefer to MFABT. Mostly because at work, I’m solely responsible for my own actions. What about when you’re part of a huge team that’s working on something together? When you take such an approach, there are always hidden costs of being wrong. Moreover, what if every time you moved fast, all you did was break things? Do you think your team’s morale would be uplifted?

#4 Sprints are counterproductive
People might argue that sprints are dumb, and what’s the point of releasing software in bits and pieces?
I think what you should actually ask is whether what you’re focusing on can really be done quicker. Faster doesn’t apply to everything. Take making babies, for example. Okay, jokes apart, you’ll realise you often need to slow things down in order to go fast, so that you reach your goal without making mistakes. At least not too many costly ones, anyway. Before you dive right into Agile, understand whether it will add value to what you do.

#5 I love micromanagement
Well, too bad for you, dude: Agile promotes self-driven, self-managed and autonomous teams that learn continuously to adapt and adjust. In enterprises where there is bureaucracy, it will not work. Bear in mind that most organizations (maybe apart from startups) are hierarchical in nature, which brings with it bureaucracy in some form or flavor.

#6 Scrum saves time
Well, yes, it does. Although if you’re a manager and think Scrum is going to cut you a couple of hours from paying attention to your individual team members, you’re wrong. The idea of Scrum is to identify where you’ve reached, what you need to do today, and whether anything might get in the way of that. Scrum is no substitute for knowing your team members’ problems and helping them overcome them.

#7 Test everything, every time
No no no no… That’s a wrong notion, which in fact wastes a lot of time. What you should actually be doing is automated regression testing. No testing is bad too; you surely don’t want bad surprises before you release!

Teams and organisations tend to get carried away by the Agile movement and imitate others without understanding whether what they’re doing actually aligns with what the business needs. Now back to what I said at the beginning: when teams say they’re agile, half of them only think they are. Agile was built for the benefit of software teams across the globe, and from what teams say, it does work wonders!
Like any long-term relationship, it takes conscious effort and time every day to make it work. Should you abandon Agile? Yes and no. If you have the slightest hint that one or more of the following are true for your organisation, you really should abandon Agile, or it will backfire:

Your team is not self-managed and lacks mature, cross-functional developers
Your customers need you to take approvals at every release stage
Not everyone in your organisation believes in Agile
Your projects are not too complex

Always remember, Agile is not a tool, and if someone is trying to sell you a tool to help you become Agile, they’re looting you. It is a mindset; a family that trusts each other, and a team that communicates effectively to get things done. My suggestion is to go ahead and become Agile only if the whole family is for it and is willing to transform together. In other words, Agile is not a panacea for all development projects. Your choice of methodology comes down to what makes the best sense for your project, your team and your organization. Don’t be afraid to abandon Agile in favor of newer approaches such as Chaos Engineering and mob programming, or even to go back to the good ol’ waterfall model.

Let us know what you think of Agile and how well your organisation has adapted to it, if it has adopted it. You can look up some fun discussions about whether it works or sucks on Hacker News:

In a nutshell, why do a lot of developers dislike Agile?
Poor Man’s Agile: Scrum in 5 Simple Steps
What is Mob Programming?
5 things that will matter in application development in 2018
Chaos Engineering: managing complexity by breaking things


Quantum computing - Trick or treat?

Prasad Ramesh
01 Nov 2018
1 min read
Quantum computing uses quantum mechanics in quantum computers to solve a diverse set of complex problems. It stores information in qubits, which, unlike classical bits, can exist in superpositions of states. Quantum computers can work through problems involving large parameter spaces with far fewer operations than a standard computer.

What is so special about quantum computing? Because these machines have the potential to work through and solve the complex problems of tomorrow, research in this area is attracting funding from everywhere. But quantum computers need a lot of physical space right now, much like the very first computers of the twentieth century. Quantum computers also pose a security threat, since they are good at factoring the large numbers that underpin today’s encryption. Quantum encryption, anyone?

Quantum computing is even available on the cloud from different companies, and there is a dedicated language, Q#, from Microsoft. Using concepts like entanglement to speed up computation, quantum computing can solve complex problems. It’s a tricky one, but I call it a treat. What about the security threat? Well, Alan Turing built a better machine to decrypt messages from another machine; we’ll let you think on that one.
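To make the qubit idea concrete, here is a minimal, illustrative sketch in plain Python (no quantum library; just the standard textbook state and gate): a single qubit is a pair of complex amplitudes, a Hadamard gate puts it into equal superposition, and measurement probabilities are the squared magnitudes of the amplitudes.

```python
import math

# A qubit is a pair of complex amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# Measuring it yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.

def hadamard(state):
    """Apply the Hadamard gate, which maps a basis state to an equal superposition."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def probabilities(state):
    alpha, beta = state
    return (abs(alpha) ** 2, abs(beta) ** 2)

zero = (1 + 0j, 0 + 0j)   # the |0> basis state
plus = hadamard(zero)     # equal superposition of |0> and |1>
p0, p1 = probabilities(plus)
print(round(p0, 3), round(p1, 3))  # -> 0.5 0.5
```

The point of the sketch is only the bookkeeping: one qubit carries two amplitudes, n qubits carry 2^n, which is where the "far fewer operations" promise comes from.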


5 application development tools that will matter in 2018

Richard Gall
13 Dec 2017
3 min read
2017 has been a hectic year, not least in application development. But it’s time to look ahead to 2018. You can read what ‘things’ we think are going to matter here, but here are the key tools we think are going to define the next 12 months in the area.

1. Kotlin
Kotlin has been one of the most notable languages of 2017. Its adoption has been dramatic over the last 12 months, and it signals significant changes in what engineers want and need from a programming language. We think it’s likely to challenge Java’s dominance throughout 2018 as more and more people adopt it. If you want a rundown of the key reasons why you should start using Kotlin, you could do a lot worse than this post on Medium. Learn Kotlin. Explore Kotlin eBooks and videos.

2. Kubernetes
Kubernetes is a tool that’s been following in the slipstream of Docker. It has been a core part of the growth of containerization, and we’re likely to see it move from strength to strength in 2018 as the technology matures and container deployments continue to grow in size and complexity. Kubernetes’ success and importance were underlined earlier this year when Docker announced that its enterprise edition would support Kubernetes. Clearly, if Docker paved the way for the container revolution, Kubernetes is consolidating it and helping teams take the next step with containerization. Find Packt’s Kubernetes eBooks and videos here.

3. Spring Cloud
This isn’t a hugely well-known tool, but 2018 might just be the year the world starts to pay it more attention. In many respects Spring Cloud is a thoroughly modern software project, perfect for a world where microservices reign supreme. Following the core principles of Spring Boot, it essentially enables you to develop distributed systems in a really efficient and clean way. Spring is interesting because it represents the way Java is responding to the growth of open source software and the decline of the more traditional enterprise system.

4. Java 9
This nicely leads us on to Java 9. Here we have a language that is thinking differently about itself, moving in a direction heavily influenced by a software culture distinct from where it belonged 5-10 years ago. The new features are enough to excite anyone who’s worked with Java before. They have all been developed to help reduce the complexity of modern development, modeled around the needs of developers in 2017 - and 2018. And they all help to radically improve the development experience - which, if you’ve been reading up, you’ll know is going to really matter for everyone in 2018. Explore Java 9 eBooks and videos here.

5. ASP.NET Core
Microsoft doesn’t always get enough attention, but it should, because a lot has changed over the last two years. Similar to Java, the organization and its wider ecosystem of software have developed in a way that moves quickly and responds to developer and market needs impressively. ASP.NET Core is evidence of that. A step forward from the formidable ASP.NET, this cross-platform framework has been created to fully meet the needs of today’s cloud-based, fully connected applications that run on microservices. It’s worth comparing it with Spring Cloud above - both will help developers build a new generation of applications, and both represent two of software’s old-guard establishment embracing the future and pushing things forward. Discover ASP.NET Core eBooks and videos.

6 Ways to blow up your Microservices!

Aaron Lazar
14 Jul 2018
6 min read
Microservices are great! They’ve solved several problems created by large monoliths - scalability, fault tolerance, and testability, among others. However, let me assure you that everything’s not rosy yet, and there are tonnes of ways you can blow your microservices to smithereens! Here are 6 sure-shot ways to meet failure with microservices, and to spice it up, I’ve included the Batman sound effects too!

Disclaimer: Unless you’re Sheldon Cooper, what is and what isn’t sarcasm should be pretty evident in this one!

#1 The Polyglot Conspiracy
One of the most talked-about benefits of the microservices pattern is that you can use a variety of tools and languages to build your application. Great! Let’s say you’re building an e-commerce website with a chat option, maybe VR/AR thrown in too, and then the necessities like a payment page, etc. Obviously you’ll want to build it with microservices. Now, you also thought you might have different teams work on the app using different languages and tools: maybe Java for the main app, Golang for some services and JavaScript for something else. Moreover, you also decided to use Angular as well as React on various components of your UI. Then one day the React team needs to fix bugs in production on Angular, because the Angular team called in sick. Your Ops team is probably pulling out their hair right now! You need to understand that different tech stacks behave differently in production! Going the microservices route doesn’t give you a free ticket to go to town on polyglot services.

#2 Sharing isn’t always Caring
Let’s assume you’ve built an app where various microservices connect to a single, shared database. It’s quite a good design decision, right? Simple, effective and what not. Now a business requirement calls for a change in the character length on one of the microservices. The team goes ahead and changes the length on one of the tables, and...
That’s not all. What if you decide to use connection pools so you can reuse connections to the database when required? Awesome choice! Now imagine your microservices run amok, submitting query after query to the database. It would knock out every other service for weeks!

#3 WET is in; DRY is out?
Well, everybody’s been saying Don’t Repeat Yourself these days - architects, developers, my mom. Okay, so you’ve built an application that’s based on event sourcing. There’s a store of events, and a microservice in your application publishes a new event to the store when something happens. For the sake of an example, let’s say it’s a customer microservice that publishes an event “in-cart” whenever the customer selects a product. Another microservice, say “account”, subscribes to that aggregate type and gets informed about the event. Now here comes the best part! Suppose your business asks for a field type to be changed. The easiest way out is to go WET (We Enjoy Typing), making the change in one microservice and copying the code to all the others. Imagine you’ve copied it to a scale of hundreds of microservices! Better still, you decided to avoid using Git and just use your event history to identify what’s wrong! You’ll be fixing bugs till you find a new job!

#4 Version Vendetta
We sometimes get carried away when we’re building microservices. You toss Kafka out of the window and build your own framework for your microservices instead. Not a bad idea at all! Okay, so you’ve designed a framework for the app that runs on event sourcing. So naturally, every microservice that’s connected will use event sourcing to communicate with the others.
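The shape of such an event-sourced setup can be sketched with a toy in-memory bus (every class, event name and field here is hypothetical, not from any real framework): subscribers register for a specific version of an event type, so an event published under a new, unannounced version has nowhere to go.

```python
# Toy in-memory event bus: handlers are keyed by (event_type, version),
# so a version bump in the publisher breaks every subscriber that
# hasn't been updated to match.

class EventBus:
    def __init__(self):
        self.handlers = {}  # (event_type, version) -> handler

    def subscribe(self, event_type, version, handler):
        self.handlers[(event_type, version)] = handler

    def publish(self, event_type, version, payload):
        handler = self.handlers.get((event_type, version))
        if handler is None:
            # An unknown version is exactly the "Version Vendetta" failure mode.
            raise ValueError(f"no handler for {event_type} v{version}")
        return handler(payload)

bus = EventBus()
bus.subscribe("in-cart", 1, lambda p: f"account noted product {p['product_id']}")

print(bus.publish("in-cart", 1, {"product_id": 42}))  # handled fine

try:
    bus.publish("in-cart", 2, {"product_id": 42, "qty": 1})  # new, unannounced version
except ValueError as e:
    print("boom:", e)
```

The fix, of course, is not copying handler code around, but publishing version changes to every subscriber as part of the release.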
One fine day, your business asked for a major change in a part of the application, which you made, and the new version of one of the microservices sends the new event to the other microservices and…

When you make a change in one microservice, you can’t be sure that all the others will keep working, unless their versions are updated too. You can make things worse by following a monolithic release plan for your microservices. You could keep your customers waiting for months to make their systems compatible, while your services sit ready, waiting for a new framework release on a monolithic schedule. An awesome recipe for customer retention!

#5 SPA Treatment!
Oh yeah, Single Page Apps are a great way to build front-end applications! So your application is built on the REST architecture and your microservices are connected to a single, massive UI. One day, your business requests a new field to be added to the UI. Now, each microservice has its own domain model and the UI has its own domain model, so you’re probably clueless about where to add the new field. So you identify some free space on the front end and slap it on! Side effects add to the fun! Imagine you’ve changed a field on one service: side effects ripple outward, passing to the next microservice, and then to the next, until they all blow up in series like dominoes. This could keep your testers busy for weeks, and no one will know where to look for the fault!

#6 Bye Bye Bye, N Sync
Let’s say you’ve used synchronous communication for your e-commerce application. What you didn’t consider was that not all your services are going to be online at the same time. An offline service, or a slow one, can lock or slow thread after thread, ultimately blowing up your entire system one service at a time! The best part is that it’s not always possible to build an asynchronous communication channel between your services.
So you’ll have to use workarounds like local caches, circuit breakers, etc.

So there you have it: six sure-shot ways to blow up your microservices and make your Testing and Ops teams go crazy! For those of you who think that microservices have killed the monolith, think again! For the brave who still wish to go ahead and build microservices, the above are examples of what to beware of when you’re building away at those microservices!

How to publish Microservice as a service onto a Docker
How to build Microservices using REST framework
Why microservices and DevOps are a match made in heaven
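The circuit-breaker workaround mentioned above can be sketched in a few lines. This is a deliberately minimal toy, not a production implementation (a real service would reach for a library such as Resilience4j or Polly): after a few consecutive failures, the breaker opens and fails fast instead of letting threads pile up against a dead downstream service.

```python
class CircuitBreaker:
    """Toy circuit breaker: after `threshold` consecutive failures the circuit
    opens, and further calls fail fast instead of waiting on a dead service."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, func, *args):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1  # count consecutive failures
            raise
        self.failures = 0       # any success closes the circuit again
        return result

def flaky_service():
    # Stand-in for a downstream call that always times out.
    raise TimeoutError("downstream service not responding")

breaker = CircuitBreaker(threshold=2)
for _ in range(4):
    try:
        breaker.call(flaky_service)
    except TimeoutError:
        print("slow failure: waited on the dead service")
    except RuntimeError as e:
        print(e)  # after 2 timeouts the breaker fails fast
```

The design choice worth noticing: the caller eats two slow timeouts, and every call after that returns immediately, which stops one dead service from tying up every thread in the system.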

Five Most Surprising Applications of IoT

Raka Mahesa
16 Aug 2017
5 min read
The Internet of Things has been growing for quite a while now. The promise of smart, connected gadgets has resulted in many, many applications of the Internet of Things. Some of these projects are useful, some are not; some, like the smart TV, smartwatch, and smart home, are expected, whereas others are not. Let’s look at a few surprising applications that tap into the Internet of Things, starting with a project from Google.

1. Google’s Project Jacquard
Simply put, Project Jacquard is a smart jacket: a literal piece of clothing you can wear that is connected to your smartphone. By tapping and swiping on the jacket sleeve, you can control the music player and map application on your smartphone. This project is actually a collaboration between Google and Levi’s, where Google invented a fabric that can read touch input and Levi’s applied the technology to a product people will actually want to wear.

Even right now, the idea of a fabric we can interact with boggles my mind. My biggest problem with wearables like the smartwatch and smart band is that they feel like another device we need to take care of. Meanwhile, a jacket is something we just wear, with its smart capability being an additional benefit. Not to mention that connected fabric allows more aspects of our daily life to be integrated with our digital life. That said, Project Jacquard is not the first smart clothing; there are other projects, like Athos, that embed sensors in their clothing. Still, Project Jacquard is the first that allows people to actually interact with their clothing.

2. Hapifork
Hapifork is actually one of the first smart gadgets I was aware of. As the name alludes, Hapifork is a smart fork with a capacitive sensor, a motion sensor, a vibration motor and a micro USB port. You might wonder why a fork needs all those bells and whistles.
Well, you see, Hapifork uses those sensors to detect your eating motion and alerts you if you are eating too fast. After all, eating too fast can cause weight gain and other physical issues, so the fork tries to help you live a healthier life. While the idea has some merit, I’m still not sure an unwieldy smart fork is a good way to make us eat healthier; I think actually eating healthy food is a better way to do that. That said, the idea of smart eating utensils is fascinating. I would totally get a smart plate capable of counting the calories in our food.

3. Smart food maker
In 2016 there was a wave of smart food-making devices that launched and successfully completed their crowdfunding campaigns. These devices are designed to make it easier and quicker for people to prepare food - even easier than using a microwave oven, that is. The problem is, these devices are pricey and each is only able to prepare a specific type of food. There is CHiP, which can bake various kinds of cookies from a set of dough, and there is Flatev, which can bake tortillas from a pod of dough. While the concept may initially sound weird, having a specific device to make a specific type of food is actually not that strange. After all, we already have a machine that only makes a cup of fresh coffee, so a machine that only makes a fresh plate of cookies could be the next natural step.

4. Smart tattoo
Of all the things that can be smart and connected, a tattoo is definitely not the first that comes to mind. But apparently that’s not the case for plenty of researchers from all over the world. There have been a couple of bleeding-edge projects that resulted in connected tattoos. L’Oreal has created tattoos that are able to detect ultraviolet exposure, and Microsoft and MIT have created tattoos that users can use to interact with smartphones.
And late last year a group of researchers created a tattoo with an accelerometer that can detect a user’s heartbeat. So far, wearables have been smart accessories that you put on daily. Since you also wear your skin every day, would it count as a wearable too?

5. Oombrella
If you ever doubted that humans are creative creatures, just remember that it was a human who invented the concept of a smart umbrella. Oombrella is a connected umbrella that will notify you when it’s about to rain and will also notify you if you’ve left it behind in a restaurant. These functionalities may sound passable at first, until you realize that the weather notification comes from your smartphone, so you just need a weather app instead of a smart umbrella. That said, this project was successfully crowdfunded, so maybe people actually do want a smart umbrella.

About the author
Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99