
Tech Guides

Express Middleware

Pedro Narciso García Revington
09 Dec 2016
6 min read
This post provides you with an introduction to Express middleware functions: what they are, and how you can apply a composability principle to combine simple middleware functions into more complicated ones. You can find the code for this article at github.com/revington/express-middleware-tutorial.

Before digging into Express and its middleware functions, let's examine the concept of a web server in Node. In Node, we do not create web applications as in PHP or ASP, but with web servers.

Hello world Node web server

Node web servers are created by passing an HTTP handler to http.createServer. This handler will have access to the request object, an instance of IncomingMessage, and the response object, an instance of ServerResponse. The following code listing (hello.js) implements this concept:

```javascript
var http = require('http');

function handler(request, response){
    response.end('hello world');
}

http.createServer(handler).listen(3000);
```

You can run this server by saving the code to a file and running the following command in your terminal:

```
$ node hello.js
```

Open http://localhost:3000 in your favourite web browser to display our greeting. Exciting, isn't it? Of course, this is not the kind of web application we are going to get paid for. Before we can add our own business logic, we want some heavy lifting done for us: cookie parsing, body parsing, routing, logging, and so on. This is where Express comes in. It will help you orchestrate all of this functionality, plus your business logic, and it will do it with two simple concepts: middleware functions and the middleware stack.

What are middleware functions?

In the context of an Express application, middleware functions are functions with access to the request, the response, and the next middleware in the pipeline. As you probably noticed, middleware functions are similar to our previous HTTP handler, but with two important differences:

- The request and response objects are augmented by Express to expose its own API
- Middleware has access to the next middleware in the stack

The latter leads us to the middleware stack concept. Middleware functions can be "stacked", which means that given two stacked middleware functions, A and B, an incoming request will be processed by A and then by B. In order to better understand these abstract concepts, we are going to implement a really simple middleware stack, shown here in the middleware-stack.js code listing:

```javascript
'use strict';
const http = require('http');

function plusOne(req, res, next){
    // Set req.counter to 0 if and only if req.counter is not defined
    req.counter = req.counter || 0;
    req.counter++;
    return next();
}

function respond(req, res){
    res.end('req.counter value = ' + req.counter);
}

function createMiddlewareStack(/* a bunch of middlewares */){
    var stack = arguments;
    return function middlewareStack(req, res){
        let i = 0;
        function next(){
            // Pick the next middleware function from the stack and
            // increase the pointer
            let currentMiddleware = stack[i];
            if(!currentMiddleware){
                return;
            }
            i = i + 1;
            currentMiddleware(req, res, next);
        }
        // Call next for the first time
        next();
    };
}

var myMiddlewareStack = createMiddlewareStack(plusOne, plusOne, respond);

function httpHandler(req, res){
    myMiddlewareStack(req, res);
}

http.createServer(httpHandler).listen(3000);
```

You can run this server with the following command:

```
$ node middleware-stack.js
```

After reloading http://localhost:3000, you should read: req.counter value = 2

Let's analyze the code. We first define the plusOne function. This is our first middleware function and, as expected, it receives three arguments: req, res, and next. The function itself is pretty simple: it ensures that the req object is augmented with the counter property, increments that property by one, and then calls the provided next() function.

The respond middleware function has a slightly different signature; the next parameter is missing. We did not include next in the signature because the res.end() function terminates the request and, therefore, there is no need to call next(). When writing a middleware function, you must either terminate the request or call next(); otherwise, the request will hang and the client will get no response. Bear in mind that calling next() or terminating the request more than once will lead to errors that are difficult to debug.

The createMiddlewareStack function is more interesting than the previous ones and explains how the middleware stack works. The first statement creates a reference to the arguments object which, in JavaScript, is an Array-like object corresponding to the arguments passed to a function. Then, we define the next() function. On each next() call, we pick a reference to the ith element of the middleware stack, which, of course, is the next middleware function on the stack. We then increment the value of i and pass control to the current middleware with req, res, and our recently created next function. We invoke next() immediately after its definition. The mechanism is simple: every time next() is invoked, the value of i is incremented and, therefore, each call passes control to the next middleware in the stack.

Once our core functionality has been defined, the next steps are pretty straightforward. myMiddlewareStack is a middleware stack composed of plusOne (twice) and respond. Then, we define a very simple HTTP handler with just one responsibility: to transfer control to our middleware stack.

Now that we have a good understanding of middleware functions and middleware stacks, we are ready to rewrite our simple application with Express. Install Express by running the following command:

```
$ npm install express
```

Create the file express.js as follows:

```javascript
'use strict';
const express = require('express'),
    app = express();

function plusOne(req, res, next){
    req.counter = req.counter || 0;
    req.counter++;
    return next();
}

function respond(req, res){
    res.end('Hello from express req.counter value = ' + req.counter);
}

app.use('/', plusOne, plusOne, respond);
app.listen(3000);
```

app.use mounts the specified middleware function(s) at the specified path, in our case "/". The path part can also be a pattern or a regular expression. Again, run this server with this command:

```
$ node express.js
```

After reloading http://localhost:3000, we should be able to read the following: "Hello from express req.counter value = 2"

So far, we have seen how:

- To create middleware functions and mount them at a given path
- To make changes to the request/response objects
- To terminate a request
- To call the next middleware

Where to go from here

The Express repository is full of examples covering topics like auth, content negotiation, sessions, cookies, and so on. A simple RESTful API can be a good project to familiarize yourself with the Express API: middleware, routing, and views, among others.
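Going one step beyond the original listing: Express also recognizes error-handling middleware by its four-argument (err, req, res, next) signature. The following sketch is my own addition (the /fail route and its messages are made up for illustration) and shows how an error passed to next() skips the remaining regular middleware and lands in a dedicated handler:

```javascript
'use strict';
const express = require('express');
const app = express();

// Hypothetical route that fails on purpose; passing an error to
// next() makes Express skip the remaining regular middleware.
app.get('/fail', function(req, res, next){
    next(new Error('something broke'));
});

// Error-handling middleware is recognized by its four-argument
// signature and runs only once an error has been passed to next().
app.use(function(err, req, res, next){
    res.status(500).end('Caught: ' + err.message);
});

app.listen(3000);
```

Requesting http://localhost:3000/fail should then return the 500 response body instead of leaving the request hanging.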
About the author

Pedro Narciso García Revington is a senior full stack developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T/B/D)DD, and polyglot persistence.


Why are open source developers more in demand than ever?

Erik Kappelman
13 Nov 2017
3 min read
There are many reasons why open source professionals are in high demand. Let's start with the obvious one: one of the biggest reasons open source professionals are in such demand today has nothing to do with the fact that they are open source professionals, but simply that technology has become a core part of just about every organization in every industry. But there's more to it than that.

Why open source developers are good for businesses

Let's talk for a minute about what "open source professional" actually means, or what it could mean. I would separate open source professionals into two broad groups that overlap quite a bit. There are people who use open source tools in professional settings, such as Node.js, Angular 2, and others. Then there are the creators of open source tools, such as those at Red Hat, Ubuntu, and Firebase.

But why are so many more people using open source tools in a professional setting? Sure, fashion is part of it, but there's a much more prosaic answer: open source tools are free to use. Technological innovation is seen as powering growth, and while the right people might cost money, taking advantage of these tools can make a transformative impact. Proprietary tools, after all, limit organizations: they require specific skill sets, and they demand you work in specific ways. It's for this reason that businesses are starting to consider the impact an open source tech strategy can have. The consequence is that these skills are more in demand than ever.

Why is open source better?

Open source is better because it allows more freedom. It empowers businesses, even very small ones, to innovate and create new solutions to problems. Traditional software, where you have vendor lock-in, doesn't enable the same degree of innovation. And even for medium-sized companies, it can still be very expensive to get the software solution you want.

The world's biggest software companies are embracing open source

The open source model is the antithesis of the monopolistic behemoth model we are accustomed to. The product is free, and its ingredients aren't a secret. This means that software becomes the best version of itself, because interest, not position, determines who develops software and how. Furthermore, because the product is free, companies need to turn elsewhere to make a profit. They usually do this by selling enterprise-level support, and I would say that their business model is working. The open source model forces competition by decentralizing power and allowing anyone with talent to get noticed very quickly. This is a recipe for success, and there is one sure way to tell: what have Microsoft, IBM, Google, Amazon, and the like been up to recently? Creating their own sets of open source tools, of course. The titans of industry are now right down here with you and me, because they know they would be missing out if they weren't.

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.


Why is everyone calling themselves a software company?

Hari Vignesh
12 Jun 2017
5 min read
If we consider the number of startups today, there are close to 400 million entrepreneurs trying to form around 300 million startups annually, and the majority of these startups are software companies. Why has software occupied so much of the market? Or is that just a myth? Let's take a look by analyzing the pros and cons of the software industry.

Why is the software market great?

Though the software sector has faced setbacks, it has always been like a horse: it gets back up very quickly. Ever since the revolution in Silicon Valley, this sector has helped a lot of people in terms of employment, productivity, and assistance. It has other advantages as well.

The need is still there

The need for software is never depleting, because of the nature of software: it is virtual. Being virtual makes it flexible, customizable, and scalable; it can be transformed or created into almost anything. So for every business there can be a software solution, and it has often proved to be the best one. As long as there is a need for software, this industry is going to rule the world.

Humongous talent pool

When there are more talented or skilled people, the industry's roadmap is set in motion for generations. The software industry holds an ecosystem of a variety of skilled personnel who strive to make an impact, and it inspires many engineers and skilled professionals with its attractive benefits.

Accelerated results

Software businesses produce accelerated results. Whether it is a success or a failure, you find out within the first few months of starting. This is once again a wonderful factor that I admire about this industry: failing fast produces the data you need for your next step.

Attractive funding

Many investors are ready to finance tech-based startups because of those accelerated results. Either they grow 5x to 10x, or they fail immediately (which means less loss for the investors).

Initial investment is less

This is not the case for all tech startups, but some startups are created by engineers themselves, and the cost of creating and deploying software need not be high (for a pure software business). For example, for a B2C app business, if the co-founders are developers, then the server cost is the only initial monetary investment (time and effort are also investments, but the context here is money).

Inspiring stories

Many successful personalities have created their own milestones, and people are genuinely inspired by them. Every tech entrepreneur wants to be the next millionaire, and they strive for that goal. Alongside the milestones that great leaders have created, this industry has built very big empires, and those empires are still growing. The point is, no limit has been set for this industry, and it expands into new dimensions every year.

Challenges in the software business

Though the above factors may give you goosebumps, this industry has its own risks too. They can be taken as risks or as challenges; it all depends on one's perspective.

Market timing

There is something called market timing, and the entire industry depends on it. Even if your business idea is awesome, you have sufficient funding, and you have a great product, if the market is not open at that time, you will end up dismantled. Market need matters a lot, and people succeed only when the need is real.

Uncertainty is high

The software market has seen strong growth, but the uncertainty is high: a downfall or recession cannot be predicted easily, and the aftermath can be catastrophic. The business also tends to be very competitive; your product can be overthrown by a competitor within a few years.

Not as easy as you think

Surviving in this industry is not as easy as you think. Though there are many success stories, across millions of startups the majority have failed. The success rate of the masses is always low, and it takes great effort to be a rock star in this industry. I feel this is because of a myth circulating in the industry recently: that anyone can be the young millionaire or the CEO. The journey is very tough if you don't have the necessary skills and you are over-ambitious.

Considering all of these factors, maybe starting a software business isn't a bad idea after all. Though it has its ups and downs, it has continuously inspired a lot of lives. No business journey is easy, and every one is full of challenges.

"Computers themselves, and software yet to be developed, will revolutionize the way we learn." —Steve Jobs

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


.NET Core and the future of .NET

Mark Price
22 Apr 2016
5 min read
.NET Core is Microsoft's new cross-platform implementation of .NET. Although the .NET Framework isn't vanishing, .NET Core is set to be the future focus of Microsoft's development platform. Born of the "need and desire to have a modern runtime that is modular and whose features and libraries can be cherry picked", the .NET Core execution engine is also open source on GitHub.

What .NET Core means for the future of working with .NET

There are three groups of programmers that need to evaluate how they work with .NET:

- Existing .NET developers who are happy with their applications as-is.
- Existing .NET developers who want to move their applications cross-platform.
- Developers new to .NET.

For existing .NET developers who have been working with the .NET Framework for the past 15 years, switching to the cross-platform .NET Core means giving up a vast number of familiar APIs and entire frameworks such as WPF and WCF. WPF is used to build Windows desktop applications, so that is no surprise; WCF, however, is the best technology for building and consuming SOAP services, so it is a shame that it can currently only be used on Windows. Microsoft has hinted that WCF might be included in future versions of .NET Core, but personally I wouldn't bet my business on it. Existing .NET developers who need to continue using technologies such as WCF should be aware that they must keep Windows as the host operating system.

For those new to .NET, I would recommend learning .NET Core first; evaluate it and see if it can provide the platform you need. Only if it cannot, look at the .NET Framework.

Today, .NET Core has not had a final release. In November 2015 Microsoft released a first release candidate (RC1) with a "go-live" licence, meaning that they support its use in production environments. Since then there have been delays, primarily to the command-line tools, and Microsoft has announced there will be an RC2 release, likely in May 2016. A date for the final release has not been set. I recommend that .NET developers not switch an existing application to .NET Core today. .NET Core should be evaluated for future projects, especially those that would benefit from the flexibility of cross-platform deployment.

Taking .NET Core open source

The decision to make .NET Core open source is important because it enables Microsoft teams to collaborate with external developers in the community and accept code contributions to improve .NET. A richer flow of ideas and solutions helps Microsoft teams step outside their bubble. Open source developers use a variety of programming languages and platforms. They are quick to learn new platforms, and Microsoft's .NET platform is one of the easiest to learn and most productive to use. The biggest challenge will be trusting that Microsoft really has embraced open source. As long as Scott Guthrie, one of the proponents of open source within Microsoft, is in charge of .NET, I believe we can trust them.

Building and working cross-platform with .NET Core

.NET Core 1.0 does not support the creation of desktop applications, for Windows or for other platforms such as Linux. For cross-platform development, .NET Core 1.0 only supports web applications and services (via ASP.NET Core) and command-line applications. .NET Core 1.0 also supports Universal Windows Platform apps, which are cross-device but limited to Microsoft Windows 10 platforms, including Xbox One and HoloLens. For cross-platform mobile development, use Xamarin. Xamarin is based on the Mono project, not on .NET Core. It is likely that Microsoft will slowly merge .NET Core and Xamarin so that in a few years Microsoft has a single ".NET Core 2.0" that supports cross-platform mobile and web development. This fits with Microsoft CEO Satya Nadella's "Mobile First, Cloud First" strategy.

.NET Core is designed to be componentized, so that only the minimum set of packages your application requires is deployed as part of a continuous integration strategy. .NET Core supports Linux and Docker to allow the virtual machine and infrastructure flexibility that DevOps needs.

Changing the landscape with "Bash on Ubuntu on Windows"

The Linux world has a more vibrant and faster-moving developer tool set than Microsoft's Windows. Although Microsoft has the powerful PowerShell command-line platform, it cannot compete with open source Bash. Bringing Bash to Windows is the final piece of a master plan that enables Linux developers to immediately feel at home on Windows, and Microsoft developers to instantly gain all the amazing tools available for Bash, using the same tools on both Windows and Linux. "Bash on Ubuntu on Windows" (BUW) means that Microsoft has provided not only the cross-platform .NET Core and the cross-platform Visual Studio Code, but also cross-platform developer command-line tools. Now a developer can work on any platform without having to learn new tools or APIs.

.NET Core and the future of Microsoft

.NET Core (along with BUW and Visual Studio Code) points to Microsoft finally recognizing that its Windows platform is a de facto legacy platform. Windows had a long 30-year run, but its reign is almost over. Microsoft's Azure is their new developer platform, and it is amazing. With Azure Stack coming soon to enable on-premise private clouds with the same features as their public cloud, I believe Microsoft is in the best position to dominate as a cloud developer platform. To enable our heterogeneous future, and to build mobile and cloud solutions, Microsoft has embraced open source and cross-platform tools and technologies. Microsoft's wonderful developer tools and technologies aren't just for Windows users any more; they are for everyone.

About the Author

Mark J. Price is a Microsoft Certified Trainer (MCT) and Microsoft Specialist in Programming in C# and Architecting Microsoft Azure Solutions, with more than 20 years of educational and programming experience. He is the author of C# 6 and .NET Core 1.0: Modern Cross-Platform Development.


The Basics of Information Architecture

Cheryl Adams
10 Oct 2016
5 min read
The truth is, there is nothing basic about technology anymore. Simple terms like computers and servers have been replaced by virtualized machines and the cloud; storage has been replaced by data lakes and pools. These terms often present a challenge for even the most seasoned technical resource, and depending on whom you ask, some technology terms can be defined in different ways.

Information architecture is truly a search-and-discovery mission, with the goal of finding all of the necessary information and organizing it in a useful way. The goal is to present a finished product that the audience or recipient can understand. A task completed by an information architect should be so well defined that it is not only scalable for growth but also repeatable for a given type of business.

So what is information architecture? It depends on whom you ask. The Information Architecture Institute defines information architecture as the practice of deciding how to arrange the parts of something to be understandable. In the case of technology, this arrangement is based on known data, facts, or information.

Let's take a closer look at the word "architecture." Merriam-Webster defines it as the manner in which the components of a computer or computer system are organized and integrated. This is really a two-fold task: organizing and integration. For example, you would not group internal employees with external customers; however, an integration tool may be needed if employees are assigned to work with external customers.

To illustrate, consider an unfinished puzzle in a box. The object of the project is to sort the pieces in such a way that you can put together a complete picture. Hobbyists may sort out similarly colored pieces, or pieces that appear to form the frame of the picture. After some period of sorting and organizing, the pieces are placed together to make a complete picture. An information architect's role can be very similar.

Let's consider another illustration by viewing this as a role in a project and walking through the responsibilities associated with it. An architect has been placed on the project team for Company XYZ. The project consists of organizing existing or new information for a high-growth company. Although the project may sound daunting, we'll focus only on the information architect's tasks.

Solid communication and writing skills are needed as an architect. As the information provider, being clear and understandable is key. The architect will use these skills to define how the information will be organized.

As an architect, tools will be selected for presentation, modeling, and workflow to define the user experience. The user experience (UX) is a person's entire experience using a particular product, system, or service. The first requirement for a great user experience is to meet the exact needs for the usage of a product or a service. These tasks are also handled by an information architect.

An intermediate to advanced understanding of the technology field, including networking, is very important. You will need to be familiar with the different terms and components of the given environment, because you will be detailing the known information in such a way that it is easy to understand. You may also discover new information in this process that needs to be documented as well. This research can be conducted through vendor and software reviews as well as online research.

As we learned earlier, one of the key aspects of the architect's job is to design based on given or discovered information. A new method or workflow may need to be clarified or redefined with respect to how an organization shares, manages, and monitors information. As with most information, it is not static, so the architect will need to determine a workflow for making it repeatable or scalable.

An information architect is instrumental in designing and defining how best to organize information across every channel and touchpoint throughout a company. The goal is to make the information easy to find, access, and leverage. By digging into the information technology, these puzzle pieces start to come together. The pieces are touchpoints that may include storage, security, servers, and applications. Some of the more challenging pieces in the box may be networking protocols and understanding how to gain access to various environments, applications, files, multiple project tools, and more. It is truly the art of organizing systems in such a way that the user experience (UX) has a solid workflow that is easy to understand and follow. The structure and content of the given system are shaped by navigation and labeling structures that enhance the user experience. Thus, the finished product of an information architect is a well-defined system, project, or service.

About the author

Cheryl Adams is a senior cloud data and infrastructure architect. Her work includes supporting healthcare data for large government contracts; deploying production-based changes through scripting, monitoring, and troubleshooting; and monitoring environments using the latest tools for databases, web servers, web APIs, and storage.


It's Time to Get Rid of That White Space

Owen Roberts
31 Dec 2015
7 min read
It's been a few years since responsive web design first burst onto the scene as a revolution in how we develop and design websites. Gone are the days of websites that looked the same no matter what device or resolution you were using; they've been replaced by reactive containers, grids, a few images, and white. Lots and lots of white. Ever since joining Packt I've started to look at every website I visit with a more critical eye, and the biggest common thread I've seen is the constant light grey or white that makes up many of the biggest websites on the web.

What to Avoid

Let's use Reddit's new beta mobile design as an example. Reddit has always been pretty minimalist when it comes to design, but the current version can be a nightmare to use at times. Have a go for yourself and check a few of the posts on the front page: buttons are far too small to be of use to most fingers, and I've often found myself clicking through to a video when I wanted to read the comments. And look at all that white space! There have been more than a few times when I've clicked on something expecting it to take me to the article, only to find that, actually, I'm meant to click slightly to the left. This is probably one of the biggest problems with white space, and for a website like Reddit, which you might visit several times a day for several different topics, it can add up to a lot of frustration for customers.

Obviously there are sometimes reasons for a white background: it's what we're used to, it's easy to read large blocks of text with, and it works with practically everything. Why wouldn't you use it?! Well, some people naturally struggle with reading large chunks of text on a white background, and in 2014 mobile officially passed the desktop as the average user's device of choice; having your website stand out on mobile while also looking good is more important than ever in the world of responsive web. New customers can be put off by the slightest thing, and in today's web a bland website can be taken as a warning sign. This is a huge shame, because it doesn't take much to let your site stand out and shine against the rest in the world of mobile.

[Image: Good use of space and fits on all devices. This is what you're aiming for. (Not really)]

Luckily, I've noticed that a few developers are finally starting to put their own spin on the common responsive design we've seen for the last three years. In this article I'll show you a few favorite twists some larger companies have introduced this year, and hopefully inspire you to experiment with ways to make your sites stand out so much more with very little effort!

Words Aren't Enough

Heavy use of images was once frowned upon for mobile sites, but as savvy mobile users get bigger data allowances, we're seeing an increase in richer mobile sites filled with auto-playing videos, GIFs, and more. White space was great for saving data when data limits were a serious concern, but with contracts offering high or unlimited data allowances we can be a bit more relaxed. When used right, pictures are far more effective than reams and reams of words, after all. So, let's look at three examples of websites I've used recently that really stood out to me, and see how they effectively cut down on white space.

GAME's website is filled to the brim with color and carefully chosen ads; in fact, three quarters of the front page is just ads, with a few other links to back it up. This makes sense from the developer's standpoint, as a shopping site needs to do two key things:

- Draw those who do not visit the site regularly to the biggest deals, in order to get them to order the big bundles and the newest (and therefore most expensive) products.
- Get customers exactly where they want to be in as few clicks as possible. The more links a user has to go through to get to what they need, the more likely they are to give up and leave, probably for Amazon, which is what you want to avoid.

The simple presentation of this page really works for it. It's colorful and really helps draw the eye to where the designers want you to go. I really like the lack of words outside of the different categories on this page as well; all this color and all these images create a professional-looking site that doesn't use a lot of mobile data, without relying on a heavy amount of white everywhere.

Deliveroo is a food delivery service for restaurants, and that really comes through in its presentation. Takeout sites can come across very differently depending on where you look (as an experiment, check the website designs and copy for Domino's, Pizza Hut, and Papa John's; they all have a distinct tone). Deliveroo's design sidesteps this, and helps all the restaurants featured on the site appear equally professional. This continues through the design scheme as well: subtle blues, with a majority of the background made up of professional-looking dishes, help entice the customer while also being simple to set up. One thing I really like about this site is that the only white space found on the page, the forms, actually draws customers immediately to them so they can get started straight away. White space is a tool, just like everything else in your toolbox, and making the best use of it is important.

Nando's, everyone's favorite meme of 2015, has probably one of the best mobile designs I've seen for a website a user may visit. Every aspect of the website aims to look good and keeps the company aesthetic baked in; material design is in full force too. The initial page still manages to be minimalist, consisting of a swipe ad and a few navigation buttons helpfully fixed to the top of the current screen. The best part is that with a simple tap of the menu button, everything a hungry soul needs appears without a fresh page load; it's quick, painless, and dynamic. Why is this great? Simply put, it manages to capture the spirit of minimalism without looking bland. The font color naturally meshes with the background, making it easier on the eyes; everything is where it needs to be for easy flicking between sections; and it's a real joy to mess around with.

Final Thoughts

I think it's important to stress again that the typical use of bare containers on a white background isn't a bad thing by itself, but we're entering a time when businesses and developers are looking to catch the eye of everyone they can, and the way to do that these days is through a powerful mobile site. You don't have to be a designer or spend a lot of money to make a few simple changes that really bring your site to the next level. Give it a shot and see what mixing things up does to your site!

What is a RESTful API?

Tess Hsu
04 Nov 2016
4 min read
Well, a RESTful API is not only about individual actions; it is a whole design pattern built around the interaction between a client and a server. Let's describe it with a scenario:

- Roles: the "browser" is the client; the "server" is the shop.
- Action: the client asks the shop, "Please give me black chocolate." The shop receives the message: "OK, got it, I will send you black chocolate."
- Result: the shop sends the client black chocolate.

You can see from this scenario that there are roles, verbs (ask, receive), and a result, and together they play out a drama called the "RESTful API". REST stands for Representational State Transfer, and it is an architectural style. There is no standard for how you build your RESTful API (if you write a drama, there should be no standard pattern, right?). Instead, it is all about the interaction between browser and server, and how you communicate with the object you want.

[Diagram: the client/shop scenario above]

So, a RESTful API covers this entire picture: the browser talks to the web server through methods such as GET, POST, PUT, and DELETE, and the browser then reacts to the result on the web page. Once you get this concept, you will see that the key point is the communication between browser and web server. How do we do this? We primarily use HTTP to connect the two. So let's look at what an HTTP request is.

What is an HTTP request?

With an HTTP request, you can send a specific request and get the specific product you want directly, or you can send a more general request and retrieve one of the products from a whole list. This depends on your request and how complex the list of products happens to be. You can look at it this way:

- Nouns: whole collections of books, or resources. In HTTP this is a URL, such as http://book.com/books/{list}.
- Verbs: the actions involving our bookshelf. You can get a book, remove a book, and so on; in HTTP you use GET, POST, PUT, and DELETE.
- Content types: the book information, such as author names, editor, and publishing date. In HTTP you represent this information in XML, JSON, or YAML format.

[Diagram: nouns, verbs, and content types in an HTTP request]

So if I want book 1, the URL to get book 1 can be http://book.com/books/1. If I want author information, such as the publish date for book 1, the URL can be http://book.com/books/1?authorInfo=John&publishDate=2016. Here authorInfo is "John" and the publish date is "2016". In JSON format, it looks like this:

```json
{
    "book": [{
        "authorInfo": "John",
        "publishDate": "2016"
    }]
}
```

With this URL, you can use GET or POST to fetch the information. If you want to know the difference between GET and POST, you can look here.

The action should return a status, right? HTTP has status codes to report the result:

- 200: OK
- 404: Not Found
- 500: Internal Server Error

Here's a link to look at the different statuses.

How do you access the API through the web browser?

You can actually open the web browser's developer tools, click on the Network tab, and see the API running through the web event binding. For example, here the resource (noun) is https://stripe-s.pointroll.com/api/data/get/376?model=touareg&zip=55344, the method is GET, and the content resource is 376?model=touareg&zip=55344, which comes back in JSON format.

And how will this information show up at our final destination, the web browser? You can use any language, but here I use JavaScript. First, load the above resource. Second, define the condition: if step 1 is successful, get the offers list deal and its title, "499/month for 36 months"; if step 1 is not successful, use the web browser to show the status. And finally, show the result on the website. The concept code looks like this:

```javascript
$('#main-menu a').click(function(event) {
    event.preventDefault();
    $.ajax(this.href, {
        success: function(data) {
            $('#main').text(data.offers.deal.title);
        },
        error: function() {
            $('#notification-bar').text('An error occurred');
            //console.log("fn xhr.status: " + xhr.status);
        }
    });
});
```

So, the final expectation is to show the title "499/month for 36 months" on the browser's web page.
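As an aside that is not in the original article: the same call can be written without jQuery using the standard fetch API. This is a minimal sketch; the URL and the data.offers.deal.title field are simply carried over from the hypothetical example above, so adjust them to match your own API:

```javascript
// A minimal fetch-based sketch of the same request. The URL and the
// offers.deal.title field mirror the example above (assumptions, not
// a documented API).
fetch('https://stripe-s.pointroll.com/api/data/get/376?model=touareg&zip=55344')
    .then(function(response){
        if(!response.ok){
            // Surface the HTTP status code (e.g. 404 or 500)
            throw new Error('HTTP status ' + response.status);
        }
        return response.json();
    })
    .then(function(data){
        document.querySelector('#main').textContent = data.offers.deal.title;
    })
    .catch(function(){
        document.querySelector('#notification-bar').textContent = 'An error occurred';
    });
```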
Conclusion

The basic RESTful API concept simply reduces the communication work between the frontend and the backend. I recommend that you explore using it yourself and see how useful it can be.

About the author

Tess Hsu is a UI designer and frontend programmer and can be found on GitHub.


NPM and Distribution Path Length Problems

Adam Lynch
07 Dec 2015
6 min read
You might have been unfortunate enough to learn that Windows has a 256-character limit on file paths. You could have run into this problem locally or on end users' machines. There's no real workaround, but there are preventive measures you can take. Even if you haven't hit it, feel free to take pleasure in reading my horror story.

npm

Neither this problem nor our solution is exclusive to Node.js, but a lot of the victims of the path-length problem were probably running Node.js on Windows. Windows users know that they often get left out in the cold by npm package maintainers, but even the design of npm itself is a problem on Windows from the get-go. npm stores your dependencies (listed in your package.json) in a node_modules directory. If those dependencies have dependencies of their own, they're stored in their own node_modules directory (i.e. your-project/node_modules/a/node_modules/b/), and so on recursively. It's nice, but in hindsight it's obviously incompatible with Windows's path-length limit.

Delete, delete, delete

Most people have probably been lucky enough to come across this problem only when trying to delete dependencies, when Windows complains that the path is too long. A simple way around this is to take a module deep down in your dependency graph (i.e. node_modules/a/node_modules/b/node_modules/c/.../node_modules/h/) in Windows Explorer and move it somewhere closer to the root (e.g. node_modules/) to cut the file path down before trying to delete it again. This has to be repeated for every culprit. There are also some tools which could help; I've noticed that you can delete really long paths while using 7-Zip File Manager to browse files.

Runtime errors

If you've run into actual bugs caused by this, you could find a module halfway down the dependency graph and add it as a direct dependency of your project, so it will be installed under the top-level node_modules and not in a node_modules directory n levels deep. Make sure to install the correct version and test thoroughly. There are also a few Node modules out there which "flatten" your dependency graph. The downside to these modules is that if there is a conflict (package A depends on version 1.0.0 of package Z and package B depends on version 3.2.1 of package Z), then the latest version of the module (package Z) is used, which could be problematic. So be careful.

Can't npm fix this?

You might see people reference Windows APIs (which support long paths) as a possible fix, but it is very unlikely this will be fixed that way in npm. npm dedupe should help with this too, but it's not reliable in my experience.

Yes, they can!

This has been fixed as of npm 3.0.0 (yet to be released). Your dependencies will now be installed maximally flat. Insofar as is possible, all of your dependencies, and their dependencies, and THEIR dependencies will be installed in your project's node_modules folder with no nesting. You'll only see modules nested underneath one another when two (or more) modules have conflicting dependencies. Excuse me... *dances*. Mind you, it's a bit late for me. Unless npm 3 also ships with a time machine. More about that in a bit.

Manually checking for exceedingly long paths

Up until now, I've had to routinely check for long paths using Path Length Checker (on Windows), but a manual check is not good enough, as stuff can still slip through the net.

Introducing gulp-path-length

So there's a simple Gulp plugin to help with this: gulp-path-length. You could use it like this in a Gulp task:

```javascript
var gulp = require('gulp');
var pathLength = require('gulp-path-length');

gulp.task('default', function(){
    gulp.src('./example/path/to/directory/**', {read: false})
        .pipe(pathLength());
});
```

If all is well, nothing will happen. If you have a path exceeding 256 characters, the Gulp task will stop and an error will reveal the offending path. This is really fast either way, as Gulp doesn't need to read the contents of the files. The limit can be changed with a parameter, i.e. .pipe(pathLength({ maxLength: 50 }));. This is fine if it's just for you locally, but there are bigger fish to fry.

Distributed long paths

What if there are multiple developers working on your project? What if a developer is using Mac OS X or Linux? There could easily be false positives. It's one thing having issues locally or within a team; it's a whole other thing to have path-length problems in production on end users' machines. I've had that pleasure myself with Teamwork Chat, which we built on top of NW.js. NW.js is basically Node.js and Chromium mashed together to allow you to create desktop apps from web apps. Any NW.js app can access all of Node's core modules and any other modules you've installed from npm, for example. Therefore the npm-on-Windows path-length issue applies here too. Depending on how long the end user's username was, the user might have seen something like this when they tried to launch Teamwork Chat:

[Image: a dummy application; none of our app code is executed.]

This means no error reports and no way the app could even auto-update once a patch was released. As a maintainer of nw-builder, I know we're not the only ones who have faced this problem. Is there anything we can do? Once the code is shipped, it's too late. Luckily, in my case, we have a rough idea where the files will exist on end users' machines thanks to our Windows installer. This is where gulp-path-length's rewrite option comes in. It can be used like this to simulate path lengths:

```javascript
var gulp = require('gulp');
var pathLength = require('gulp-path-length');

gulp.task('default', function(){
    gulp.src('./example/path/to/directory/**', {read: false})
        .pipe(pathLength({
            rewrite: {
                match: './example/path/to/directory/',
                replacement: 'C:\\Users\\a-long-username-here\\AppData\\BlahBlahBlah'
            }
        }));
});
```

So it doesn't matter where you are on your filesystem or which operating system you're using; it will test the length of files in a given directory as if they were in a hypothetical directory on Windows. You could run this before you ship your code, but we've added it as a compilation build step so we catch problems as early as possible. If a Mac developer adds really long paths to the project (like an npm dependency which depends on a chain of lodash modules), they'll see right away that this will break things for some Windows users. For good measure, we also run it in a continuous integration step.
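If you want the same check without Gulp in the picture, here is a dependency-free sketch of my own (not from the article) that walks a directory tree with Node's built-in fs and path modules and reports any offending paths:

```javascript
'use strict';
var fs = require('fs');
var path = require('path');

// Recursively collect every path under `dir` that is longer
// than `maxLength` characters.
function findLongPaths(dir, maxLength, results){
    results = results || [];
    fs.readdirSync(dir).forEach(function(entry){
        var full = path.join(dir, entry);
        if(full.length > maxLength){
            results.push(full);
        }
        if(fs.statSync(full).isDirectory()){
            findLongPaths(full, maxLength, results);
        }
    });
    return results;
}

console.log(findLongPaths('./example/path/to/directory', 256));
```

Prefixing the directory argument with a hypothetical Windows install path before measuring would give you the same simulation that gulp-path-length's rewrite option provides.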
About the Author

Adam Lynch is a TeamworkChat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.


OpenStack Liberty meets Mitaka; Love in Tokyo

Fahad Siddiqui
26 Oct 2015
5 min read
Since July 2010, when OpenStack was founded by Rackspace and NASA, it has gone beyond boundaries and become one of the most outstanding open source projects around. Would you believe OpenStack helps CERN's staff store and manage their massive amounts of data? For the last five years, Stackers have contributed their time and expertise to build a reliable and ready-to-use cloud infrastructure. Today more than 1,000 companies and 30,000 individuals from 172 countries are part of the project.

"Cloud" has become a vernacular term. Most people are on the cloud, but not all end users fully understand exactly what the cloud is. It is a challenge that OpenStack helps to address every day, and today it fuels the passion and imagination of a community of thousands of developers around the globe. An OpenStack event takes place every six months to communicate and discuss the new innovations and elements for the next software release. With only a day left until the OpenStack Tokyo Summit, users, developers, cloud prophets, and enterprises are excited to discuss the new release and the future of cloud computing.

The summit is a four-day event, starting on the 27th of October, where thousands of Stackers will take part in hundreds of talks, workshops, and lots of networking. The summit is organized with two audiences in mind: the main conference is dedicated to the vast majority of attendees, from new to advanced users, while the design summit is for the technical contributors and operators who have contributed to the Mitaka release cycle. For those who don't already know, the OpenStack Foundation names its software releases in alphabetical order; the latest release, announced this July, is called "Liberty".

The last summit was held in Vancouver, where more than 6,000 attendees from 967 companies and 55 countries participated. Giants like Red Hat, HP, VMware, Dell, and IBM joined startups like Mirantis, SwiftStack, and Bluebox to make the world a more cloudy place. Highlights, keynotes, presentations, and breakout and breakthrough session videos can be found here.

This time the theme is "OpenStack-Powered Planet", which reflects a more responsible, advanced, and secure infrastructure with the OpenStack cloud. Speakers from Bitnami, Huawei, NEC, Comcast, Metacloud, and Fujitsu will discuss the power of a global network of public and private clouds, and OpenStack as a combined engine for advanced cloud computing technologies. New entrants in the Asia-Pacific cloud market will be highlighted in visionary keynotes and breakout sessions, further underlining the community's advancement in cloud infrastructure. Reliability, security, and improvement of the application ecosystem and next-gen users will be highlighted to support the theme. Insightful breakout sessions will focus on software-led networking and recent progress on the OpenStack Neutron project. Attendees will also get an insight into OpenStack's influence and development in areas such as the Internet of Everything (IoE), next-gen telecom networks, and the emerging Network Function Virtualization (NFV).

A few of the best talks and sessions that everyone is excited about include those by Amadeus, AppFormix, CERN, City Network, Comcast, Deutsche Telekom, eBay, GMO Internet, Huawei, Intel, Lithium Technologies, Kirin, NTT, PayPal, SKT, TubeMogul, WalmartLabs, Workday, and Yahoo! These companies will give presentations, share case studies, and conduct hands-on workshops and collaborative design sessions to help users skill up their knowledge and expertise with OpenStack.

A number of Packt authors will also be present at the event, some of whom are speaking at and conducting working sessions. Find out more below:

James Denton, Principal Network Architect at Rackspace and author of the first and second editions of Learning OpenStack Networking (Neutron), will talk about a few uncommon issues with Nova and Neutron. Participants will learn some basic troubleshooting procedures, including tips, tricks, and processes of elimination. You can find more about the session here.

Egle Sigler, Principal Architect on the Private Cloud team at Rackspace, OpenStack Foundation board member, and co-author of OpenStack Cloud Computing Cookbook, Third Edition, will be busy on the first two days of the event, giving talks and presentations and leading work-group sessions. At the event's opening session, she will co-present the keynote about the use case of OpenStack at Yahoo! JAPAN. Later, she will take part in a discussion on diversity in the OpenStack community and will also address a few basic as well as critical issues about DefCore. On the second day of the event she will dive deep into DefCore to discuss the latest issues with OpenStack interoperability. You can find more about the sessions here.

Sriram Subramanian, Director of Software Engineering at Juniper Networks and co-author of the latest OpenStack Networking Cookbook, will talk about ways of improving firewall performance and enhancing OpenStack FWaaS. This session will include a demo of the work in progress. You can find more about his session here.

Arthur Berezin, Director of Product at GigaSpaces and author of Production Ready OpenStack - Recipes for Successful Environments, will give a talk about how to build cloud-native microservices with hybrid workloads on OpenStack. On the second day of the event, he will discuss ways of migrating enterprise applications into OpenStack. On the third day he will go deep into hybrid cloud orchestration on OpenStack. You can find more about his sessions here.

If you're going to OpenStack Summit Tokyo, we hope you have a great time. Keep an eye on #LoveOpenStackTokyo for our take on the conference, as well as exclusive offers and content. Please also share your thoughts and tales about the event, even if you're not there, using the hashtag #LoveOpenStackTokyo. Every day one random tweet with the hashtag #LoveOpenStackTokyo will be selected, and the winner will get an eBook by one of the above-mentioned speakers (authors). Find more OpenStack tutorials and content on our dedicated page here.


The biggest App Developer salary and skills survey of 2015

Packt Publishing
03 Aug 2015
1 min read
See the highlights from our comprehensive Skill Up IT industry salary reports, with data from over 20,000 developers. What skills do you need to learn in order to earn more in application development? Download the full-size infographic here.

Is VR overrated?

Raka Mahesa
11 Apr 2017
6 min read
This post gives a short overview of what you can do, and which tools to use, to maximize the reach of the VR content you publish. It will also outline some pitfalls.

Rule No. 1 of VR: no shortcuts.
Rule No. 2 of VR: seriously, no shortcuts.

This is important. Stick to native SDKs and high-performance 3D rendering, and stay away from WebVR and JavaScript. WebVR will become a great standard one day, but right now we are months away from the sort of adoption and user-friendliness it needs to be suitable for the middle-of-the-road consumer. If you don't believe me or disagree, just head over to this Sketchfab 3D model (or any other, really) and enter VR mode on your mobile phone. Unless you are on a latest-generation Android device, you'll see a distorted mess which runs well below what anybody would call smooth. If you're absolutely in no position to use a native-code VR player, you can choose krpano (or Kolor Panotour, which is built on top of krpano) to wrap your 360 images and video. They have decent cross-platform support and some nifty workarounds in place to mitigate the most common browser-based VR pitfalls. For 3D content, use Unity 3D. There may be other tools, but, especially if you are a beginner, Unity offers the quickest results.

Serve everyone

Figure out a way to serve everyone. As I have outlined in previous posts, there are many different VR device types out there, and taken all together there's not yet a huge number of VR devices at all. Aiming to reach everyone will give you a decent audience in the end. If you set out to publish your content only on Oculus, or HTC Vive, or GearVR, you can be sure that you will exclude over 80 percent of the total audience. You should also offer a non-VR mode for accessing your content. While VR is a good way to get an extra kick out of an experience, you should never offer only a VR mode; this would exclude anyone without a VR device from your potential audience, shrinking the reach of your content dramatically. You also have to consider the social situations in which people might access your content (for example, on the bus or waiting in line at the airport), where it is not feasible to enter a VR-based view mode.

Fragmentation and pitfalls

Fragmentation in VR is huge; it means that out of 100 users who access your VR content, perhaps only 10 will use the same type and generation of VR device at the same time. Unless you are creating a VR game (where it might be practical to target only one device type at a time), you should always aim to support all the VR devices out there. Here are some pointers on how to make that feasible, along with some common pitfalls.

If your main experience is desktop-based 3D content, you should offer a 360-image or video-based tour for mobile devices. This can easily be achieved by doing a 360 screen capture of your 3D content. The 360 images/videos might offer less interaction, but most of the time this format is enough to bring across your main story points. You can always hint to the user that there's a full-blown immersive experience available on another platform, which the user can then check out or recommend to a friend.

If your experience is video-based, make sure you offer multiple resolutions of your content, including a streaming version (you can host it via YouTube). You should offer streamed variants, but also an option to download content to the device (like Netflix now does); this again helps with re-engagement in case a user first accesses the content in a setting where VR mode is not feasible but wants to revisit it later in VR. There are plenty of shareware video converters available to help you create all of those versions of your video, or you can use the free, open source tool ffmpeg (tutorial available here).
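To illustrate (the filenames and quality settings here are my own assumptions, not from the post), ffmpeg can produce several resolutions of the same clip from the command line:

```
$ ffmpeg -i input.mp4 -vf scale=-2:1080 -c:v libx264 -crf 23 -c:a aac output-1080p.mp4
$ ffmpeg -i input.mp4 -vf scale=-2:720 -c:v libx264 -crf 26 -c:a aac output-720p.mp4
```

The -vf scale filter resizes while keeping the aspect ratio (the -2 keeps the width divisible by two, which H.264 requires), and a higher -crf value trades quality for a smaller file.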
Bear in mind that while VR video is a great experience the first couple of times, it quickly wears out its novelty, so make sure to create great content and distinguish it with cool features like spatial audio.

Image-based experiences are inherently very scalable across all the different types of VR devices. They load fast and most of the time are not a big strain on either PC or mobile devices, even older ones.

Interactions, that is, hotspots or popups, need to be device-agnostic across all potential target devices. For this, your hotspots need to react to gaze-based interaction as well as touch. You should stay away from motion-based gestures (like asking the user to nod their head to access a menu), because most of the time these only work somewhat reliably. The jury is still out on the usability of direction-based menus (for example, "look down to open the menu"). Personally, I think they are a good option in combination with gaze-based interaction, but bear in mind: I have been using VR on a daily basis for a couple of years; I hardly qualify as a middle-of-the-road consumer anymore.

File size: VR assets tend to be pretty large, while at the same time the context of accessing this content is mobile. Don't expect anyone to download a standalone mobile app that is a couple of hundred megabytes in size because you baked a 4K video into it. Reduce file size and use streaming assets where possible. Provide some sort of offline fallback in case there's no active Internet connection (for example, a very low-resolution fallback video). You can keep file size down most efficiently by keeping your experiences short and sweet. VR is still not well suited to long sessions; a casual experience is best kept below 120 seconds.

Include sharing options and a feedback channel

Great content needs to be shared. Make sure your content can be shared, and include calls to action telling users what to do once they have consumed the main portion of your content. Direct them to related content, or to a website where they can learn more about what they have seen (outside VR).

Don't despair

Today, VR is still a challenge. It is a challenge to create great VR content, it's a challenge to publish and deploy it, and after that it's an even bigger one to reach 100 percent of your potential audience. Don't despair; everybody is dealing with the same issues, and there's no magic solution yet. We are working on it, however.

About the author

Andreas is the founder and CEO of Vuframe. He's been working with augmented and virtual reality on a daily basis for the past 8 years. Vuframe's mission is to democratize AR & VR by removing the tech barrier for everyone.
Keeping Innovation Alive at Oracle

Packt Publishing
23 May 2016
6 min read
Saurabh Gupta is the author of Advanced Oracle PL/SQL Developer's Guide. We spoke to him about his work at Oracle, and the organization's future in a changing and increasingly open source landscape. All of the views expressed by Saurabh are his own and do not reflect the views of Oracle.

Tell us about yourself – who are you and what do you do?

I am a database technologist with experience in database design, development and management. I work for the Database Product Management group at Oracle, where I am fortunate to work alongside some really smart minds. As part of my job, I interact and engage with the Oracle partner community to drive the adoption of database technologies like Oracle 12c, Multitenant, Database In-Memory, and Database Cloud Services in their solution landscape. I evangelize Oracle database technologies through product road shows, conferences, workshops and various user group events. I love sharing my knowledge, and I use two of the best mediums to achieve that: I have authored the first and second editions of "Oracle Advanced PL/SQL Developer Professional Guide" with Packt, and I am a regular speaker at AIOUG events like Tech Days, OTN Yatra, and SANGAM. I was selected by the IOUG committee to present at Collaborate'15. I'm a blogger and pretty active on Twitter – I tweet at @saurabhkg.

Tell us about Oracle PL/SQL. What is it for and what does it do?

PL/SQL is the procedural extension of SQL (Structured Query Language). Although SQL is the de facto industry language for querying data from a database, it doesn't support high-level programming concepts. For this reason, Oracle introduced the PL/SQL language to code business logic in the database and store it as a program for subsequent use. In its first release, alongside Oracle 6, PL/SQL was limited in its capacity, but over the years it has grown into one of the most mature high-level languages. The language is practiced by almost all Oracle professionals who work in database development and design.
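For illustration, here is a minimal sketch of the kind of stored program PL/SQL makes possible (the procedure, table and column names are invented for this example):

-- Business logic stored in the database: apply a raise to one employee.
-- (The employees table and its columns are assumptions for this sketch.)
CREATE OR REPLACE PROCEDURE give_raise (
    p_emp_id  IN NUMBER,
    p_percent IN NUMBER
) AS
BEGIN
    UPDATE employees
       SET salary = salary * (1 + p_percent / 100)
     WHERE employee_id = p_emp_id;
END give_raise;
/

Once compiled, the procedure lives in the database and any application can invoke it, for example with EXECUTE give_raise(1001, 10) from SQL*Plus.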
How has the Oracle database landscape changed over the last few years? How does it compare to other databases such as SQL Server?

In the last thirty years, Oracle has innovated; it has developed great products and has been a consistent leader in the database management space. Technologies such as Oracle Database, Real Application Clusters, Multitenant, Database In-Memory, and the smart innovations in Exadata have all gained huge traction within Oracle's partner and customer community. A few years ago, Oracle focused on its engineered systems family, a smart blend of hardware and software. The Oracle Exadata database machine is an engineered system that runs the Oracle database in a grid computing model; its smart, innovative features help address scenarios such as database consolidation, mixed workloads, and resource management.

With the industry paradigm shifting to cloud services, Oracle's service offerings span all tiers of the enterprise IT landscape, including Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). Within PaaS, Oracle is investing a lot of effort in strengthening its data management portfolio. The message is loud and clear: Oracle's cloud services are designed to cater to the needs of enterprise data-driven applications. They include Oracle Database Cloud Service, Oracle Big Data Cloud Service, Oracle NoSQL Cloud Service, Oracle Big Data Preparation Cloud Service, Oracle Big Data Discovery Cloud Service, and Oracle Big Data SQL.

Oracle Big Data SQL, a cutting-edge technology, is a superset of SQL that allows end users to issue a query against data lying in different data artefacts, i.e. HDFS, Hive, NoSQL or an RDBMS. On Exadata, Big Data SQL performance is complemented by smart features like smart scan.

What are the biggest challenges you face – in terms of software and broader business pressures?

In the software industry, the biggest challenge is to keep innovation alive and to control adoption timelines. If you look at IT industry trends, for any given problem a consumer has multiple solutions. This state of "flux" pushes software vendors to the limit as they consider what makes their product distinctive. This is where innovation comes in and gives a product its edge; if you compromise on innovation, you lose the market in no time. At the same time, it is important to control the adoption rate in the market: the user community has to be briefed and empowered to work effectively with the product. Other challenges to keep in mind are cost effectiveness, product marketing, rollout strategies, and supporting the community.

What do you think the future holds for Oracle? Can it remain relevant in a world where open-source software is mainstream?

Looking at the recent technology landscape, cloud services seem to be a central pillar of future roadmaps. Oracle has announced the Oracle Public Cloud Machine, which brings the Oracle Public Cloud's PaaS and IaaS capabilities to the partner's data center, shielded behind the company's firewall. Oracle is also taking initiatives to nurture startups: it recently announced the Oracle Startup Cloud Accelerator program, which will help IT startups embrace Oracle cloud services by providing resources like technology, mentoring, go-to-market strategy, investors, and incubation centres.

There is no doubt that the open-source community has grown steadily over the years; developers across the world share product code and help develop free software. However, I disagree with the assertion that "open-source software is mainstream". For enterprise-level IT management, you have to pick products that are secure, compliant, and manageable, and that offer support services. Most open source databases are developer-managed and create a dependency on a limited set of resources. This is where commercial products have the edge: they satisfy compliance requirements, provide technical and business support, and invest heavily in innovation. Organizations should do a thorough TCO/ROI study before adopting open source products in their IT mainstream. Oracle also offers some of the leading open source solutions for development; you can find a list of Oracle's open source initiatives here.

Find Saurabh's book – Advanced Oracle PL/SQL Developer's Guide, Second Edition – here.
Diving into Juju

Adam Israel
17 Nov 2015
4 min read
In my last post, I introduced you to Juju and talked about how it could help you. Now I'd like to walk you through some real world examples of Juju in action.

Workloads

Bundles are used to represent workloads. Bundles can be simple, like the Wordpress example in the previous post, or complex.

OpenStack

If you're not familiar with OpenStack, it is a collection of services for managing a cloud computing platform. Think Infrastructure as a Service (IaaS). I've heard horror stories from people who've spent months trying to deploy and configure OpenStack. Several different tools for automating an OpenStack deployment have been developed, and this video from the October 2015 OpenStack Summit compares them (with Juju a strong favorite). How easy is it to install OpenStack? It's as easy as this:

$ juju quickstart openstack-base

Sit back and wait, and soon you'll have an OpenStack environment with Keystone, Glance, Ceph, Nova-compute and more, ready for testing.

Big Data

This is the area I'm personally most excited about. Big Data solutions are all aimed at taking some of the complexity out of analysing huge datasets. The base of these workloads is Apache Hadoop, bundled with tools like MapReduce, Tez, Hive, Pig and Storm. Drink from the Twitter firehose with Flume. Crunch open data from your favorite city or government to spot trends in voter turnout, or track neighborhood gentrification or crime rates.

Containers

Containers are the new hot thing, but there's no reason why you can't use Juju to orchestrate their deployment. Docker? No problem. Kubernetes? Juju does that, too. There are advantages to containerizing your application: it gives you a nice layer of isolation, and the addition of container networking with Flannel makes it even more powerful. Juju steps in to complement the benefits of a container by offering a way to manage and scale them in the cloud. As a developer, you can write a Dockerfile to launch your application, and use the Docker charm to deploy it.

Things at scale

For a typical development workflow, you may only need the bare minimum of machines to run your application. Once you deploy to the cloud for production use, you're going to need the ability to scale your application. For example, if your database is running slow, you can easily scale up:

$ juju add-unit mysql -n 3

This would add three new units to MySQL and configure replication and failover, things that are complicated and often fragile to do by hand.
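To give a concrete feel for the surrounding workflow, here is a minimal sketch of standing up and relating services with the Juju command line of that era (the charm and service names are illustrative):

$ juju deploy wordpress
$ juju deploy mysql
$ juju add-relation wordpress mysql
$ juju expose wordpress
$ juju status

Here, deploy fetches a charm and stands up a unit, add-relation wires the blog to its database, expose opens it to outside traffic, and status shows the units coming up; add-unit, as above, then scales whichever service needs it.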
Benchmarking

The cloud offers a dizzying array of hardware options: spinning rust or SSD; lots of memory, or CPU, or both; 1 or 10 Gigabit networking. You can speculate about which options are best suited to your application, but even the most well-informed guesses can be wrong when put into practice. Benchmarking provides the ability to exercise a service in order to evaluate its performance, and to collect hardware and software statistics that show how your workload is performing. Maybe you want to test your database under load, stress your web application, or identify potential bottlenecks. Could it be disk or network I/O slowing you down? Is it poorly optimized database queries? This is the tool you'll want to use to answer those questions.

Workloads are complex things, with many moving parts. Like the Hydra, bottlenecks are a shifting target; strike down one and two more rise to take its place. In order to tune workloads, I've gone hunting for blog posts or white papers showing best practices for the services I use. I'm often frustrated, though, because all the pretty graphs in the world don't help me if I can't replicate the results. It leads to a trust issue: sure, it ran fast for you, but how do I recreate it? Benchmarking's focus on repeatable, reliable testing means that you can run benchmarks over and over again and expect to see similar results. You can then make adjustments to your hardware or software, repeat the benchmark and compare the results. That effort can then be distilled into best practices that anyone using or deploying a service can benefit from.

Conclusions

Juju is a robust devops tool that reduces the complexity of cloud development and orchestration. Its growing community of users and contributors, including IBM, Intel, Microsoft, Cisco and China Telecom, means it's going to be around for a long time. Dig deeper into the best DevOps solutions with a look at the best command line tools in our article – read it now!

About the author

Adam Israel has worn many hats over the past twenty years, from help desk to Chief Technical Officer, from point-of-sale software to search engines and ad server platforms. He currently works at Canonical, Ltd as a Software Engineer, with a focus on cloud development and operations.
All hail our robot overlords?

Owen Roberts
23 Mar 2016
3 min read
Imagine for a second that the year is 20XX. The remnants of humanity eke out a limited existence in the shadow of the ruins of Neo New London. For years the new ruling class, the machines, have ruled over the planet with a cold mechanical grip, hunting stragglers with ruthless tactics and superior Go-playing strategies.

Sounds horrible, doesn't it? Even worse, according to most news articles you might have read over the last few weeks, we're just a few years away from this grimdark reality. Just two weeks ago Google's AlphaGo won 4 out of 5 games against Lee Sedol, one of the greatest Go players currently in the world (Sedol won the 4th match). It's an interesting look at how far we've come in the world of AI, as well as how far we've still got to go – the turning point in the 4th match, for instance, was when AlphaGo made a mistake and did not realize it for 10 whole turns. But most of the reports on the event didn't really touch on the gravitas of the match for the world of AI and how far we've come without realizing it, focusing instead on the nightmare of a robot-ruled future.

What was truly special about this match? We had previously thought we were 5 years away from this actually happening. AI has had a long history of hype and disappointment. Alan Turing predicted that machines would be able to successfully imitate humans by the year 2000, and 16 years past that deadline we've finally gotten them to master one of the most complex games ever created. It shows just how skilled Google's team is at solving such a complex problem, and gives us a look at the practical uses AI can have in the immediate future.

When it comes to AI, we've always thought either too big or too small about what it can do, but the reality, especially right now, is somewhere in between. Currently one of the best uses of AI and deep learning is Google's reverse image search function – finding sources based on matching key areas of an image. It's a small feature in the grand scheme of things, but it shows how far-reaching AI can be when applied to old problems. The implications AI will have for products like voice assistants, search engines, chat bots, 3D scanners, language translators, automobiles, drones, medical imaging systems, and more are going to be huge.

Sure, we'll eventually get around to mass-producing robots that can accurately adapt to the jobs they're given, but that's a long way away at this stage. Right now the focus of AI should not be on reaching the pop culture future we've dreamed of for years, but on specific applications that make our lives easier. The more data our current AI collects, the better the products we can create. This causes a circular effect: more users jump on these better products, in turn giving our AI more data to work with to create EVEN BETTER products for even more users to jump on.

So all in all, the immediate future of AI is going to be much smaller and less deadly than killer machines and human batteries. And even if I'm wrong and the worst happens, we can at least take peace of mind that the Great Robot War will be pretty cool.
What separates a good developer from a great developer?

Antonio Cucciniello
05 Jul 2017
4 min read
For developers looking to go to the next level and become a great developer, as opposed to someone who is merely considered a good developer, this post is for you. In my opinion there is a fine line between the two, but staying on the wrong side of it can have long-term consequences. This may also be a different route than most of you expected, because I will be talking about the various "soft" skills that make a programmer great in the long term.

Ability to learn

A software engineer's capacity to learn new topics effectively and quickly is crucial. In a world of changing frameworks and standards that evolve by the day, this skill has never been needed more. You do not want someone on your team who only knows Java and does not want to learn anything else. Just because your company is using Java today does not mean it will be using Java a few years from now; even two months from now it could be completely different. You ideally want someone who has displayed the ability to go deep with at least one language or technology, and who can transfer that depth of knowledge over to other areas at an extremely fast rate.

Positive thinker

In life, your attitude toward a situation can greatly affect where you end up. Deep technical experience alone is not enough; the way you think about the things presented to you truly affects your ability to produce daily. A positive thinker is someone who always sees the benefits that can be taken from a situation. That person is also more likely to take beneficial risks, and will produce more in the end. The mindset and attitude of such an individual influence other team members as well, and make the entire team more productive as a result.

Great interpersonal & intrapersonal skills

This skill can be described as knowing how to work with others in a positive and likeable manner: knowing how to handle yourself, your emotions, and your communication with others. You aren't alone; you must deal with others, and if you cannot do so in a positive manner, your ability to create something revolutionary will be stunted. Having this skill allows you to avoid conflicts, helps get ideas across effectively, and provides a better work environment that makes for a more productive and effective team.

Ability to teach others

Not everyone is a good teacher. As you probably already know, there are plenty of teachers who are not particularly good at teaching. It is a difficult skill to put yourself in someone else's shoes and move them toward an understanding of a specific topic. If you have a good teacher on your engineering team, you can greatly increase the level of your team's output. You do not need a PhD in linguistics or three published novels to get ideas across; you just need to be able to make things clear for the other person. Inaccessible knowledge is knowledge wasted, so ensure that you explain in a way that does not leave people guessing.

Make long term vs short term tradeoffs

Sometimes developers can solve the problems you give them, but without the perspective necessary for the larger and longer-term picture. They may be able to develop something very fast that solves an immediate problem, but build it in a way that will not allow the functionality to scale. Other times there may be a deadline to ship some code, but the engineer takes his time developing software that handles every case.
In this scenario, you want them to make a short-term tradeoff to hit the deadline and refactor later. You want someone who can make these time-versus-quality tradeoffs depending on the intricacies of the scenario.

Conclusion

Overall, there are some qualities that make a good developer transcend into a great developer: the capacity to learn, the ability to teach effectively, a positive attitude at all times, awesome communication and interpersonal skills, and the capacity to make decisions that affect both short-term and long-term results. If you enjoyed this post, I would love to hear your opinion.

About the author

Antonio Cucciniello is a Software Engineer from New Jersey with a background in C, C++ and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that lets users edit Google Drive files with their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and on GitHub here: https://github.com/acucciniello