
Tech Guides - Web Development

Intro to Meteor for JS full-stack developers

Ken Lee
14 Oct 2015
9 min read
If you are like me, a JavaScript full-stack developer, your choice of technology can feel limited when it comes to modern app/webapp development. You could choose a MEAN stack (MongoDB, Express, AngularJS, and Node.js) and learn all four technologies in order to mix and match, or employ a ready-made framework like DerbyJS. However, none of them provides the one-stop-shop experience of Meteor, which stands out from the rest of the field.

What is Meteor?

Meteor is an open-source "platform" (more than a framework) in pure JavaScript. It is built on top of Node.js, communicates via the DDP protocol, and leverages MongoDB for data storage. It gives developers the power to build modern apps/webapps with production-ready, real-time (reactive), and cross-platform (web, iOS, Android) capabilities. It was designed to be easy to learn, even for beginners, so we can focus on developing business logic and user experience rather than getting bogged down in the learning curves of the underlying technologies.

Your First Real-time App: Vote Me Up!

Below, we will look at how to build a reactive app with Meteor in 30 minutes or less.

Step 1: Installation (3-5 mins)

For OS X or Linux developers, head over to the terminal and install the official release from Meteor:

```
$ curl https://install.meteor.com/ | sh
```

For Windows developers, please download the official installer here.

Step 2: Create an app (3-5 mins)

Once Meteor is installed, we can create a new app simply by running:

```
$ meteor create voteMeUp
```

This creates a new folder named voteMeUp under the current working directory. Look inside the voteMeUp folder and you will see that three files and one folder have been created:

```
voteMeUp/
  .meteor/
  voteMeUp.html
  voteMeUp.css
  voteMeUp.js
```

.meteor is for internal use; we should not touch this folder. The other three files are obvious enough even for beginners - the HTML markup, the stylesheet, and one JavaScript file that make up the most barebones structure you can get for web/webapp development.

The default app structure tells us that Meteor gives us freedom over folder structure. We can organise files and folders however we feel appropriate, as long as we avoid the special folder names that Meteor reserves. Here, we will be using a basic folder structure for our app. You can visit the official documentation for more info on folder structure and file load order.

```
voteMeUp/
  .meteor/
  client/
    votes/
      votes.html
      votes.js
    main.html
  collections/
    votes.js
  server/
    presets.js
    publications.js
```

Meteor is a client-database-server platform. We write code for the client and server independently, and the two sides communicate through the reactive DB driver APIs, publications, and subscriptions. For this brief tutorial, we just need to pay attention to the behaviour of these folders:

- Files in the client/ folder run on the client side (the user's browser)
- Files in the server/ folder run on the server side (the Node.js server)
- Files in any other folder, e.g. collections/, run on both client and server

Step 3: Add some packages (< 3 mins)

Meteor is driven by an active community: developers around the world create reusable packages that complement app/webapp development. This is also why Meteor is well known for rapid prototyping. For brevity's sake, we will use just one package from Meteor: underscore. Underscore is a JavaScript library that provides some useful helper functions, and the package provided by Meteor is a subset of the original library.
```
$ meteor add underscore
```

There are a lot of useful packages around; some are well maintained and documented, developed by seasoned web developers around the world. Check them out:

- Iron Router/Flow Router, used for application routing
- Collection2, used for automatic validation on insert and update operations
- Kadira, a monitoring platform for your app
- Twitter Bootstrap, a popular frontend framework by Twitter

Step 4: Start the server (< 1 min)

Start the server simply by running:

```
$ meteor
```

Now we can visit http://localhost:3000. Of course, you will be staring at a blank screen! We haven't written any code yet. Let's do that next.

Step 5: Write some code (< 20 mins)

As soon as you start writing code, you will notice that the browser page reloads automatically every time you save a file. Thanks to the built-in hot code push mechanism, we don't need to refresh the page manually.

Database Collections

Let's start with the database collection(s). We will keep our app simple: we just need one collection, votes, which we will put in collections/votes.js like this:

```js
Votes = new Mongo.Collection('votes');
```

All files in the collections/ folder run on both the client and the server side. When this line of code is executed, a mongo collection is established on the server side. On the client side, a minimongo collection is established. The purpose of minimongo is to reimplement the MongoDB API against an in-memory JavaScript database. It is like a MongoDB emulator that runs inside our client browser.

Some preset data

We will need some data to start working with. We can put this in server/presets.js. These are just some random names, each with a vote count of 0 to start with:

```js
if (Votes.find().count() === 0) {
  Votes.insert({ name: "Janina Franny", voteCount: 0 });
  Votes.insert({ name: "Leigh Borivoi", voteCount: 0 });
  Votes.insert({ name: "Amon Shukri", voteCount: 0 });
  Votes.insert({ name: "Dareios Steponas", voteCount: 0 });
  Votes.insert({ name: "Franco Karl", voteCount: 0 });
}
```

Publications

Since this is for educational purposes, we will publish (Meteor.publish()) all the data to the client side in server/publications.js. You most likely would not do this for a production application. Planning publications is a major step in Meteor app/webapp development: we don't want to publish too little or too much data to the client - just enough is what we always aim for.

```js
Meteor.publish('allVotes', function() {
  return Votes.find();
});
```

Subscriptions

Once we have the publication in place, we can subscribe to it by name - allVotes, as declared in the publication above. Meteor provides template-level subscriptions, which means we can subscribe to a publication when a template is created and have it unsubscribed when the template is destroyed. We will put our subscription (Meteor.subscribe()) in client/votes/votes.js. onCreated is a callback that runs when the template named votes is created:

```js
Template.votes.onCreated(function() {
  Meteor.subscribe('allVotes');
});
```

The votes template, placed in client/votes/votes.html, is some simple markup such as the following:

```html
<template name="votes">
  <h2>All Votes</h2>
  <ul>
    {{#each sortedVotes}}
      <li>{{name}} ({{voteCount}}) <button class="btn-up-vote">Up Vote</button></li>
    {{/each}}
  </ul>
  <h3>Total votes: {{totalVotes}}</h3>
</template>
```

If you are curious about the markup with {{ and }}, enter Meteor Blaze, a powerful library for creating live-updating templates on the client side.
Similar to AngularJS and React, Blaze serves as the default front-end templating engine for Meteor, but it is simpler to use and easier to understand.

The Main Template

There must be somewhere to start our application. client/main.html is the place to kick off our template(s):

```html
<body>
  {{> votes}}
</body>
```

Helpers

In order to show all of our votes, we will need some helper functions. As you can see from the template above, we need:

- {{#each sortedVotes}}, where a loop should happen and print out the names and their votes in sorted order
- {{totalVotes}}, which is supposed to show the total vote count

We will put this code into the same file we have previously worked on, client/votes/votes.js, and the complete code should be:

```js
Template.votes.onCreated(function() {
  Meteor.subscribe('allVotes');
});

Template.votes.helpers({
  'sortedVotes': function() {
    return Votes.find({}, { sort: { voteCount: -1 } });
  },
  'totalVotes': function() {
    var votes = Votes.find();
    if (votes.count() > 0) {
      return _.reduce(votes.fetch(), function(memo, obj) {
        return memo + obj.voteCount;
      }, 0);
    }
  }
});
```

Sure enough, the helpers return all of the votes sorted in descending order (the larger numbers on top) and the sum of all votes (reduce is a function provided by underscore). This is all we need to show the vote listing. Head over to the browser, and you should see the listing on-screen!

Events

To make the app useful and reactive, we need an event that updates the listing on the fly when someone votes on a name. This can be done easily by binding an event to the 'Up Vote' button. client/votes/votes.js now gains an events map alongside the onCreated callback and helpers we wrote earlier:

```js
Template.votes.events({
  'click .btn-up-vote': function() {
    Votes.update({ _id: this._id }, { $inc: { voteCount: 1 } });
  }
});
```

This new event handler just does a quick-and-dirty update on the Votes collection, by the field name _id. Each event handler has this pointing to the current template context - i.e. the {{#each}} in the template introduces a new context - so this._id returns the _id of the current record in the collection.

Step 6: Done. Enjoy your first real-time app!

You can now visit the site with different browsers/tabs open side by side. An action in one will trigger the reactive behavior in the other. Have fun voting!

Conclusion

By now, we can see how easily we can build a fully functional real-time app/webapp using Meteor. With "great power comes great responsibility" (pun intended), and proper planning/structuring of our app/webapp is of the utmost importance once we are empowered by these technologies. Use it wisely and you can improve both the quality and performance of your app/webapp. Try it out, and let me know if you are sold.

Resources:

- Meteor official site
- Meteor official documentation
- Meteor package library: Atmosphere
- Discover Meteor

Want more JavaScript content? Look no further than our dedicated JavaScript page.

About the Author

Ken Lee is the co-founder of Innomonster Pte. Ltd. (http://innomonster.com/), a specialized website/app design & development company based in Singapore.
He has eight years of experience in web development and is passionate about front-end and JS full-stack development. You can reach him at ken@innomonster.com.
With Node.js, it’s easy to get things done

Packt Publishing
05 Sep 2016
4 min read
Luciano Mammino is the author (alongside Mario Casciaro) of the second edition of Node.js Design Patterns, released in July 2016. He was kind enough to speak to us about his life as a web developer and working with Node.js - as well as assessing Node's position within an exciting ecosystem of JavaScript libraries and frameworks. Follow Luciano on Twitter - he tweets from @loige.

1. Tell us about yourself - who are you and what do you do?

I'm an Italian software developer living in Dublin and working at Smartbox as a Senior Engineer in the Integration team. I'm a lover of JavaScript and Node.js, and I have a number of upcoming side projects that I am building with these amazing technologies.

2. Tell us what you do with Node.js. How does it fit into your wider development stack?

The Node.js platform is becoming ubiquitous; the range of problems that you can address with it is growing bigger and bigger. I've used Node.js on a Raspberry Pi, on desktop and laptop computers, and in the cloud, quite successfully, to build a variety of applications: command-line scripts, automation tools, APIs, and websites. With Node.js it's really easy to get things done. Most of the time I don't need to switch to other development environments or languages. This is probably the main reason why Node.js fits very well in my development stack.

3. What other tools and frameworks are you working with? Do they complement Node.js?

Some of the tools I love to use are RabbitMQ, MongoDB, Redis, and Elasticsearch. Thanks to the npm repository, Node.js has an amazing variety of libraries which makes integration with these technologies seamless. I was recently experimenting with ZeroMQ, and again I was surprised to see how easy it is to get started with a Node.js application.

4. Imagine life before you started using Node.js. What has its impact been on the way you work?

I started programming when I was very young, so I really lived "a life" as a programmer before having Node.js. Before Node.js came out, I was using JavaScript a lot to program the frontend of web applications, but I had to use other languages for the backend. The context-switching between two environments is something that ends up eating a lot of time and energy. Luckily, today with Node.js we have the opportunity to use the same language, and even to share code, across the whole web stack. I believe that this is something that makes my daily work much easier and more enjoyable.

5. How important are design patterns when you use Node.js? Do they change how you use the tool?

I would say that design patterns are important in every language, and Node.js makes no difference. Furthermore, due to the intrinsically asynchronous nature of the platform, a good knowledge of design patterns becomes even more important in Node.js to avoid some of the most common pitfalls.

6. What does the future hold for Node.js? How can it remain a relevant and valuable tool for developers?

I am sure Node.js has a pretty bright future ahead. Its popularity is growing dramatically, and it is starting to gain a lot of traction in enterprise environments that have typically been bound to other famous and well-known languages like Java. At the same time, Node.js is trying to keep pace with the main innovations in the JavaScript world. For instance, in the latest releases Node.js added support for almost all the new language features defined in the ECMAScript 2015 standard.
This is something that makes programming with Node.js even more enjoyable, and I believe it's the right strategy to keep developers interested and the whole environment future-proof.

Thanks Luciano! Good luck for the future - we're looking forward to seeing how dramatically Node.js grows over the next 12 months.

Get to grips with Node.js - and the complete JavaScript development stack - by following our full-stack developer skill plan in Mapt. Simply sign up here.
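As a rough illustration of the ES2015 support Luciano mentions (our sketch, not his), recent Node.js releases of the time could run code like this natively, with no transpiler:

```js
// Classes, template literals, arrow functions, and Promises -
// all ES2015 features supported natively by Node.js.
'use strict';

class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return `Hello, ${this.name}!`;
  }
}

// An arrow function returning a Promise.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

delay(100).then(() => console.log(new Greeter('Node.js').greet()));
```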
5 things that will matter in web development in 2018

Richard Gall
11 Dec 2017
4 min read
2017 has been an interesting year in web development. Today the role of a web developer is stretched across the stack - to be a great developer you need to be confident and dexterous with data, and have an eye for design and UX. Yes, all those discrete roles will still matter in 2018, but being able to join the pieces of the development workflow together - for maximum efficiency - will be hugely valuable. What web development tools will matter most in 2018? Find out here. But what will really matter in 2018 in web development? Here's our list of the top 5 things you need to be thinking about...

1. Getting over JavaScript fatigue

JavaScript fatigue has been the spectre haunting web development for the last couple of years. But its effects have been very real - it's exhausting keeping up with the rapidly expanding ecosystem of tools. 'Getting over it', then, won't be easy - and don't think for a minute we're just saying it's time to move on and get real. Instead it's about taking the problem seriously and putting in place strategies to better manage tooling options. This article is a great exploration of JavaScript fatigue, and it puts the problem in a rather neat way:

JS Fatigue happens when people use tools they don't need to solve problems they don't have.

What this means in practical terms, then, is that starting with the problem you want to solve is going to make life much better in 2018.

2. Web components

Web components are a development that's helping to make the work of web developers that little bit easier. Essentially they're reusable 'bits' that don't require any support from a library (like jQuery, for example), which makes front-end development much more streamlined; see the sketch at the end of this article. Developments like this hint at a shift in the front-end developer skillset - something we'll be watching closely throughout 2018. If components are making development 'easier', there will be an onus on developers to prove themselves in the design and UX sphere.

3. Harnessing artificial intelligence

AI has truly taken on wider significance in 2017 and has come to the forefront not only of the tech world's imagination but the wider public's too. It's no longer an academic pursuit. It's now baked into almost everything we do. That means web developers are going to have to get au fait with artificial intelligence. Building more personalized UX is going to be top of the list for many organizations in 2018 - pressure will be on web developers to successfully harness artificial intelligence in innovative ways that drive value for their businesses and clients.

4. Progressive web apps and native-like experiences

This builds on the previous two points. But ultimately this is about what user expectations are going to look like in 2018. The demand is going to be for something that is not only personalized (see #3), but also secure, fast, and intuitive for the user, whatever their specific context. Building successful progressive web apps requires a really acute sense of how every moving part impacts how a user interacts with the app - from the way data is utilised to how the UI is built. 2018 is the year where being able to solve and understand problems in a truly holistic way will be vital.

5. Improving the development experience

Web development is going to get simultaneously harder and easier - if that makes sense. Web components may speed things up, but your time will no doubt quickly be filled by something else. This means that in 2018 we need to pay close attention to the development experience.
If, for example, we're being asked to do new things, or to deliver products in new ways, we need the tools to be able to do that. If agility and efficiency remain key (which of course they will), unlocking smarter ways of working will be as important as the very things we build. Tools like Docker will undoubtedly help here. In fact, it's worth looking closely at the changing toolchain of DevOps - it's been having an impact throughout 2017 and certainly will in 2018 too.
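To make the web components point in #2 concrete, here is a minimal sketch of a reusable custom element using the standard Custom Elements API (the element name and markup are our invention):

```js
// A reusable element that needs no library support.
class UserCard extends HTMLElement {
  connectedCallback() {
    // Render whenever the element is attached to the document.
    const name = this.getAttribute('name') || 'Anonymous';
    this.innerHTML = '<div class="card"><strong>' + name + '</strong></div>';
  }
}

customElements.define('user-card', UserCard);

// Usage in plain HTML, no jQuery required:
// <user-card name="Ada Lovelace"></user-card>
```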
Angular 2 Dependency Injection: A powerful design pattern

Mary Gualtieri
27 Jun 2016
5 min read
From 7th to 13th November we're celebrating two of the hottest tools in the JavaScript universe. Check out our best Angular and React content here - and save up to 80%!

Dependency Injection is one of the biggest features of Angular. It allows you to inject dependencies into different components throughout your web application without needing to know how those dependencies are created. So what does this actually mean? If a component depends on a service, you do not create this service yourself. Instead, you have a constructor request this service, and the framework provides it to you. You can view dependency injection as a design pattern or framework.

In Angular 1, you must tell the framework how to create a service. Consider a code example where a class is set up to construct a house object when needed. There is nothing out of the ordinary with such code. However, the problem is that the constructor assigns the needed dependencies, and it knows how these objects are created. What is the big deal, you may ask? First, this makes the code very hard to maintain, and second, the code is even harder to test.

Now rewrite the code example so that the dependency creation is moved out of the constructor, and the constructor is extended to expect all of the needed dependencies. This is significant because when you want to create a new house object, all you have to do is pass all of the needed dependencies to the constructor. This decouples the dependencies from your class, allowing you to pass mocked dependencies when you write a test.

Angular 2 has made a drastic, but welcome, change to dependency injection. Angular 2 provides more control for maintainability, and it is easier to test. The new version of Angular focuses more on how to get these dependencies. Dependency Injection consists of three things:

- Injector
- Provider
- Dependency

The injector object exposes the APIs that let you create instances of dependencies. A provider tells the injector how to create the instance of a dependency; it does this by taking a token and mapping it to a factory function that creates an object. A dependency is the type for which an object should be created.

What does this look like in code? You import an injector from Angular 2, which exposes some static APIs for creating injectors. The resolveAndCreate() function is a factory function that creates an injector and takes a list of providers. However, you must ask yourself, "How does the injector know which dependencies are needed in order to represent a house?" Easy! You import Inject from the framework and apply the decorator to all of the parameters in the constructor. By attaching the Inject decorator to the House class, the metadata is used by the dependency injection system. To put it simply, you tell the dependency injection system that the first constructor parameter should be of the Couch type, the second of the Table type, and the third of the Doors type. The class declares the dependencies, and the dependency injection system can read this information whenever the application needs to create a House object. If you look at the resolveAndCreate() method, it creates an injector from an array of bindings. The passed-in bindings, in this case, are the types from the constructor parameters. You might be wondering how dependency injection in Angular 2 works inside the framework.
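As a rough plain-JavaScript sketch of the refactor described above (using the House, Couch, Table, and Doors types named in the text; this is our sketch, not the article's original listing):

```js
// Dependency types from the article's example.
class Couch {}
class Table {}
class Doors {}

// Before: the constructor creates its own dependencies, so the class
// knows how they are built - hard to maintain and hard to test.
class HardwiredHouse {
  constructor() {
    this.couch = new Couch();
    this.table = new Table();
    this.doors = new Doors();
  }
}

// After: the constructor expects its dependencies, so a test can
// simply pass in mocked objects.
class House {
  constructor(couch, table, doors) {
    this.couch = couch;
    this.table = table;
    this.doors = doors;
  }
}

// Without a framework, creating a house means wiring everything by hand.
const house = new House(new Couch(), new Table(), new Doors());
```

That manual wiring at the end is exactly what an Angular 2 injector automates.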
Luckily, you do not have to create injectors manually when you build Angular 2 components. The developers behind Angular 2 have created an API that hides all of the injector system when you build components. Let's explore how this actually works. Suppose we start with a very basic component and then expand it by adding a class. You now need to make that class available in your application as an injectable. Do this by passing provider configurations to your application injector. bootstrap() actually takes care of creating the root injector for you; it takes a list of providers as a second argument and passes it straight to the injector when it is created.

One last thing to consider when using dependency injection is: what do you do if you want a different binding configuration in a specific component? You simply add a providers property to the component. Remember that providers does not construct the instances that will be injected; rather, a child injector is created for the component.

To conclude, Angular 2 introduces a new dependency injection system. The new system allows for more control to maintain your code, to test it more easily, and to rely on interfaces. It is built into Angular 2 and has just one API for injecting dependencies into components.

About the author

Mary Gualtieri is a full-stack web developer and web designer who enjoys all aspects of the web and creating a pleasant user experience. Web development, specifically front-end development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.
What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies

Pavan Ramchandani
01 Jun 2018
13 min read
Two decades ago, the IT industry saw tremendous opportunities with the dot-com boom. Similar to the dot-com bubble, the IT industry is transitioning through another period of innovation. The disruption is seen in major lines of business with the introduction of recent technology trends like cloud services, the Internet of Things (IoT), single-page applications, and social media. In this article, we cover the role and implications of RESTful web APIs in these emerging technologies. This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition.

Cloud services

We are in an era where business and IT are married together. For enterprises, thinking of a business model without an IT strategy is out of the question. While keeping their focus on the core business, the challenge that often lies ahead of the executive team is optimizing the IT budget. Cloud computing has come to the rescue of the executive team by bringing savings to the IT spending incurred in running a business. Cloud computing is an IT model for enabling anytime, anywhere, convenient, on-demand network access to a shared pool of configurable computing resources. In simple terms, cloud computing refers to the delivery of hosted services over the internet that can be quickly provisioned and decommissioned with minimal management effort and less intervention from the service provider.

Cloud characteristics

Five key characteristics deemed essential for cloud computing are as follows:

- On-demand Self-service: Ability to automatically provision cloud-based IT resources as and when required by the cloud service consumer
- Broad Network Access: Ability to support seamless network access for cloud-based IT resources via different network elements such as devices, network protocols, security layers, and so on
- Resource Pooling: Ability to share IT resources for cloud service consumers using the multi-tenant model
- Rapid Elasticity: Ability to dynamically scale IT resources at runtime and also release IT resources based on demand
- Measured Service: Ability to meter service usage to ensure cloud service consumers are charged only for the services utilized

Cloud offering models

Cloud offerings can be broadly grouped into three major categories - IaaS, PaaS, and SaaS - based on their position in the technology stack:

- Software as a Service (SaaS) delivers the applications required by an enterprise, saving the costs the enterprise would incur to procure, install, and maintain these applications, which are instead offered by a cloud service provider at competitive pricing
- Platform as a Service (PaaS) delivers the platforms required by an enterprise for building its applications, saving the cost of setting up and maintaining these platforms
- Infrastructure as a Service (IaaS) delivers the infrastructure required by an enterprise for running its platforms or applications, saving the cost of setting up and maintaining the infrastructure components

RESTful APIs' role in cloud services

RESTful APIs can be looked on as the glue that connects cloud service providers and cloud service consumers. For example, application developers requiring to display a weather forecast can consume the Google Weather API.
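A hedged sketch of what consuming such a weather API might look like from Node.js (the endpoint and response fields below are hypothetical, not Google's actual API):

```js
// Fetch a forecast from a hypothetical weather REST API and print a field
// from the JSON response.
const https = require('https');

https.get('https://api.example-weather.com/v1/forecast?city=London', res => {
  let body = '';
  res.on('data', chunk => (body += chunk));
  res.on('end', () => {
    const forecast = JSON.parse(body);
    console.log('Tomorrow in London:', forecast.summary);
  });
});
```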
In this section, we will look at the applicability of RESTful APIs for provisioning resources in the cloud. For illustration, we will be using the Oracle Cloud service platform. Users can set up a free trial account via https://Cloud.oracle.com/home and try out the examples discussed in the following sections. As an example, we will set up a test virtual machine instance using the REST APIs. The high-level steps to perform are:

1. Locate the REST API endpoint
2. Generate an authentication cookie
3. Provision the virtual machine instance

Locating the REST API endpoint

Once users have signed up for an Oracle Cloud account, they can locate the REST API endpoint by navigating via the following steps:

1. Login screen: Choose the relevant Cloud Account details and click the My Services button.
2. Home page: Displays the cloud services Dashboard for the user. Click the Dashboard icon.
3. Dashboard screen: Lists the various cloud offerings. Click the Compute Classic offering.
4. Compute Classic screen: Displays the details of infrastructure resources utilized by the user.
5. Site Selector screen: Displays the REST endpoint.

Generating an authentication cookie

Authentication is required for provisioning IT resources. For this purpose, we generate an authentication cookie using the Authenticate User REST API. The details of the API are as follows:

- API function: Authenticate the supplied user credentials and generate an authentication cookie for use in subsequent API calls.
- Endpoint: <REST endpoint captured in the previous section>/authenticate/
  Example: https://compute.eucom-north-1.oracleCloud.com/authenticate/
- HTTP method: POST
- Request header properties: Content-Type: application/oracle-compute-v3+json; Accept: application/oracle-compute-v3+json
- Request body: user - two-part name of the user in the format /Compute-identity_domain/user; password - password for the specified user.
  Sample request: { "password": "xxxxx", "user": "/Compute-586113456/test@gmail.com" }
- Response header properties: set-cookie - the authentication cookie value

Invoking the Authenticate User REST API (for example, via the Postman tool) returns the authentication cookie.

Provisioning a virtual machine instance

Consumers are allowed to provision IT resources on the Oracle Compute Cloud infrastructure service using the LaunchPlans or Orchestration REST API. For this demonstration, we will use the LaunchPlans REST API. The details of the API are as follows:

- API function: Launch plan used to provision infrastructure resources in Oracle Compute Cloud Service.
- Endpoint: <REST endpoint captured in the previous section>/launchplan/
  Example: https://compute.eucom-north-1.oracleCloud.com/launchplan/
- HTTP method: POST
- Request header properties: Content-Type: application/oracle-compute-v3+json; Accept: application/oracle-compute-v3+json; Cookie: <authentication cookie>
- Request body: instances - array of instances to be provisioned (for details of the properties required by each instance, refer to http://docs.oracle.com/en/Cloud/iaas/compute-iaas-Cloud/stcsa/op-launchplan--post.html); relationships - any relationships with other instances.
Sample request:

```
{
  "instances": [
    {
      "shape": "oc3",
      "imagelist": "/oracle/public/oel_6.4_2GB_v1",
      "name": "/Compute-586113742/test@gmail.com/test-vm-1",
      "label": "test-vm-1",
      "sshkeys": []
    }
  ]
}
```

- Response body: Provisioned list of instances and their relationships

Invoking the LaunchPlan REST API (for example, via the Postman tool) creates the test virtual machine instance. An HTTP response status of 201 confirms that the provisioning request was successful; you can then check the provisioned instance's status via the cloud service instances page.

Internet of Things

The Internet of Things (IoT), as the name says, can be considered a technology enabler for things (which includes people as well) to connect to or disconnect from the internet. The term IoT was first coined by Kevin Ashton in 1999. With broadband Wi-Fi becoming widely available, it is becoming a lot easier to connect things to the internet. This has a lot of potential to enable a smart way of living, and there are already many projects underway around smart homes, smart cities, and so on. A simple use case is predicting the arrival time of a bus, so that commuters benefit if there are any delays and can plan accordingly. In many developing countries, the transport system is enabled with smart devices which help commuters predict the arrival or departure time of a bus or train precisely. The analyst firm Gartner has predicted that more than 26 billion devices will be connected to the internet by 2020. A technology roadmap diagram from Wikipedia depicts the applicability of the IoT across different areas by 2020.

The IoT platform

The IoT platform consists of four functional layers - the device, data, integration, and service layers. For each functional layer, let us understand the capabilities required of the IoT platform:

- Device: Device management capabilities supporting device registration, provisioning, and controlling access to devices; seamless connectivity to devices to send and receive data
- Data: Management of the huge volume of data transmitted between devices; deriving intelligence from the data collected and triggering actions
- Integration: Collaboration of information between devices
- Service: API gateways exposing the APIs

IoT benefits

The IoT platform is seen as the latest evolution of the internet, and it is becoming widely used due to the lowering cost of technologies such as cheap sensors, cheap hardware, and low-cost, high-bandwidth networks. The connected human is the most visible outcome of the IoT revolution. People are connected to the IoT through various means such as Wearables, Hearables, Nearables, and so on, which can be used to improve the lifestyle, health, and wellbeing of human beings:

- Wearables: Any form of sophisticated, computer-like technology which can be worn or carried by a person, such as smart watches, fitness devices, and so on
- Hearables: Wireless computing earpieces, such as headphones
- Nearables: Smart objects with computing devices attached to them, such as door locks, car locks, and so on. Unlike Wearables or Hearables, Nearables are static.

Also, in the healthcare industry, IoT-enabled devices can be used to monitor patients' heart rate or diabetes. Smart pills and nanobots could eventually replace surgery and reduce the risk of complications.
RESTful APIs' role in the IoT

The architectural pattern used to realize the majority of IoT use cases is the event-driven architecture pattern. The event-driven architecture software pattern deals with the creation, consumption, and identification of events. An event can be generalized as a change in the state of an entity. For example, a printer device connected to the internet may emit an event when the printer cartridge is low on ink, so that the user can order a new cartridge.

The common capability required of devices connected to the internet is the ability to send and receive event data. This can be easily accomplished with RESTful APIs. The following are some of the IoT APIs available on the market:

- Hayo API: Used by developers to build virtual remote controls for IoT devices in a home. The API senses and transmits events between virtual remote controls and devices, making it easier for users to achieve desired actions in applications by simply manipulating a virtual remote control.
- Mozilla Battery Status API: Used to monitor the system battery level of mobile devices; it streams notification events for changes in battery level and charging progress, allowing users to retrieve real-time updates of device battery levels and status.
- Caret API: Allows status sharing across devices; the status can be customized as well.

Modern web applications

Web-based applications have seen a drastic evolution from Web 1.0 to Web 2.0. Web 1.0 sites were designed mostly with static pages; Web 2.0 has added more dynamism. Let us take a quick snapshot of the evolution of web technologies over the years:

- 1993-1995: Static HTML websites with embedded images and minimal JavaScript
- 1995-2000: Dynamic web pages driven by JSP and ASP, CSS for styling, JavaScript for client-side validation
- 2000-2008: Content management systems like WordPress, Joomla, Drupal, and so on
- 2009-2013: Rich internet applications, portals, animations, Ajax, mobile web applications
- 2014 onwards: Single-page apps, mashups, the social web

Single-page applications

Single-page applications are web applications designed to load the application in a single HTML page. Unlike traditional web applications, rather than refreshing the whole page to display new content, they enhance the user experience by dynamically updating the current page, similar to a desktop application. The following are some of the key features and benefits of single-page applications:

- Contents load in a single page
- No refresh of the page
- Responsive design
- Better user experience
- Capability to fetch data asynchronously using Ajax
- Capability for dynamic data binding

RESTful APIs' role in single-page applications

In a traditional web application, the client requests a URI and the requested page is displayed in the browser. Subsequently, when the user submits a form, the submitted form data is sent to the server, and the response is displayed by reloading the whole page. A single-page application, by contrast, fetches just the data it needs, typically by calling RESTful APIs asynchronously, and updates the current page in place.

Social media

Social media is the future of communication; it not only lets people interact but also enables the transfer of different content formats such as audio, video, and images between users. In Web 2.0 terms, social media is a channel that interacts with you along with providing information.
While regular media is a one-way communication, social media is a two-way communication that asks for your comments and lets you vote. Social media has seen tremendous usage via networking sites such as Facebook, LinkedIn, and so on.

Social media platforms

Social media platforms are based on Web 2.0 technology, which serves as the interactive medium for collaboration, communication, and sharing among users. We can classify social media platforms broadly based on their usage:

- Social networking services: Platforms where people manage their social circles and interact with each other, such as Facebook
- Social bookmarking services: Allow one to save, organize, and manage links to various resources over the internet, such as StumbleUpon
- Social media news: Platforms that allow people to post news or articles, such as reddit
- Blogging services: Platforms where users can exchange comments and views, such as Twitter
- Document sharing services: Platforms that let you share your documents, such as SlideShare
- Media sharing services: Platforms that let you share media content, such as YouTube
- Crowdsourcing services: Obtaining needed services, ideas, or content by soliciting contributions from a large group of people or an online community, such as Ushahidi

Social media benefits

User engagement through social media has seen tremendous growth, and many companies use social media channels for campaigns and branding. Let us look at the various benefits social media offers:

- Customer relationship management: A company can use social media to campaign for its brand and potentially benefit from positive customer reviews
- Customer retention and expansion: Customer reviews can become a valuable source of information for retention and also help to add new customers
- Market research: Social media conversations can become useful insight for market research and planning
- Competitive advantage: The ability to see competitors' messages enables a company to build strategies to handle its peers in the market
- Public relations: Corporate news can be conveyed to the audience in real time
- Cost control: Compared to traditional methods of campaigning, social media offers better advertising at a cheaper cost

RESTful APIs' role in social media

Many social networks provide RESTful APIs to expose their capabilities. Let us look at the RESTful APIs of some popular social media services:

- YouTube: Add YouTube features to your application, including the ability to upload videos, create and manage playlists, and more (https://developers.google.com/youtube/v3/)
- Facebook: The Graph API is the primary way to get data out of, and put data into, Facebook's platform. It's a low-level HTTP-based API that you can use to programmatically query data, post new stories, manage ads, upload photos, and perform a variety of other tasks that an app might implement (https://developers.facebook.com/docs/graph-api/overview)
- Twitter: Twitter provides APIs to search, filter, and create an ads campaign (https://developer.twitter.com/en/docs)

To summarize, we discussed modern technology trends and the role of RESTful APIs in each of these areas, including the implications for the cloud, virtual machines, user experience in various architectures, and building social media applications. To know more about designing and working with RESTful web services, do check out RESTful Java Web Services, Second Edition.
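As one concrete example from the list above, a hedged sketch of calling the YouTube Data API v3 search endpoint from Node.js (the API key is a placeholder, and normal quota and authentication rules apply):

```js
// Search YouTube for videos matching a query and print the result titles.
const https = require('https');

const key = 'YOUR_API_KEY'; // placeholder - obtain one from the Google API console
const url = 'https://www.googleapis.com/youtube/v3/search' +
            '?part=snippet&q=restful+apis&key=' + key;

https.get(url, res => {
  let body = '';
  res.on('data', chunk => (body += chunk));
  res.on('end', () => {
    const results = JSON.parse(body);
    (results.items || []).forEach(item =>
      console.log(item.snippet && item.snippet.title));
  });
});
```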
Getting started with Django and Django REST frameworks to build a RESTful app
How to develop RESTful web services in Spring
What is the API Economy?

Darrell Pratt
03 Nov 2016
5 min read
If you have pitched the idea of a set of APIs to your boss, you might have run across this question: "Why do we need an API, and what does it have to do with an economy?" The answer is the API economy - but it's more than likely that that answer is going to be met with more questions. So let's take some time to unpack the concept and get through some of the hyperbole surrounding the topic.

"An economy (from Greek οίκος – 'household' and νέμoμαι – 'manage') is an area of the production, distribution, or trade, and consumption of goods and services by different agents in a given geographical location." - Wikipedia

If we take the definition of economy from Wikipedia and the definition of API as an Application Programming Interface, then what we should be striving to create is a platform (as the producer of the API) that will attract a set of agents who use that platform to create, trade, or distribute goods and services to other agents over the Internet (our geography has expanded). The central tenet of this economy is that the APIs themselves need to provide the right set of goods (data, transactions, and so on) to attract other agents (developers and business partners) who can grow their businesses alongside ours and further expand the economy. This piece from Gartner explains the API economy very well, and it sums the idea up neatly: "The API economy is an enabler for turning a business or organization into a platform."

Let's explore a bit more about APIs and look at a few examples of companies that are doing a good job of running API platforms.

The evolution of the API economy

If you had asked someone what an API actually was 10 or more years ago, you might have received puzzled looks. The Application Programming Interface at that time was something professional software developers used to interface with more traditional enterprise software. That evolved into the popularity of the SDK (Software Development Kit) and a better mainstream understanding of what it meant to create integrations or applications on pre-existing platforms. Think of the iOS SDK or Android SDK, and how those kits and the distribution channels that Apple and Google created have led to the explosion of the app marketplace.

Jeff Bezos's mandate that all IT assets at Amazon have an API was a major event in the API economy timeline. Amazon continued to build APIs such as SNS, SQS, DynamoDB, and many others. Each of these API components provides a well-defined service that any application can use, and together they have significantly reduced the barrier to entry for new software and service companies. With this foundation set, the list of companies providing deep API platforms has steadily increased.

How exactly does one profit in the API economy?

If we survey a small set of API platforms, we can see that companies use their APIs in different ways: to add value to their underlying set of goods, or to create a completely new revenue stream for the company.

Amazon AWS

Amazon AWS is the clearest example of an API as a product unto itself. Amazon makes available a large set of services that provide defined functionality, and for which Amazon charges based upon usage of CPU and storage (it gets complicated). Each new service they launch addresses a new area of need, and they work to provide integrations between the various services.

Social APIs

Facebook, Twitter, and others in the social space run API platforms to increase the usage of their properties.
Some of the inherent value in Facebook comes from sites and applications far afield from facebook.com, and its API platform enables this. Twitter has had a more complicated relationship with its API users over time, but the API does provide many methods that allow both apps and websites to tap into Twitter content and thus extend Twitter's reach and audience size.

Chat APIs

Slack has created a large economy of applications focused around its chat services and built up a large number of partners and smaller applications that add value to the platform. Slack's API approach is centered on providing a platform for others to integrate with and add content into the Slack data system. This approach is more open than the one taken by Twitter, and the fast adoption has added large sums to Slack's current valuation. Alongside the meteoric rise of Slack, the concept of the bot as an assistant has also taken off. Companies like api.ai are offering AI-as-a-service products for building chat services. The service offerings that surround the bot space are growing rapidly and offer a good set of examples of how a company can monetize its API.

Stripe

Stripe competes in the payments-as-a-service space along with PayPal, Square, and Braintree. Each of these companies offers API platforms that vastly simplify the integration of payments into websites and applications. Anyone who built an e-commerce site before 2000 can and will appreciate the simplicity and power that the API economy brings to the payment industry. The pricing strategy in this space is generally per use and is relatively straightforward.

It takes a community to make the API economy work

There are very few companies that will succeed in building an API platform without growing an active community of developers and partners around it. While it is technically easy to create an API given the tooling available, without an active support mechanism and detailed, easily consumable documentation, your developer community may never materialize. Facebook and AWS are great examples to follow here. They both actively engage with their developer communities and deliver rich sets of documentation and use cases for their APIs.
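To see just how much APIs like Stripe's simplify payments, here is a hedged sketch of a server-side charge using the Stripe Node library as it was commonly documented around this time (the secret key and card token are placeholders):

```js
// Create a one-off charge with the 'stripe' npm package.
const stripe = require('stripe')('sk_test_your_secret_key');

stripe.charges.create({
  amount: 2000,        // amount in the smallest currency unit (cents)
  currency: 'usd',
  source: 'tok_visa',  // a card token created on the client side
  description: 'Example charge'
}, (err, charge) => {
  if (err) return console.error('Charge failed:', err.message);
  console.log('Charge succeeded:', charge.id);
});
```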
5 things you need to learn to become a server-side web developer

Amarabha Banerjee
19 Jun 2018
6 min read
The back-end web developer is in high demand, and companies seek qualified server-side developers for their teams. The back-end specialist's comprehensive set of knowledge and skills helps them realize their potential in a wide variety of web development projects. Before diving into what it takes to succeed at back-end development as a profession, let's look at what it's about. In simple words, the back end is the invisible part of any application that activates all of its internal elements. If the front end answers the question of "how does it look", then the back end, or server-side web development, deals with "how does it work". A back-end developer deals with the administrative part of the web application, the internal content of the system, and server-side technologies such as the database, architecture, and software logic. If you intend to become a professional server-side developer, there are a few basic steps that will ease your journey. In this article we have listed five aspects of server-side development which you must master to become a successful server-side web developer: servers, databases, networks, queues, and frameworks.

Servers and databases: At the heart of server-side development are servers, which are nothing but hardware and storage devices connected to a local computer with a working internet connection. So every time you ask your browser to load a web page, the data stored in the servers is accessed and sent to the browser in a certain format. The bigger the application, the larger the amount of data stored on the server side. The larger the data, the higher the possibility of lag and slow performance. Databases are the particular formats in which the data is stored. There are two different types of databases - relational and non-relational - each with its own pros and cons. Some popular databases you can learn to take your skills to the next level are SQL Server and MySQL (relational), and MongoDB and DynamoDB (non-relational).

Static and dynamic servers: Static servers are physical hard drives where application data, CSS and HTML files, pictures, and images are stored. Dynamic servers actually signify another layer between the server and the browser; they are often known as application servers. The primary function of these application servers is to process the data and format it for the web page when a data-fetching operation is initiated from the browser. This makes saving data much easier and the process of loading data much faster. For example, Wikipedia's servers are filled with huge amounts of data, but it is not stored as HTML pages; it is stored as raw data. When the data is queried by the browser, the application server processes it, formats it into HTML, and sends it to the browser. This makes the process a whole lot faster and saves physical data storage.

If you want to go a step further and think futuristically, the latest trend is moving your servers to the cloud. This means the server-side tasks are performed by cloud-based services like Amazon AWS and Microsoft Azure. This makes your task much simpler as a back-end developer, since you simply need to decide which services you require to best run your application, and the rest is taken care of by the cloud service provider. Another aspect of server-side development that's generating a lot of interest among developers is serverless development.
This is based on the concept that the cloud service provider will allocate server space depending on your need, so you don't have to take care of back-end resources and requirements. In a way, the name "serverless" is a misnomer, because the servers are still there; they are just in the cloud, and you don't have to bother about them. The primary role of a back-end developer in a serverless system is to figure out the best possible services, optimize the running cost on the cloud, and deploy and monitor the system for non-stop, robust performance.

The communication protocol: The protocol which defines the data transfer between client side and server side is called the HyperText Transfer Protocol (HTTP). When a search request is typed in the browser, an HTTP request with a URL is sent to the server, and the server then sends a response message indicating either that the request succeeded or that the web page was not found. When an HTML page is returned for a search query, it is rendered by the web browser. While processing the response, the browser may discover links to other resources (e.g. an HTML page usually references JavaScript and CSS files) and send separate HTTP requests to download those files. Both static and dynamic websites use exactly the same communication protocols and patterns.

We have progressed quite a long way from the initial communication protocols, and newer technologies like TLS and IPv6 have taken over the web communication domain. Transport Layer Security (TLS) - and its predecessor, Secure Sockets Layer (SSL), which is now deprecated by the Internet Engineering Task Force (IETF) - are cryptographic protocols that provide communications security over a computer network. The primary reason these protocols were introduced was to protect user data and provide increased security. Similarly, newer addressing protocols had to be introduced in the late 90s to cater to the increasing number of internet users; these protocols define the unique addresses that identify each server on the network. The initial protocol used was IPv4, which is currently being superseded by IPv6, which has the capability to provide 2^128, or about 3.4x10^38, addresses.

Message queuing: This is one of the most important aspects of creating fast and dynamic web applications. Message queuing is the stage where data is queued according to the different responses and then delivered to the browser. This process is asynchronous, which means that the server and the browser need not interact with the message queue at the same time. Popular message queuing tools like RabbitMQ, MQTT, and ActiveMQ provide real-time message queuing functionality.

Server-side frameworks and languages: Now comes the last, but one of the most important, pointers. If you are a developer with a particular language in mind, you can use a language-based framework to add functionality to your application easily and efficiently. Some of the popular server-side frameworks are Node.js for JavaScript, Django for Python, Laravel for PHP, and Spring for Java. Using these frameworks will require some experience in the respective language.

Now that you have a broad understanding of what server-side web development is and what its components are, you can jump right into server-side development, database, and protocol management to progress into a successful professional back-end web developer.
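As a taste of how little code a server-side framework needs to expose an endpoint, here is a minimal sketch using Express, a popular web framework for Node.js (our choice of example; it assumes `npm install express`):

```js
const express = require('express');
const app = express();

// One route that returns a JSON payload - the kind of endpoint a
// front end would fetch asynchronously.
app.get('/api/hello', (req, res) => {
  res.json({ message: 'Hello from the server side!' });
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));
```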
The best backend tools in web development
Preparing the Spring Web Development Environment
Is novelty ruining web development?

Adblocking and the Future of the Web

Sam Wood
11 Apr 2016
6 min read
Kicked into overdrive by Apple's iOS 9 infamously shipping with adblocking options for Safari, the content creators of the Internet have woken up to the serious challenge of ad-blocking tech. The AdBlock+ Chrome extension boasts over 50 million active users. I'm one of them. I'm willing to bet that you might be one too. AdBlock use is rising massively and globally, and shows no sign of slowing down. Commentators have blamed the web-reading public, have declared that web publishers brought this on themselves, and have even made worryingly convincing arguments that adblocking is a conspiracy by corporate supergiants to kill the web as we know it. They all agree on one point, though: the way we present and consume web content is going to have to evolve or die. So how might adblocking change the web?

We All Go Native

One of the most proposed and most popular solutions to the adblocking crisis is to embrace "native" advertising. Similar to sponsorship or product placement in other media, native advertising interweaves its sponsor into the body of the content piece. By doing so, an advert is made immune to the traditional scripts and methods that identify and block net ads. This might be a thank-you note to a sponsor at the end of a blog, an 'advertorial' upsell of a product or service, or corporate content marketing, where a company produces and promotes its own content in a bid to garner your attention for its paid products. (Just like this blog. I'm afraid it's content marketing. Would you like to buy a quality tech eBook? How about the Web Developer's Reference Guide - your Bible for everything you need to know about web dev! Help keep this Millennial creative in a Netflix account and pop culture tee-shirts.)

The Inevitable Downsides

It turns out nobody wants to read sponsored content - only 24% of readers scroll down on a native ad. A 2014 survey by Contently revealed two-thirds of respondents saying they felt deceived by sponsored advertising. We may see this changing as the practice becomes more mainstream and readers come to realize it does not impact quality or journalistic integrity. But it's a worrying set of statistics for anyone who hoped direct advertising might save them from the scourge of adblock.

The Great App Exodus

There's an increasingly popular prediction that adblocking may lead to a great exodus of content from browser-based websites into a scattered app-based ecosystem. We can already see the start of this movement: every major content site bugs you to download its dedicated app, where ads can live free of fear. If you consume most of your mobile media through Snapchat Discover channels, through Facebook mobile sharing, or even through IM services like Telegram, you'll be reading your web content in that app's dedicated built-in reader. That reader is, of course, free of adblocking extensions.

The Inevitable Downsides

The issue here is one of corporate monopoly. Some journalists have criticized Facebook Instant (the tech which has Facebook host articles from popular news sites for increased load times) for giving Facebook too much power over the news business. Vox's Matthew Yglesias predicts a restructuring where "instead of digital media brands being companies that build websites, they will operate more like television studios — bringing together teams that collaborate on the creation of content, which is then distributed through diverse channels that are not themselves controlled by the studio."
The Inevitable Downsides

No website wants to be in a position where it has to beg or bully its visitors. Whilst your committed readers might be happy to whitelist your URL, antagonizing new users is a surefire way to get them to bounce from the site. Sadly, sabotaging their own sites for adblocking visitors might be one of the most effective ways for 'traditional' web content providers to survive. After all, most users block ads in order to improve their browsing experience. If the UX of a whitelisted site is vastly superior to the UX under adblock, it might end up being more pleasant to browse with the extension off.

An Uneasy Truce between Adblockers and Content

In many ways, adblocking was a war that web adverts started. From the pop-up to the autoplaying video, web ad software has been built to be aggressive. The response of adblockers is an indiscriminate, all-or-nothing approach. As Marco Arment, creator of the Peace adblocking app, notes: "Today's web readers [are so] fed up that they disable all ads, or even all Javascript. Web developers and standards bodies couldn't be more out of touch with this issue, racing ahead to give browsers and Javascript even more capabilities without adequately addressing the fundamental problems that will drive many people to disable huge chunks of their browser's functionality." Both sides need to learn to trust one another again. The AdBlock+ Chrome extension now comes with an automatic whitelist of sites; 'guilt' website UX works to remind us that a few banner ads might be the vital price we pay to keep our favorite mid-sized content sites free and accessible. If content providers work to restore sanity to the web on their end, then our need for adblocking software as users will diminish accordingly. It's a complex balance that will need a lot of good will from both 'sides' - but if we're going to save the web as we know it, a truce might be necessary.

Building a better web? How about checking out our Essential Web Dev? Get five titles for only $50!

Align your product experience strategy with business needs

Packt Editorial Staff
02 May 2018
10 min read
Build a product experience strategy around the needs of stakeholders

Product experience strategists need to conduct thorough research to ensure that the products being developed and launched align with the goals and needs of the business. Alignment is a bit of a buzzword that you're likely to see in HBR and other publications, but don't dismiss it - it isn't a trivial thing, and it certainly isn't an abstract thing. One of the pitfalls of product experience strategy - and product management more generally - is that understanding the needs of the business isn't actually that straightforward. There are lots of moving parts and lots of stakeholders. And while everyone should be on the same page, even subtle differences can make life difficult.

This is why product experience strategists do detailed internal research. It helps designers to understand the company's vision and objectives for the product, and allows them to understand what's at stake. Based on this, they work with stakeholders to align product objectives and reach a shared understanding of the goals of design. Once organizational alignment is achieved, the strategist uses research insights to develop a product experience strategy. The research is simply a way of validating and supporting that strategy. The included research activities are:

Stakeholder and subject-matter expert (SME) interviews
Document reviews
Competitive research
Expert product reviews

Talk to key stakeholders

Stakeholders are typically senior executives who have a direct responsibility for, or influence on, the product. Stakeholders include product managers, who manage the planning and day-to-day activities associated with their product and have direct decision-making authority over its development. In projects that are important to the company, it is not uncommon for the executive leadership, from the chief executive down, to be among the stakeholders, given their influence and authority to direct overall product strategy.

The purpose of stakeholder interviews is to gather and understand the perspective of each individual stakeholder, and to align the perspectives of all stakeholders around a unified vision of the scope, purpose, outcomes, opportunities, and obstacles involved in undertaking a new product development project. Gaps among stakeholders on fundamental project objectives and priorities will lead to serious trouble down the road. It is best to surface such deviations as early as possible and help stakeholders reach a productive alignment.

The purpose of subject-matter expert (SME) interviews is to balance the strategic high-level thinking provided by stakeholders with the detailed insights of experienced employees who are recognized for their deep domain expertise. Sales, customer service, and technical support employees have a wealth of operational knowledge of products and customers, which makes them invaluable when analyzing current processes and challenges.

Prior to the interviews, the experience strategist prepares an interview guide. The purpose of the guide is to ensure the following:

All stakeholders can respond to the same questions
All research topics are covered if interviews are conducted by different interviewers
Interviews make the best use of stakeholders' valuable time

Some of the questions in the guide are general and directed at all participants; others are more specific and focus on the stakeholder's specific areas of responsibility. Similar guides are developed for SME interviews.
In-person interviews are best, because they take place at the onset of the project and provide a good opportunity to build rapport and trust between the designer and the interviewee. After a formal introduction regarding the purpose of the interview and general questions regarding the person's role and professional experience, the person is asked for their personal assessment of, and opinions on, various topics. Here is a sample of different topics:

Objectives and obstacles
- Prioritized goals for the project
- What does success look like?
- What kind of obstacles is the project facing, and what are suggestions to overcome them?

Competition
- Who are your top competitors?
- Strengths and weaknesses relative to the competition

Product features and functionality
- Which features are missing?
- Differentiating features
- Features to avoid

The interviews are designed to last no more than an hour and are documented with notes and audio recordings, if possible. The answers are compiled and analyzed, and the result is presented in a report. The report suggests a unified list of prioritized objectives, and highlights gaps and other risks that have been reported. The report is one of the inputs into the development of the overall product experience strategy.

Experts understand product experience better than anyone

Product expert reviews, sometimes referred to as heuristic evaluations, are professional assessments of a current product, performed by design experts for the purpose of identifying usability and user experience issues. The thinking behind the expert review technique is very practical: experience designers have the expertise to assess the experience quality of a product in a systematic way, using a set of accepted heuristics. A heuristic is a rule of thumb for assessing products. For example, the error prevention heuristic deals with how well the evaluated product prevents the user from making errors. The word heuristic often raises questions about its meaning, and the method has been criticized for inherent weaknesses due to the following:

Subjectivity of the evaluator
Expertise and domain knowledge of the evaluator
Cultural and demographic background of the evaluator

These weaknesses increase the probability that the outcome of an expert evaluation will reflect the biases and preferences of the evaluator, resulting in potentially different conclusions about the same product. Still, expert evaluations - especially if conducted by two evaluators whose findings are aligned - have proven to be an effective tool for experience practitioners who need a fast and cost-effective assessment of a product, particularly digital interfaces.

Jakob Nielsen developed the method in the early 1990s. Although there are other sets of heuristics, Nielsen's are probably the best known and most commonly used. His initial set of heuristics was first published in his book, Usability Engineering, and is reproduced here verbatim, as there is no need for modification:

Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

Match between system and the real world: The system should speak the user's language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.

Flexibility and efficiency of use: Accelerators--unseen by the novice user--may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.

Every product experience strategy needs solid competitor research

Most companies operate in a competitive marketplace, and a deep understanding of the competition is critical to success and survival. Here are a few of the questions that competitive research helps address:

How does a product or service compare to the competition?
What are the strengths and weaknesses of competing offerings?
What alternatives and choices does the target audience have?

Experience strategists use several methods to collect and analyze competitive information. From interviews with stakeholders and SMEs, they know who the direct competition is. In some product categories, such as automobiles and consumer products, companies can reverse-engineer competitive products and try to match or surpass their capabilities. Designers can also develop extensive experience analyses of such competitive products, because they can have first-hand experience with them. With some hi-tech products, however, some capabilities are cocooned within proprietary software or secret production processes. In these cases, designers can glean the capabilities from indirect evidence of use. The Internet is a main source of competitive information, from direct access to a product online, to help manuals, user guides, bulletin boards, reviews, and analysis in trade publications. Occasionally, unauthorized photos or documents are leaked into the public domain, and they provide clues - sometimes real and sometimes bogus - about a secret upcoming product.
Social media, too, is an important source of competitive data, in the form of customer reviews on Yelp, Amazon, or Facebook. With the wealth of this information, a practical strategy for surpassing the competition and delivering a better experience can be developed. For example, Uber has been a favorite car-hailing service for a while. The service has also generated public controversy and left riders and drivers dissatisfied with its policies, including its resistance to tipping. By design, a tipping function is not available in the app, which is the primary transaction method between the rider, the company, and the driver. Research indicates, however, that tipping for this kind of service is a common social norm and that most people tip because it makes them feel better. Not being able to tip places riders in an uncomfortable social setting and stirs negative emotions against Uber. The evidence of dissatisfaction can easily be collected from numerous web sources and from interviewing actual riders and drivers. For Uber competitors such as Lyft and Curb, making tipping an integrated part of their apps provides an immediate competitive edge, improving the experience of both riders, who have an option to reward the driver for good service, and drivers, who benefit from an increased income. This, and additional improvements over the inferior Uber experience, become part of an overall experience strategy focused on improving the likelihood that riders and drivers will dump Uber in their favor.

[box type="note" align="" class="" width=""]You read an extract from the book Exploring Experience Design written by Ezra Schwartz. This book will help you unify Customer Experience, User Experience and more to shape lasting customer engagement in a world of rapid change.[/box]

10 tools that will improve your web development workflow
5 things to consider when developing an eCommerce website
RESTful Java Web Services Design

Node 6: What to Expect

Darwin Corn
16 Jun 2016
5 min read
I have a confession to make—I'm an Archer. I've taken my fair share of grief for this, with everyone from the network admin who got by on Kubuntu to the C dev running Gentoo getting in on the fun. As I told the latter, Gentoo is for lazy people; lazy people who want to have their cake and eat it too with respect to the bleeding edge. Given all that, you'll understand that when I was asked to write this post, my inner Archer balked at the thought of digging through source and documentation to distill this information into something palatable for your average back-end developer. I crossed my fingers and wandered over to the Node GitHub, hoping for a clear roadmap, or at least a bunch of issues milestoned to 6.0.0. I was disappointed, even though a lively discussion on dropping XP/Vista support briefly captured my attention.

I've also been primarily doing front-end work for the last few months. While I pride myself on being an IT guy who does web stuff every now and then (someone who cares more about the CI infrastructure than the product that ships), circumstances have dictated that I be in the 'mockup' phase for a couple of volunteer gigs as well as my own projects, all at once. Convenient? Sure. Not exactly conducive to my generalist, Renaissance Man self-image, though (as an aside, the annotations in Rap Genius read like a Joseph Ducreux meme generator). Back-end framework and functionality have been relegated to the back of my mind. As such, I watched Node bring io.js back into the fold and go to semver this fall with all the interest of a house cat - a cat that cares more about the mouse in front of him than the one behind the wall.

By the time Node 6 drops in April, I'm going to be feasting on back-end work. It would behoove me to understand the new tools I'll be using. So I ignored the Archer on my left shoulder and, listening to my editor on the right, dug around the source for a bit. Of note is that there's nothing near as groundbreaking as the 4.0.0 release, where Node adopted the ES6 support introduced in io.js 1.0.0. That's not so much a slight at Node development as a reflection of the timeline of JavaScript development. That said, here are some highlights from my foray into the issue tracker:

Features

Core support for Promises

I've been playing around with Ember for so long that I was frankly surprised that Node didn't include core support for Promises. A quick look at the history of Node shows that they originally existed (based on EventEmitters), but that support was removed in 0.2, and ever since, server-side promises have been the domain of various npm modules. The PR is still open, so it remains to be seen if the initial implementation of this makes it into 6.0.0.
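In the meantime, the Promise constructor in Node's bundled V8 already covers the basics. Here's a minimal sketch (my own, not from the PR) of hand-wrapping a callback-style core API in a Promise; the helper name is hypothetical:

// Hypothetical helper: promisify fs.readFile by hand using the
// Promise implementation that ships with Node's V8.
var fs = require('fs');

function readFileAsync(path) {
  return new Promise(function (resolve, reject) {
    fs.readFile(path, 'utf8', function (err, data) {
      if (err) return reject(err); // propagate the callback error
      resolve(data);
    });
  });
}

readFileAsync('./package.json')
  .then(function (contents) { console.log('read %d chars', contents.length); })
  .catch(function (err) { console.error('read failed:', err); });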
Changing FIPS crypto support

In case you're not familiar with the Federal Information Processing Standard, MDN has a great article explaining it. Node's implementation left some functionality to be desired, namely that Node compiled with FIPS support couldn't use non-FIPS hashing algorithms (md5 for one, breaking npm). Node 6 compiled with FIPS support will likely both fix that and flip its functionality, such that, by default, FIPS support will be disabled (in Node compiled with it), requiring explicit invocation in order to be used.

Replacing the C http-parser and DNS resolvers with JS implementations

These PRs are still open (and have been for almost a year), so I'd say seeing them land in 6.0.0 is unlikely, though the latest discussion on the latter seems centered around CI, so I might be wrong about it not making 6.0.0. That being said, while not fundamentally changing functionality too much, it will be cool to see more of the codebase written in JavaScript.

Deprecations

fs reevaluation

This primarily affects pre-v4 graceful-fs, on which many modules, including major ones such as npm itself and less, are dependent. The problem isn't that these modules depend on graceful-fs; it's that they don't require v4 or newer. This could lead to breakage for a lot of Node apps whose developers aren't on their game when it comes to keeping dependencies up to date. As a set-and-forget kind of guy, I'm thankful this is just getting deprecated and not outright removed.

Removals

sys

The sys module was deprecated in Node 4 and has always been deprecated in io.js. There was some discussion on its removal last summer, and it was milestoned to 6.0.0. It's still merely deprecated as of Node 5.7.0, so we'll likely have to wait until release to see if it's actually removed.

Nothing groundbreaking, but there is some maturation of the platform in the codebase as well as in the deprecations and outright removals. If you live on the bleeding edge like I do, you'll hop on over to Node 6 as soon as it drops in April, but there's little incentive to make the leap for those taking a more conservative approach to Node development.

About the Author

Darwin Corn is a Systems Analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the Information Technology world.

5 Mistakes Web Developers Make When Working with MongoDB

Charanjit Singh
21 Oct 2016
5 min read
MongoDB is a popular document-based NoSQL database. In this post, I am listing some mistakes that I've found developers make while working on MongoDB projects.

Database accessible from the Internet

Allowing your MongoDB database to be accessible from the Internet is the most common mistake I've found developers make in the wild. MongoDB's default configuration used to expose the database to the Internet; that is, you could connect to the database using the URL of the server it was running on. That makes perfect sense for beginners who might be deploying a database on a different machine, given that it is the path of least resistance, but in the real world it's a bad default that often goes unnoticed. A database (whether Mongo or any other) should be accessible only to your app. It should be hidden in a private local network that provides access to your app's server only. Although this vulnerability has been fixed in newer versions of MongoDB, make sure you change the config if you're upgrading your database from a previous version, and that the new junior developer you hired didn't expose to the Internet the database your application server connects to. If it's a requirement to have a database accessible from the open Internet, pay special attention to securing it. A whitelist of IP addresses that alone are allowed to access the database is almost always a good idea.

Not having multiple database users with access roles

Another possible security risk is having a single MongoDB database user doing all of the work. This usually happens when developers with little knowledge, experience, or interest in databases handle the database management or setup, and when database management is treated as lesser work in smaller software shops (the kind I get hired for mostly). Well, it is not. A database is as important as the app itself; your app is most likely mainly providing an interface to the database. Having a single user to manage the database and using the same user in the application for accessing the database is almost never a good idea. Many times this exposes vulnerabilities that could've been avoided if the database user had limited access in the first place. NoSQL doesn't mean "secure" by default. Security should be considered when setting the database up, and not left as something to be done "properly" after shipping.
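As a minimal sketch of what separation can look like in the mongo shell (the user names, passwords, and database name here are hypothetical placeholders):

// Hypothetical example: one user for administration, one limited
// app user with readWrite on a single database.
use admin
db.createUser({
  user: "siteAdmin",
  pwd: "change-me",
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})

use myapp
db.createUser({
  user: "myappUser",
  pwd: "change-me-too",
  roles: [ { role: "readWrite", db: "myapp" } ]
})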
Schema-less doesn't mean thoughtless

When someone asked Ronny why he chose MongoDB for his shiny new app, his response was that "it's schema-less, so it's more flexible". Schema-less can prove to be quite a useful feature, but with great power comes great responsibility. Oftentimes, I have found teams struggling with apps because they didn't think through the structure for storing their data when they started. MongoDB doesn't require you to have a schema, but that doesn't mean you shouldn't properly think about your data structure. Rushing in without putting much thought into how you're going to structure your documents is a sure recipe for disaster. Your app might be small, simple, and easy right now, but simple apps become complicated very quickly. You owe it to your future self to have a proper, well-thought-out database schema. Most programming languages that provide an interface to MongoDB have libraries to impose some kind of schema on it. Pick your favorite and use it religiously.
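In Node, for instance, Mongoose is one popular choice. A minimal sketch (the model and field names are hypothetical):

// Hypothetical sketch: imposing an application-level schema with Mongoose.
var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
  email:     { type: String, required: true, unique: true },
  name:      String,
  createdAt: { type: Date, default: Date.now }
});

// Documents that don't fit the schema are rejected at the app layer.
var User = mongoose.model('User', userSchema);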
Premature Sharding

Sharding is an optimization, so doing it too soon is usually a bad idea. Many times a single replica set is enough to run a fast, smooth MongoDB deployment that meets all of your needs. Most of the time, a bad schema and bad indexing are the performance bottlenecks users try to solve with sharding. In such cases, sharding might do more harm, because you end up with poorly tuned shards that don't perform well either. Sharding should be considered when a specific resource, like RAM or concurrency, becomes a performance bottleneck on a particular machine. As a general rule, if your database fits on a single server, sharding provides little benefit anyway. Most MongoDB setups work successfully without ever needing sharding.

Replicas as backup

Replicas are not backup. You need to have a proper backup system in place for your database and not treat replicas as a backup mechanism. Consider what would happen if you deployed the wrong code and it ruined the database: the replicas would simply follow the master and suffer the same damage. There are a variety of ways to back up and restore your MongoDB, be it filesystem snapshots, mongodump, or a third-party service like MMS. Having proper, timely fire drills is also very important. You should be confident that the backups you're making can actually be used in a real-life scenario. Practice restoring your backups before you actually need them and verify everything works as expected. A catastrophic failure in your production system should not be the first time you try to restore from backups (often only to find out you've been backing up corrupt data).
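A minimal fire-drill sketch with the stock tools might look like this (the paths and database names are hypothetical):

# Hypothetical backup and restore drill using mongodump/mongorestore.
mongodump --db myapp --out /backups/2016-10-21
# Restore into a scratch database and verify the data actually loads.
mongorestore --db myapp_drill /backups/2016-10-21/myapp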
About the author

Charanjit Singh is a freelance JavaScript (React/Express) developer. Being an avid fan of functional programming, he's on his way to taking on Haskell/Purescript as his main professional languages.

Beating jQuery: Making a Web Framework Worth its Weight in Code

Erik Kappelman
20 Apr 2016
5 min read
Let me give you a quick disclaimer: this is a bit of a manifesto. Last year I started a little technology company with some friends of mine. We were lucky enough to get a solid client for web development right away. He was an author in need of a blogging app to communicate with the fans of his upcoming book. In another post I have detailed how I used Angular.js, among other tools, to build this responsive, dynamic web app. Using Angular.js is a wonderful experience and I would recommend it to anyone. However, Angular.js really only looks good by comparison. By this I mean that if we allow any web framework to exist in a vacuum, and not simply rank them against one another, they are all pretty bad. Before you gather your pitchforks and torches to defend your favorite flavor, let me explain myself.

What I am arguing in this post is that many of the frameworks we use are not worth their weight in code. In other words, we add a whole lot of code to our apps when we import the frameworks, and then in practice using the framework is only a little bit better than using jQuery, or even pure JavaScript. And yes, I know that using jQuery means including a whole bunch of code in your web app, but frameworks like Angular.js are often built on top of jQuery anyway. So, the weight of jQuery seems to be a necessary evil.

Let's start with a simple HTTP request for information from the backend. This is what it looks like in Angular.js:

$http.get('/dataSource').success(function(data) {
  $scope.pageData = data;
});

Here is a similar request using Ember.js:

App.DataRoute = Ember.Route.extend({
  model: function(params) {
    return this.store.find('data', params.data_id);
  }
});

Here is a similar jQuery request:

$.get("ajax/stuff.html", function(data) {
  $(".result").html(data);
  alert("Load was performed.");
});

It's important for readers to remember that I am a front-end web developer. By this, I mean I am sure there are complicated, technical, and valid reasons why Ember.js and Angular.js are far superior to using jQuery. But, as a front-end developer, I am interested in speed and simplicity. When I look at these HTTP requests and see that they are overwhelmingly similar, I begin to wonder if these frameworks are actually getting any better.

One of the big draws of Angular.js and Ember.js is the use of handlebars-style templating to ease the creation of dynamic content. Angular.js using such templating looks something like this:

<h1>{{ dynamicStuff }}</h1>

This is great because I can go into my controller, make changes to the dynamicStuff variable, and it shows up on my page. However, the following accomplishes a similar task using jQuery:

$(function () {
  var dynamicStuff = "This is dog";
  $('h1').html(dynamicStuff);
});

I admit that there are many ways in which Angular.js or Ember.js make developing easier. DOM manipulation definitely takes less code, and overall the development process is faster. However, there are many times when the limitations of the framework drive the development process. This means that developers sacrifice or change functionality simply to fit the framework. Of course, this is somewhat expected. What I am trying to say with this post is that if we are going to sacrifice load times and constrict our development methods in order to use the framework of our choice, can't it at least be simpler to use? So, just for the sake of advancement, let's think about what the perfect web framework would be able to do. First of all, there needs to be less setup.
The brevity and simplicity of the HTTP request in Angular.js is great, but it requires injecting the correct dependencies in multiple files. This adds stress, opportunities to make mistakes, and development time. So, instead of requiring the developer to grab each specific tool for each specific implementation, what if the framework took care of that for you? By this I mean, what if I could make an HTTP request like this:

http('targetURL', get, data)

and, when the source is compiled or interpreted, the dependencies needed for this HTTP request were dynamically brought into the mix? This way we could make a simpler HTTP request and avoid the hassle of setting up the dependencies. As far as DOM manipulation goes, the handlebars seem to be about as good as it gets. However, there need to be better ways to target individual instances of a repeated element, such as the <p> tags holding the captions in a photo gallery. The current solutions for problems like this are overly complex, especially when the issue involves one of the most common things on the Internet: a photo gallery.

About the Author

As you can see, I am more of a critic than a problem solver. I believe the issues I bring up here are valid. As we all become more and more entrenched in the Internet of Things, it would be nice if the development process caught up with the standards of ease that end users demand.

GDPR is good for everyone: businesses, developers, customers

Richard Gall
14 Apr 2018
5 min read
Concern around GDPR is palpable, but time is running out. It appears that many businesses don't really know what they're doing. At his Congressional hearing, Mark Zuckerberg's notes read "don't say we're already GDPR compliant" - if Facebook isn't ready yet, how could the average medium-sized business be? But the truth is that GDPR couldn't have come at a better time. Thanks in part to the Facebook and Cambridge Analytica scandal, the question of user data and online privacy has never been so audible within public discourse. That level of public interest wasn't around a year ago. Ultimately, GDPR is the best way to tackle these issues. It forces businesses to adopt a different level of focus - and care - towards their users. It forces everyone to ask: what counts as personal data? Who has access to it? Who is accountable for the security of that data? These aren't just points of interest for EU bureaucrats. They are fundamental questions about how businesses own and manage relationships with customers.

GDPR is good news for web developers

In turn, this means GDPR is good news for those working in development too. If you work in web development or UX, it's likely that you've experienced frustration when working against the requirements and feedback of senior stakeholders. Often, the needs of users are misunderstood or even ignored in favor of what the business needs. This is especially true when management lacks technical knowledge and makes too many assumptions. At its worst, it can lead down the path of 'dark patterns', where UX is designed in such a way as to 'trick' customers into behaving in a certain way. But even when intentions aren't that evil, the mindset that refuses to take user privacy - and simple user desires - seriously can be damaging. Ironically, the problems this sort of negligence causes aren't just legal ones. It's also bad for business. That's because when you engineer everything around what's best for the business in a crude and thoughtless way, you make life hard for users and customers. This means:

Customers simply have a bad experience and could get a better one elsewhere
Customers lose trust
Your brand is damaged

GDPR will change bad habits in businesses

What GDPR does, then, is force businesses to get out of the habit of lazy thinking. It makes issues around UX and data protection so much more important than they otherwise would be. It also forces businesses to start taking the way software is built and managed much more seriously. This could mean a change in the way that developers work within their businesses in the future. Silos won't just be inefficient; they might lead to a legal crisis. Development teams will have to work closely with legal, management, and data teams to ensure that the software they are developing is GDPR compliant. Of course, this will also require a good deal of developer training to get everyone fully briefed on the new landscape. It also means we might see new roles like Chief Data Officer becoming more prominent. But it's worth remembering that, for non-developers, GDPR is also going to require much more technical understanding.
If recent scandals have shown us anything, it's that a lot of people don't fully understand the capabilities that even the smallest organizations have at their disposal. GDPR will force the non-technical to become more informed about how software and data interact - and, most importantly, how software can sometimes exploit or protect users.

GDPR will give developers a renewed focus on the code they write

Equally, for developers, GDPR forces a renewed focus on the code they write. Discussions around standards have been a central point of contention in the open source world for some time. There has always been an unavoidable, quiet tension between innovation and standards compliance. Writing in Smashing Magazine, digital law expert Heather Burns has some very good advice on this: "Your coding standards must be preventive as well. You should disable unsafe or unnecessary modules, particularly in APIs and third-party libraries. An audit of what constitutes an unsafe module should be about privacy by design, such as the unnecessary capture and retention of personal data, as well as security vulnerabilities. Likewise, code reviews should include an audit for privacy by design principles, including mapping where data is physically and virtually stored, and how that data is protected, encrypted, and sandboxed."

Sure, all of this seems like a headache, but it should make life better for users and customers. And while it might seem frustrating not to be able to track users the way we might have in the old world, by forcing everyone to focus on what users really want - not what we want them to want - we'll ultimately get to a place that's better for everyone.

Why you should learn Isomorphic JavaScript in 2017

Sam Wood
26 Jan 2017
3 min read
One of the great challenges of JavaScript development has been wrangling your code for both the server side and the client side of your site or app. Fullstack JS devs have worked to master the skills to work on both the front end and the back end, and numerous JS libraries and frameworks have been created to make your life easier. That's why Isomorphic JavaScript is your Next Big Thing to learn for 2017.

What even is Isomorphic JavaScript?

Isomorphic JavaScript applications are JavaScript applications that run on both the client and the server side. The term comes from a mathematical concept, whereby a property remains constant even as its context changes. Isomorphic JavaScript therefore shares the same code, whether it's running in the context of the backend or the frontend. It's often called the 'holy grail' of web app development.

Why should I use Isomorphic JavaScript?

"[Isomorphic JavaScript] provides several advantages over how things were done 'traditionally'. Faster perceived load times and simplified code maintenance, to name just two," says Google Engineer and Packt author Matt Frisbie in our 2017 Developer Talk Report. Netflix, Facebook, and Airbnb have all adopted isomorphic libraries for building their JS apps. Isomorphic JS apps are *fast*: operating off one base of code means a page can be served without waiting for all the client-side JavaScript to load and parse first. It might only be a second, but that slow load time can be all it takes to frustrate and lose a user. Isomorphic apps can render HTML content directly on the server and hand it straight to the browser, ensuring a better user experience overall. Isomorphic JavaScript isn't just quick for your users; it's also quick for you. By utilizing one framework that runs on both the client and the server, you'll open yourself up to a world of faster development times and easier code maintenance.
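At its simplest, 'sharing the same code' can be as small as one module used verbatim in both environments. Here's a minimal sketch (the module and function names are hypothetical, not from any particular framework):

// validate-username.js - hypothetical module that runs unchanged in
// Node on the server and, via a bundler, in the browser.
function isValidUsername(name) {
  return typeof name === 'string' && /^[a-z0-9_]{3,16}$/.test(name);
}

// Export for Node/CommonJS; in the browser a bundler supplies module.exports.
if (typeof module !== 'undefined' && module.exports) {
  module.exports = { isValidUsername: isValidUsername };
}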
What tools should I learn for Isomorphic JavaScript?

The premier and most powerful tool for isomorphic JS is probably Meteor - the fullstack JavaScript platform. With 10 lines of JavaScript in Meteor, you can do what would take you thousands elsewhere. No need to worry about building your own stack of libraries and tools - Meteor does it all in one single package. Other isomorphic-focused libraries include Rendr, created by Airbnb. Rendr allows you to build a Backbone.js + Handlebars.js single-page app that can also be fully rendered on the server side - it was used to build the Airbnb mobile web app for drastically improved page load times. Rendr also strives to be a library rather than a framework, meaning that it can be slotted into your stack as you like, giving you a bit more flexibility than a complete solution such as Meteor.

Five Benefits of .NET Going Open Source

Ed Bowkett
12 Dec 2014
2 min read
By this point, I'm sure almost everyone has heard the news of Microsoft's decision to open source the .NET framework. This blog will cover what the benefits of this decision are for developers, and what it means. Remember, this is just an opinion, and I'm sure there are differing views out there in the wider community.

More variety

People no longer have to stick with Windows to develop .NET applications. They can choose between operating systems, which doesn't lock developers down. It makes the marketplace more competitive and ultimately opens .NET up to a wider audience. The primary advantage of this announcement is that .NET developers can build more apps to run in more places, on more platforms. It also opens developers up to one of the largest growing operating systems in the world: Linux.

Innovate .NET

Making .NET open source allows the code to be revised and rewritten. This will have dramatic outcomes for .NET, and it will be interesting to see what developers do with the code as they continually look for new functionality in .NET.

Cross-platform development

The ability to develop across different operating systems is now massive. Previously, this was only available with the Mono project, Xamarin. With Microsoft looking to add more Xamarin tech to Visual Studio, this will be an interesting development to watch moving into 2015.

A new direction for Microsoft

By opening up .NET as open source software, Microsoft seems to have adopted a more "developer-friendly" approach under its new CEO, Satya Nadella. That's not to say the previous CEO ignored developers, but being more open as a company, and changing its view on open source, has allowed Microsoft to reach out to communities more easily and quickly. Take the recent deal Microsoft made with Docker: it looks like Microsoft is heading in the right direction in terms of closing the gap between the company and developers.

Acknowledgement of other operating systems

When .NET first came around, circa 2002, the entire world ran on Windows—it was the dominant operating system, certainly in terms of the mass audience. Today, that simply isn't the case—you have Mac OS X, you have Linux—there is much more variety. By going open source, .NET has acknowledged that Windows is no longer the number one option in workplaces.