
Tech Guides


The Real Tech Knowledge Gap - Your Boss

Richard Gall
13 Aug 2015
6 min read
The technology skills gap is, apparently, an abyss of economic despair and financial ruin. It has been haunting the pages of business websites and online tech magazines for some time now. It effectively follows a lazy but common line of thought – that young people aren't capable of dealing with the rigours of STEM subjects. That perspective ignores a similar, but even more problematic issue. It's not so much that we have a skills gap; it's more of a fundamental 'understanding gap'. And it's not the kids who are the problem – it's senior management.

There are plenty of memes that reflect this state of affairs on social media – 'When my boss tries to code' being a personal favourite. Anyone working in a technical role recognizes variations of these day-to-day workplace problems. From communicating design constraints on a website, to discussing how to understand and act on data, that gap between those with technical knowledge and skills and those with strategic control – and authority – is a common experience of the modern workplace.

Your boss, trying to code (Courtesy of devopsreactions.tumblr.com)

However, just because there's a knowledge gap among organizational leaders doesn't mean that they're not aware of it – if anything, they're more aware of it than ever. And this is where the problem really lies. Business leaders expect a lot from technology – a morning spent reading Forbes, Gartner, or any other business website can yield a long to-do list. A responsive website, a Big Data strategy, a cloud migration – these buzzwords that float around in the upper echelons of industry, among the sort of people that use the phrase 'thought leader' sincerely, put pressure on those with real expertise. Often, they are asked to deliver something they simply cannot.

'Chief Whatever Officers' – A Symptom of Managerial Failure

One of the symptoms of this problem is the phenomenon of the 'Chief Whatever Officer' – someone leading a certain project or area of the business without any real authority within an organization. The Chief Data Officer is one of the most common examples – someone who can plan and implement, say, a Big Data strategy, and can use data to improve the business in different ways. Effectively, this Chief Data Officer sits in lieu of senior management's knowledge. This isn't immediately a problem – after all, this is basically how any organization is built, with people doing different jobs based on their skillset. The problem arises when, to take this example, a data strategy is treated as a separate, siloed project within a business, rather than something that requires very real leadership and authority.

A lot of Big Data reports (you can find them pretty easily on any online business magazine looking for easy signups), and business think pieces such as this one from Forbes, and this from Read Write, talk a lot about the failure of Big Data projects. The Read Write piece is particularly interesting as it singles out 'corporate culture' as one of the biggest causes of Big Data project failure. However, if you want to be frank about it, that is ultimately the fault of senior managers – the very people responsible for cultivating an organizational culture. It goes on to argue for a 'cultural affinity for data', which isn't that helpful – the problem, as their own research highlights, isn't so much that there is no affinity for data, but that there is a fundamental lack of understanding of how it should be used.
Information Age recently published an article taking issue with the concept of the CWO: 'It is an absurd notion that we need to define and hire someone as a "chief" each time a challenge or opportunity arises that requires leadership attention and accountability. Isn't this what we pay the big bucks to the CEO and his or her team to do?' Whether or not you want to focus on the CWO here is irrelevant – the important point is ultimately that we're seeing senior management neglect their responsibilities, instead delegating to project managers with trendy job titles. The article never quite follows through with the implications of its point, choosing instead to link it to vague trends in management philosophy in which a few straw-man gurus are held responsible.

To put it another way, it's not simply that there's a knowledge gap, but that the very adjectives regularly espoused by the tech world – collaborative, agile, and innovative – are being used by business leaders to simultaneously relinquish and consolidate their position. This might sound counter-intuitive, and to a certain extent it is – especially if you want to avoid the common growing pains of an expanding business. As technology has become more and more integral to businesses all around the world, it comes to be seen as an easy solution that can deliver results in a really simple way. It's effectively a form of hubris – the idea that you can solve things with just a click of your fingers (or a new hire).

Pernicious tech solutionism is stopping you from doing your job

The mistake being made here is that while technology – from the algorithms that predict customer behaviour to responsive websites that offer seamless and elegant user experiences – always targets points of friction and inefficiency, actually using that technology – integrating it and understanding how it can be used most effectively within an organization – is far from frictionless. Of course, it's easy for anyone to make that mistake – just about everyone has, at some point, thought that something is going to revolutionize their lives or the way they work, only to be disappointed when it turns out they're still overworked and stressed. But it's those leading our businesses and organizations who are most prone to misunderstanding – and then overestimating – how different technologies could improve the organization. It's another form of what Evgeny Morozov calls 'tech solutionism'. It's particularly pernicious because it stops people who really can implement solutions from properly doing their job.

Adopting a More Holistic Approach

What can be done about this situation? Ultimately, over time those with the best understanding of how to use these tools will take greater control of these companies. But we'll probably also need to see better integration of technology – a more holistic approach is required. In theory, this is how the CWO role should function – with chiefs reporting horizontally, so different departments and projects are in dialogue with one another, rather than simply reporting to a superior. But the managerial knowledge gap stops this from happening – essentially, those leading technical projects are expected to simply deliver impressive benefits without necessarily having the resources or the support they need. Perhaps it's not just a knowledge gap but also a cultural change – one which your boss loves, but doesn't fully understand. If it is, then maybe it's time to finally, properly embrace it and try to really understand it.
That way we might not have to keep battling through failed projects, stretched budgets, and damaged egos.  


The Value of Existing 3D Data in the Age of Augmented & Virtual Reality

Stefan Minack
14 Mar 2017
5 min read
In this article, we take a look at what is necessary and what you can achieve with the data you might already possess (i.e. OBJ, FBX, Collada) to make it work in the context of virtual and augmented reality. As AR/VR technologies are expected to change the way human beings interact (as was the case with the advent of the PC, the Internet, and mobile), applying these technologies in practice will surely affect the way we convey messages to our customers, the way we do business together, and the day-to-day communication with our co-workers. These technologies equip us with visualization, information, and playfulness at the same time, and they are ready at hand wherever we go.

Getting Started

Working with real-time visualization can be rough. It can be difficult because using Virtual and Augmented Reality is no less than rendering images in real time, but with plenty of sensor data, or processing images from the device's camera, on top of it. Luckily there is enough software and hardware around that takes away the pain of dealing with the second part of that equation: processing the sensor data. There are software tools such as Vuforia and Wikitude, as well as hardware and software embedded into devices such as mobile phones and tablets. Nonetheless, we should keep in mind that processing sensor data still uses some of the system's resources and limits the space and computation cycles we will have left to render the visualization. We do have some constraints we need to optimize the content for, to run as smoothly as possible. This means we need to take care of things like the limited memory and processing power of the device, something that is very common in designing games. So for our use case, the 3D data can come in plenty of different formats and flavors. To make it feasible for our application, we need to do some work.

Two Types of Data

First, there is data that is only created to look as good as possible but does not have to be accurate in any kind of way. It is a mere visually correct representation of an object. If you are working in the field of creating high-end CGIs, it normally does not matter if an image takes two or two and a half hours to compute. It also doesn't matter if it takes 30 or 40 gigabytes of RAM to draw those images to a hard disk. Working with animations makes the processing time decrease, but we are still far away from creating those images in real time.

Secondly, there is the kind of data that is created by engineers—data that is a virtual representation of an actual physical object, with all its mechanical properties. This can go down to the point where every nut and every bolt on a machine, every small detail, is present in the model. Most of the time, this kind of data is problematic in different kinds of ways. On the one hand, it is not possible to display it on, let's say, a mobile device. On the other hand, this level of detail is usually confidential data, something an engineer does not want to have floating around somewhere on the Internet. In both cases there has to be some kind of data reduction. This could be achieved by just doing manual labor, in terms of removing objects that are not necessarily needed for the presentation. It could also be achieved by recreating models with the right amount of detail, or even by using algorithms, and by recreating or mimicking the visual characteristics of the materials used on the object we want to present.

Conclusion and Outlook

We are still talking about data that is already there.
Data that was already created in the process of designing and engineering a product, data we only have to take care of in the right and feasible way. While this real-time representation of a product cannot compete with the visual fidelity of a pre-rendered image or video, it can carry a lot of value by adding different levels of interaction and information to the 3D model.

Example of a 3D tour with interaction (Vuframe)

Practically speaking, it can be a deal breaker if a consumer sees whether an object (a couch, a TV, or a piano) will fit into her home or not. AR helps her bring immovable goods into her home. With VR she could do it the other way around and explore spaces that either do not exist yet or are hard to reach (future properties or production centers on the other side of the globe). In short, we are all visual thinkers to a great extent, and AR/VR will become the leading technologies facilitating this aspect.

Sofa in life-size Augmented Reality based on CAD data (Vuframe)

For companies and freelancing professionals that are into creating or working with 3D data, this is a huge step. They can utilize the data they have already created, optimize it, put it into a better context by adding interaction and different layers of information that can be displayed, and use it, for example, in sales and marketing activities, or even employee education. And by keeping in mind that this is an option, it is possible to create the data in a way that lets them skip most of the optimization process, which reduces costs and production time.

About the Author

Stefan Minack is a 3D artist and head of content at Vuframe. He's been working in 3D content creation on a daily basis for the past 8 years. Vuframe's mission is to democratize AR & VR by removing the tech barrier for everyone.


Firebase and React

Asbjørn Enge
27 Jan 2016
5 min read
Firebase is a realtime database for the modern web. Data in your Firebase is stored as JSON and synchronized in realtime to every connected client. React is a JavaScript library for creating user interfaces. It is declarative, composable, and promotes functional code. React lets you represent your UI as a function of its state. Together they are dynamite! In this post we will explore creating a React application with a Firebase backend. We will take a look at how we can use Firebase as a Flux store to drive our UI.

Create a project

Our app will be an SPA (Single Page Application) and we will leverage modern JavaScript tooling. We will be using npm as a package manager, babel for ES2015+ transpilation, and browserify for bundling together our scripts. Let's get started.

$ mkdir todo && cd todo
$ npm init
$ npm install --save react@0.14.0-rc1 react-dom@0.14.0-rc1 firebase
$ npm install --save-dev budo babelify

We have now made a folder for our app, todo. In that, we have created a new project using npm init (defaults are fine) and we have installed React, Firebase, and some other packages. I didn't mention budo before, but it is a great browserify development server.

NOTE: At the time of writing, React is at version 0.13.3. In the next major release (0.14), React itself is decoupled from the DOM rendering. In order for this blog post not to be obsolete in a matter of weeks, we use the 0.14.0-rc1 version of React. Since we intend to render DOM nodes from our React components, we also need to install the react-dom module.

React basics

Let's start by creating a basic React application. We will be making a list of TODO items. Make a file index.js in the todo directory and open it in your favorite editor.

import React from 'react'
import ReactDOM from 'react-dom'

class TodoApp extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      todos : [
        { text : 'Brew coffee', id : 1 },
        { text : 'Drink coffee', id : 2 }
      ]
    }
  }
  render() {
    let todos = this.state.todos.map((todo) => <li key={todo.id}>{todo.text}</li>)
    return (
      <div className="todoAppWrapper">
        <ul>
          {todos}
        </ul>
      </div>
    )
  }
}

ReactDOM.render(<TodoApp />, document.body)

Now we can run that application in a browser using budo (browserify under the hood).

$ budo index.js --live -- -t babelify

Navigate to http://localhost:9966/ and verify that you can see our two TODO items on the page. Notice the --live parameter we pass to budo. It automatically enables livereload for the bundle. If you're not familiar with it, get familiar with it!

Set up Firebase

For setting up a new Firebase, check out the below details on the subject.

FireFlux

To build large applications with React, a good approach is to use the Flux application architecture. At the heart of Flux sits the store. A Flux store holds application state and triggers events whenever that state changes. Components listen to these events and re-render accordingly. This aligns perfectly with how Firebase works. Firebase holds your data/state and triggers events whenever something changes. So what we are going to do is use our Firebase as a Flux store. We'll call it FireFlux :-) Make a file fireflux.js in the todo directory and open it in your favorite editor.
import Firebase from 'firebase/lib/firebase-web'
import { EventEmitter } from 'events'

const ref = new Firebase('https://<name>.firebaseio.com/todos')
const fireflux = new EventEmitter()

fireflux.store = {
  todos : []
}

fireflux.actions = {
  addTodo : function(text) {
    ref.push({ text : text })
  },
  removeTodo : function(todo) {
    ref.child(todo.id).remove()
  }
}

ref.on('value', (snap) => {
  let val = snap.val() || []
  if (typeof val == 'object') val = Object.keys(val).map((id) => {
    let todo = val[id]
    todo.id = id
    return todo
  })
  fireflux.store.todos = val
  fireflux.emit('change')
})

export { fireflux as default }

Notice we import the firebase/lib/firebase-web library from the Firebase module. This module includes both a browser and a node version of the Firebase library; we want the browser version. The fireflux object is an EventEmitter. This means it has functionality like .on() to listen for and .emit() to trigger events. We attach some additional objects: store and actions. The store will hold our todos, and the actions are just convenience functions to interact with our store. Whenever Firebase has updated data - ref.on('value', fn) - it will update fireflux.store.todos and trigger the change event. Let's see how we can hook this up to our React components.

import React from 'react'
import ReactDOM from 'react-dom'
import fireflux from './fireflux'

class TodoApp extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      todos : []
    }
  }
  render() {
    let todos = this.state.todos.map((todo) => {
      return (
        <li key={todo.id}>
          <button onClick={this.removeTodo.bind(this, todo)}>done</button>
          {todo.text}
        </li>
      )
    })
    return (
      <div className="todoAppWrapper">
        <button onClick={this.addTodo}>Add</button>
        <ul>
          {todos}
        </ul>
      </div>
    )
  }
  addTodo() {
    let todo = window.prompt("Input your task")
    fireflux.actions.addTodo(todo)
  }
  removeTodo(todo) {
    fireflux.actions.removeTodo(todo)
  }
  componentDidMount() {
    fireflux.on('change', () => {
      this.setState({ todos : fireflux.store.todos })
    })
  }
}

ReactDOM.render(<TodoApp />, document.body)

First, take a look at TodoApp's componentDidMount. It sets up a listener for FireFlux's change event and updates its state accordingly. Calling this.setState on a React component triggers a re-render of the component. We have also included an Add button and some done buttons. They make use of fireflux.actions.* to interact with Firebase. Give them a try and notice how the interface automatically updates when you add and finish items. Hopefully you can now hit done for the last one!

About the author

Asbjørn Enge is a software enthusiast living in Sandnes, Norway. He is passionate about free software and the web. He cares about modular design, simplicity, and readability, and his preferred languages are Python and JavaScript. He can be found on Twitter @asbjornenge
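One caveat worth noting about the FireFlux setup above (this is my own addition, not part of the original post): the TodoApp subscribes to the change event in componentDidMount but never unsubscribes, so an app that mounts and unmounts the component could end up calling setState on an unmounted component. A minimal, hedged sketch of the cleanup, assuming the React 0.14-era lifecycle methods and Node's EventEmitter API, might look like this:

componentDidMount() {
  // Keep a reference to the handler so it can be removed later
  this.onStoreChange = () => {
    this.setState({ todos : fireflux.store.todos })
  }
  fireflux.on('change', this.onStoreChange)
}
componentWillUnmount() {
  // EventEmitter's removeListener avoids setState calls after unmount
  fireflux.removeListener('change', this.onStoreChange)
}

For a single, permanently mounted TodoApp like the one in this tutorial the difference is cosmetic, but it matters as soon as components that subscribe to a store are created and destroyed repeatedly.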


Being a Data Scientist with Jupyter – Part 2

Marijn van Vliet
04 May 2016
8 min read
This is the second part of a two-part piece on Jupyter, a computing platform used by many scientists to perform their data analysis and modeling. This second part will dive into some code and give you a taste of what it is like to be a data scientist. If you want to type along, you will need a Python installation with the following packages: the Jupyter notebook (formerly the IPython notebook) and Python's scientific stack. For installation instructions, see here or here. Go ahead and fire up the notebook by typing jupyter notebook into a command prompt, which will start a web server and point your browser to localhost:8888, and click on the button to create a new notebook backed by an IPython kernel. The code for our first cell will be the following:

Cell 1:

%pylab inline

By executing the cell (shift + enter), Jupyter will populate the namespace with various functions from the NumPy and Matplotlib packages, as well as configure the plotting engine to display figures as inline HTML images.

Output 1:

Populating the interactive namespace from numpy and matplotlib

The experiment

I'm a neuroscientist myself, so I'm going to show you a magic trick I once performed for my students. One student would volunteer to be equipped with an EEG cap and was put in front of a screen. On the screen, nine playing cards were presented to the volunteer with the instruction, "Pick any of these cards."

Image of the different cards that the volunteer could select

After the volunteer memorized the card, playing cards would flash across the screen one by one. The volunteer would mentally count the number of times his/her card was shown and not say anything to anyone. At the end of the sequence, I would analyze the EEG data and could tell with frightful accuracy which of the cards the volunteer had picked. The secret to the trick is an EEG component called the P300: a sharp peak in the signal when something grabs your attention (such as your card flashing across the screen).

The data

I've got a recording of myself as a volunteer; grab it here. It is stored as a MATLAB file, which can be loaded using SciPy's loadmat function. The code will be the following.

Cell 2:

import scipy.io                           # Import the IO module of SciPy
m = scipy.io.loadmat('tutorial1-01.mat')  # Load the MATLAB file
EEG = m['EEG']                            # The EEG data stream
labels = m['labels'].flatten()            # Markers indicating when which card was shown

# The 9 possible cards the volunteer could have picked
cards = [
    'Ace of spades',
    'Jack of clubs',
    'Queen of hearts',
    'King of diamonds',
    '10 of spades',
    '3 of clubs',
    '10 of hearts',
    '3 of diamonds',
    'King of spades',
]

The preceding code slaps the data onto our workbench. From here, we can use a huge assortment of tools to visualize and manipulate the data. The EEG and labels variables are of the numpy.ndarray type, which is a data structure that is the bread and butter of data analysis in Python. It makes it easy to work with numeric data in the form of a multidimensional array. For example, we can query the size of the array via the following code.

Cell 3:

print 'EEG dimensions:', EEG.shape
print 'Label dimensions:', labels.shape

Output 3:

EEG dimensions: (7, 288349)
Label dimensions: (288349,)

I recorded EEG with seven electrodes, collecting numerous samples over time. Let's visualize the EEG stream through the following code.
Cell 4:

figure(figsize=(15,3))    # Make a new figure of the given dimensions (in inches)
bases = 100 * arange(7)   # Add some vertical whitespace between the 7 channels
plot(EEG.T + bases)       # The .T property returns a version where rows and columns are transposed
xlabel('Time (samples)')  # Label the X-axis, a good scientist always labels his/her axes!

Output 4:

(Figure: output of cell 4)

Note that NumPy's arrays are very clever concerning arithmetical operators such as addition. Adding a single value to an array will add the value to each element in the array. Adding two equally sized arrays will sum up the corresponding elements. Adding a 1D array (a vector) to a 2D array (a matrix) will add the 1D array to every row of the 2D array. This is known as broadcasting and can save a ton of tedious for loops.

The labels variable is a 1D array that contains mostly zeros. However, on the exact onset of the presentation of a playing card, it contains the integer index (starting from 1) of the card being shown. Take a look at the following code.

Cell 5:

figure(figsize=(15,3))
scatter(arange(len(labels)), labels, edgecolor='white')  # Scatter plot

Output 5:

(Figure: output of cell 5)

Slicing up the data

Cards were shown at a rate of two per second. We are interested in the response generated whenever a card was shown, so we cut one-second-long pieces of the EEG signal that start from the moment a card was shown. These pieces will be named "trials". A useful function here is flatnonzero, which returns all the indices of an array that contain a non-zero value. It effectively gives us the time (as an index) when a card was shown, if we use it in a clever way. Execute the following code.

Cell 6:

# Get the onset of the presentation of each card
onsets = flatnonzero(labels)
print 'Onset of the first 10 cards:', onsets[:10]
print 'Total number of onsets:', len(onsets)

# Here is how we can use the onsets variable
classes = labels[onsets]
print 'First 10 cards shown:', classes[:10]

Output 6:

Onset of the first 10 cards: [ 7789  8790  9814 10838 11862 12886 13910 14934 15958 16982]
Total number of onsets: 270
First 10 cards shown: [3 6 7 9 1 8 5 2 4 9]

In line 7, we used another cool feature of NumPy's arrays: fancy indexing. In addition to the classical indexing of an array using a single integer, we can index a NumPy array with another NumPy array, as long as the second array contains only integers. Another useful way to index arrays is to use slices. Let's use this to create a three-dimensional array containing all the trials. Take a look at the following code.

Cell 7:

nchannels = 7                      # 7 EEG channels
sample_rate = 2048.                # The sample rate of the EEG recording device was 2048Hz
nsamples = int(1.0 * sample_rate)  # one second's worth of data samples
ntrials = len(onsets)

trials = zeros((ntrials, nchannels, nsamples))
for i, onset in enumerate(onsets):
    # Extract a slice of EEG data
    trials[i, :, :] = EEG[:, onset:onset + nsamples]

print trials.shape

Output 7:

(270, 7, 2048)

We now have 270 trials (one trial for each time a card was flashed across the screen). Each trial consists of a little one-second piece of EEG recorded with seven channels using 2,048 samples. Let's plot one of the trials by running the following code.

Cell 8:

figure(figsize=(4,4))
bases = 100 * arange(7)  # Add some vertical whitespace between the 7 channels
plot(trials[0, :, :].T + bases)
xlabel('Time (samples)')

Output 8:

(Figure: output of cell 8)

Reading my mind

Looking at the individual trials is not all that informative.
Let's calculate the average response to each card and plot it. To get all the trials where a particular card is shown, we can use the final way to index a NumPy array: using another array consisting of Boolean values. This is called Boolean or masked indexing. Take a look at the following code.

Cell 9:

# Lets give each response a different color
colors = ['k', 'b', 'g', 'y', 'm', 'r', 'c', '#ffff00', '#aaaaaa']

figure(figsize=(4,8))
bases = 20 * arange(7)  # Add some vertical whitespace between the 7 channels

# Plot the mean EEG response to each card, such an average is called an ERP in the literature
for i, card in enumerate(cards):
    # Use boolean indexing to get the right trial indices
    erp = mean(trials[classes == i+1, :, :], axis=0)
    plot(erp.T + bases, color=colors[i])

Output 9:

(Figure: output of cell 9)

One of the cards jumps out: the one corresponding to the green line. This line corresponds to the third card, which turns out to be the Queen of Hearts. Yes, this was indeed the card I had picked!

Do you want to learn more?

This was a small taste of the pleasure it can be to manipulate data with modern tools such as Jupyter and Python's scientific stack. To learn more, take a look at the NumPy tutorial and the Matplotlib tutorial. To learn even more, I recommend Cyrille Rossant's IPython Interactive Computing and Visualization Cookbook.

About the author

Marijn van Vliet is a postdoctoral researcher at the Department of Neuroscience and Biomedical Engineering at Aalto University. He uses Jupyter to analyse EEG and MEG recordings of the human brain in order to understand more about how it processes written and spoken language. He can be found on Twitter @wmvanvliet.


Basic Game Engine Patterns that Make Game Development Simple

Daan van Berkel
04 Nov 2015
10 min read
The phrase "Do not reinvent the wheel" is often heard when writing software. It definitely makes sense not to spend your time on tasks that others have already solved. Still, reinventing the wheel has some real merit: it teaches you a lot about the problem, especially what decisions need to be made when solving it. So in this blog post we will reinvent a game engine to learn what is under the hood of most game development tools.

StopWatch

We are going to learn about game engines by creating a game from scratch. The game we will create is a variant of a stopwatch game. You will need to press the spacebar for a fixed amount of time. The object is to come as close as you can to a target time. You can play the finished game to get a feeling for what we are about to create.

Follow along

If you want to follow along, download StopWatch-follow-along.zip and extract it in a suitable location. If you now open index.html in your browser you should see a skeleton of the game.

Create the Application

We will get things started by adding an application.js. This file will be responsible for starting the game. Open the directory in your favorite editor and open index.html. Add the following script tag before the closing body-tag.

<script src="js/application.js"></script>

This references a JavaScript file that does not exist yet. Create a directory js below the root of the project and create the file js/application.js with the following content:

(function(){
  console.log('Ready to play!');
})();

This sets up an immediately invoked function expression that creates a scope to work in. If you reload the StopWatch game and open the developer tools, you should see Ready to play! in the console.

Create the Library

The application.js sets up the game, so we had better create something to set up. In index.html, above the reference to js/application.js, refer to js/stopwatch.js:

<script src="js/stopwatch.js"></script>
<script src="js/application.js"></script>

stopwatch.js will contain our library that deals with all the game-related code. Go ahead and create it with the following content:

(function(stopwatch){
})(window.stopwatch = window.stopwatch || {});

The window.stopwatch = window.stopwatch || {} makes sure that a namespace is created. Just to make sure we have wired everything up correctly, change application.js so that it checks that the stopwatch namespace is available.

(function(){
  if (!stopwatch) {
    throw new Error('stopwatch namespace not found');
  }
  console.log('Ready to play!');
})();

If all goes well you should still be greeted with Ready to play! in the browser.

Creating a Game

Something should be responsible for keeping track of game state. We will create a Game object for this. Open js/stopwatch.js and add the following code inside the immediately invoked function expression.

var Game = stopwatch.Game = function(seconds){
  this.target = 1000 * seconds; // in milliseconds
};

This creates a constructor that accepts a number of seconds that will serve as the target time in the game.

Creating a Game object

The application.js is responsible for all the setup, so it should create a game object. Open js/application.js and add:

var game = new stopwatch.Game(5);
window.game = game;

The first line creates a new game with a target of five seconds. The last line exposes it so we can inspect it in the console. Reload the StopWatch game in the browser and type game in the console. It should give you a representation of the game we just created.

Creating a GameView

Having a Game object is great, but what use is it to us if we cannot view it?
We will create a GameView for that purpose. We would like to show the target time in the game view, so go ahead and add the following line to index.html, just below the h1-tag.

<div id="stopwatch"><label for="target">Target</label><span id="target" name="target">?</span></div>

This will create a spot for us to place the target time in. If you refresh the StopWatch game, you should see "Target: ?" in the window. Just like we created a Game, we are going to create a GameView. Head over to js/stopwatch.js and add:

var GameView = stopwatch.GameView = function(game, container){
  this.game = game;
  this.container = container;
  this.update();
};

GameView.prototype.update = function(){
  var target = this.container.querySelector('#target');
  target.innerHTML = this.game.target;
};

The GameView constructor accepts a Game object and a container to place the game in. It stores these arguments and then calls the update method. The update method searches within the container for the tag with id target and writes the value of game.target into it.

Creating a GameView object

Now that we have created a GameView, we had better hook it up to the game object we already created. Open js/application.js and change it to:

var game = new stopwatch.Game(5);
new stopwatch.GameView(game, document.getElementById('stopwatch'));

This will create a GameView object with the game object and the div-tag we just created. If you refresh the StopWatch game, the question mark will be substituted with the target time of 5000 milliseconds.

Show Current Time

Besides the target time, we also want to show the current time, i.e. the time that is ticking away towards the target. This is quite similar to the target, with a slight twist: instead of a property we are using a getter-method. In index.html add a line for the current time.

<label for="current">Current</label><span id="current" name="current">?</span>

In js/stopwatch.js, right after the constructor, add:

Game.prototype.current = function(){
  return 0;
};

Finally, change the update-method of GameView to also update the current state.

var target = this.container.querySelector('#target');
target.innerHTML = this.game.target;
var current = this.container.querySelector('#current');
current.innerHTML = this.game.current();

Refresh the StopWatch game to see the changes.

Starting & Stopping the Game

We would like the current time to start ticking when we press the spacebar and stop ticking when we release the spacebar. For this we are going to create start and stop methods on the Game. We also need to keep track of whether the game is already started or stopped, so we start by initializing those flags in the constructor. Change the Game constructor, found in js/stopwatch.js, to initialize started and stopped properties.

this.target = 1000 * seconds; // in milliseconds
this.started = false;
this.stopped = false;

Next, add a start and a stop method that record the time when the game was started and stopped.

Game.prototype.start = function(){
  if (!this.started) {
    this.started = true;
    this.startTime = new Date().getTime();
    this.time = this.startTime;
  }
};

Game.prototype.stop = function(){
  if (!this.stopped) {
    this.stopped = true;
    this.stopTime = new Date().getTime();
  }
};

At last, we can change the current method to use the start and stop times.

if (this.started) {
  return (this.stopped ? this.stopTime : this.time) - this.startTime;
}
return 0;

If you now refresh the StopWatch game, we can test the functionality in the console tab.
The following excerpt demonstrates that:

Ready to play!
> game.start();
< undefined
> // wait a few seconds
> game.stop();
< undefined
> game.current();
< 7584 // depends on how long you wait

Update the GameView

You might have noticed that although the current time of the game changed, the GameView did not reflect this. That is, after running the above excerpt, the StopWatch window still shows zero for the current time. Let's create a game loop that continuously updates the view. In order to achieve this, we need to assign the GameView object to a variable and update it inside the game loop. Change js/application.js accordingly:

var view = new stopwatch.GameView(game, document.getElementById('stopwatch'));

function loop(){
  view.update();
  requestAnimationFrame(loop);
}
loop();

This uses the requestAnimationFrame function to schedule the next run of the loop. If you now refresh the StopWatch game and rerun the excerpt above, the current time should be updated in the view.

Update the Game

Even though the GameView is updated when we start and stop the game, it still does not show the current time while it is ticking. Let's remedy this. The current-method of the Game depends on the time property, but this is not updated. Create a tick-method on Game in js/stopwatch.js that updates the time property.

Game.prototype.tick = function(){
  this.time = new Date().getTime();
};

and call it in the game loop in js/application.js.

function loop(){
  game.tick();
  view.update();
  requestAnimationFrame(loop);
}

Refreshing the game and rerunning the excerpt will update the current time as it ticks.

Connect User Input

Manipulating the Game object is fine for checking that the game works, but it is not very usable. We will change that. The Game will process user input. It will start when the spacebar is pressed and will stop when the spacebar is released again. We can make this happen by registering listeners for keydown and keyup events. When an event is triggered, the listeners get called and can inspect the event and check which key was pressed, as demonstrated in the following code in js/application.js.

document.body.addEventListener('keydown', function(event){
  if (event.keyCode == 32 /* space */) {
    game.start();
  }
});

document.body.addEventListener('keyup', function(event){
  if (event.keyCode == 32 /* space */) {
    game.stop();
  }
});

Try this out in the browser and you will be able to control the game with the spacebar.

Bird's Eye View

Let's take a step back, see what we have achieved, and identify the key parts.

Game State: we created a Game that is responsible for keeping track of all the details of the game.
Game View: next we created a GameView that is responsible for presenting the Game to the player.
Game Loop: the game loop continuously triggers the Game View to render itself and updates the Game.
User Input: input such as time, key presses, or mouse movement is transformed into Game controls that change the game state.

These are the four key ingredients of every game. Every game engine provides means to create, manipulate, and manage all these aspects. Although there are local variations in how game engines achieve this, it all drills down to this.

Summary

We took a peek under the hood of game engines by creating a game and identifying what is common to all games: game state to keep track of the game, a game view to present it to the players, user input to control the game, and a game loop to breathe life into the game.
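For reference, the snippets built step by step above can be gathered into one consolidated listing. This is my own assembly of the article's code rather than the official follow-along download, so treat it as a sketch of where the walkthrough ends up:

// js/stopwatch.js – the game library: state and view
(function(stopwatch){
  // Game state
  var Game = stopwatch.Game = function(seconds){
    this.target = 1000 * seconds; // in milliseconds
    this.started = false;
    this.stopped = false;
  };
  Game.prototype.start = function(){
    if (!this.started) {
      this.started = true;
      this.startTime = new Date().getTime();
      this.time = this.startTime;
    }
  };
  Game.prototype.stop = function(){
    if (!this.stopped) {
      this.stopped = true;
      this.stopTime = new Date().getTime();
    }
  };
  Game.prototype.tick = function(){
    this.time = new Date().getTime();
  };
  Game.prototype.current = function(){
    if (this.started) {
      return (this.stopped ? this.stopTime : this.time) - this.startTime;
    }
    return 0;
  };

  // Game view
  var GameView = stopwatch.GameView = function(game, container){
    this.game = game;
    this.container = container;
    this.update();
  };
  GameView.prototype.update = function(){
    this.container.querySelector('#target').innerHTML = this.game.target;
    this.container.querySelector('#current').innerHTML = this.game.current();
  };
})(window.stopwatch = window.stopwatch || {});

// js/application.js – setup, game loop, and user input
(function(){
  var game = new stopwatch.Game(5);
  var view = new stopwatch.GameView(game, document.getElementById('stopwatch'));

  // Game loop: update the state and re-render every animation frame
  function loop(){
    game.tick();
    view.update();
    requestAnimationFrame(loop);
  }
  loop();

  // User input: the spacebar starts and stops the stopwatch
  document.body.addEventListener('keydown', function(event){
    if (event.keyCode == 32 /* space */) { game.start(); }
  });
  document.body.addEventListener('keyup', function(event){
    if (event.keyCode == 32 /* space */) { game.stop(); }
  });
})();

The four ingredients from the summary map directly onto this listing: Game holds the state, GameView presents it, loop() is the game loop, and the two key listeners turn user input into game controls.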
About the author Daan van Berkel is an enthusiastic software craftsman with a knack for presenting technical details in a clear and concise manner. Driven by the desire for understanding complex matters, Daan is always on the lookout for innovative uses of software.


2018 is a year of opportunity and change in tech training

James Cronin
08 Jan 2018
3 min read
Why is open source software changing the enterprise?

Emerging technologies are increasingly relevant in the enterprise world. Thanks to the growth of the internet and open source technology, a market once controlled solely by the big vendors has been opened and democratised. Rather than buying and using one vendor technology stack, developers and businesses are now choosing and combining the best tools for the job. As a result, the way technology is purchased and adopted is changing, with new tools (and consequently new skills) in increasing demand.

'Open Source is the norm', says Sam Lambert, GitHub's Senior Director of Infrastructure Engineering. 'A lot of large enterprises view being open source as an essential way of propagating the use of their technologies, and they're open sourcing stuff quickly.'

With the rapid pace of change, it can be difficult to keep up with what's happening in the software world. As open source becomes mainstream, things get even more complicated - but also much more exciting.

Why open source software is always an opportunity for businesses

For businesses and developers today, software engineering isn't just about putting software to work and shipping code; it's also about problem solving at an even more fundamental level. Understanding what tools to use for different types of problems, and what stacks to develop and deploy with, creates huge opportunities for businesses in the age of digital transformation. As businesses search for new technologies and skill sets to gain advantage, training must adapt to support them - and herein lies the opportunity. As technology stacks are increasingly driven by choice, efficiency and practical application, training must follow suit. So the opportunity lies in:

Understanding the technologies themselves, and crucially why they are relevant for enterprise teams and their objectives
Understanding the way developers work and learn, and providing hands-on material allowing them to skill up as efficiently as possible

How Packt can help businesses leverage open source software

Packt was founded to help make sense of this emerging technology, and to provide practical resources to help developers put software to work in new ways. This has given us a great understanding of the technologies themselves and how they are used, as well as the way our customers learn and put their skills into practice. We are capturing and sharing this insight and content with our partners, helping developers and businesses to better understand and utilise these tools and technologies. Through our book & video content, Courseware packs and our Live & On Demand training offering, we are providing practical, expert-written material, giving you everything you need to deliver high-impact technical training on the emerging technologies driving digital transformation. If you are interested in talking further about this story of industry change, and in exploring these emerging technologies and the opportunity in more detail, feel free to get in touch with me: jamesc@packtpub.com.

Eight Things Developers Last Spent $5 On

Sam Wood
21 Dec 2015
1 min read
In our Skill Up Year in Review survey, we asked developers what they last spent $5 on. Here's what a few of them said.

1. Coffee
Developers are machines that turn caffeine into code: over 300 surveyed developers last spent $5 on a coffee.

2. Apps and Games
"I swear, I only spent $5 on Candy Crush Saga purchases!"

3. A Casino Chip
We're confident that spending $5 to skill up your tech knowledge is a better way to get rich than gambling. ;)

4. A Cat
We couldn't find a picture of a cat on the internet; please accept this pug GIF instead.

5. Cat Pajamas
We managed to find a cat picture! We assume that 'Cat Pajamas' means pajamas to be worn by cats.

6. Bearings for an R2D2 droid
$5 is a better price than you'll be charged at Tosche Station, that's for sure. (Also, I've now fulfilled my obligatory Christmas 2016 Star Wars reference.)

7. Homeworld
We assume this is the game, but we're charmed by the idea of being able to buy interplanetary real estate for $5.

8. Skilling Up!
Over 800 developers last spent their $5 on a book, course, or some other learning resource. Developers never really do stop learning!


Services with Reactive Observation

Alejandro Perezpaya
22 Aug 2016
6 min read
When creating apps, it's a good practice to write your business logic and interaction layers as Service Objects or Interactors. This way, you can have them as modular components for your app, thus avoiding the repetition of code; plus you can follow single responsibility principles and test the behavior in an isolated fashion.

The old-school delegate way

As an example, we are going to create a WalletService, which will have the responsibility of managing current credits. The app is not reactive yet, so we are going to sketch this interactor with delegates and try to get the job done. We want a simple Interactor with the following features:

Notifications on updates
The increase credits method
The use credits method

Define the protocol for the service

You can do this by implementing the following:

public protocol WalletServiceDelegate {
    func creditsUpdated(credits: Int)
}

The first requirement is now defined.

Create the service

Now you can create the service:

public class WalletService {
    public var delegate: WalletServiceDelegate?
    private (set) var credits: Int

    public init(initialCredits: Int = 0) {
        self.credits = initialCredits
    }

    public func increase(quantity: Int) {
        credits += quantity
    }

    public func use(quantity: Int) {
        credits -= quantity
    }
}

With these few lines, our basic requirements have been met.

Ready for use

But we are using delegates, and hence we will need to create an object so that our delegate protocol can use this interface! This will be sufficient for our needs at the moment:

class Foo: WalletServiceDelegate {
    func creditsUpdated(credits: Int) {
        print(credits)
    }
}

let service = WalletService()
let myDelegate = Foo()
service.delegate = myDelegate

Well, while this is working, we need to use WalletService in more parts of the project. That means rewriting the actual code to work as a program class with static vars and class functions. It also needs to support multiple delegate subscriptions (adding and removing them too). That will mean really complex code for a really simple service, and this problem will be repeated all over your shared services. There's a framework for this called RxSwift!

The RxSwift way

Our code will look like the following after removing the delegate dependency:

public class WalletService {
    private (set) var credits: Int

    init(initialCredits: Int = 0) {
        self.credits = initialCredits
    }

    public func increase(quantity: Int) {
        credits += quantity
    }

    public func use(quantity: Int) {
        credits -= quantity
    }
}

We want to operate this as a program class, so we will make a way for that with RxSwift.

Rewriting our code with RxSwift

When you dig into RxSwift's units, you realize that Variable is the unit that fits the requirements best: you can easily mutate its value and subscribe to changes. If you contain this unit in a public and static variable, you will be able to operate these services as a program class:

import RxSwift
import RxCocoa

public class WalletService {
    private (set) static var credits = Variable<Int>(0)

    public class func increase(quantity: Int) {
        credits.value += quantity
    }

    public class func use(quantity: Int) {
        credits.value -= quantity
    }
}

The result of this is great, simple code that is easy to operate, with no protocol and no instantiable classes.

Usage with RxSwift

Let's subscribe to credit changes. You need to use the Variable as a Driver or an Observable.
We will use it as a driver:

let disposeBag = DisposeBag()

// First subscription
WalletService.credits.asDriver()
    .driveNext { print($0) }
    .addDisposableTo(disposeBag)

// Second subscription
WalletService.credits.asDriver()
    .driveNext { print("Second: \($0)") }
    .addDisposableTo(disposeBag)

WalletService.increase(10)
WalletService.use(5)

This clearly shows a lot of advantages. We didn't depend on a class, and we can add as many subscriptions as we want! As you can see, RxSwift helps you make cleaner services/interactors with a simple API and higher functionality. With this pattern, you can have subscriptions all over the app to changing data, so we can navigate through the app and have all of the views updated with the latest changes without forcing a redraw of the view for every update in the dependency data.

A higher complexity example

Dealing with geolocation is tough. It's a reality, but there are ways to make interoperability with it easier. To avoid future problems and repeated or messy code, create a wrapper around CoreLocation. With RxSwift, you have easier access to subscribe to CoreLocation updates, but I consider that approach not good enough. In this case, I recommend making a class around CoreLocation, using it as a shared instance, to manage the geolocation updates with global interoperability, allowing you to pause or start updates without much code:

import RxSwift
import RxCocoa
import CoreLocation

public class GeolocationService {
    static let sharedInstance = GeolocationService()

    // Using the explicit operator is not a good practice,
    // but we are 100% sure we are setting a driver into these variables on init.
    // If we don't do that, the compiler won't pass, even though it works.
    // Apple might do something here in future versions of Swift.
    private (set) var authorizationStatus: Driver<CLAuthorizationStatus>!
    private (set) var location: Driver<CLLocation>!
    private (set) var heading: Driver<CLHeading>!

    private let locationManager = CLLocationManager()

    init() {
        locationManager.distanceFilter = kCLDistanceFilterNone
        locationManager.desiredAccuracy = kCLLocationAccuracyBestForNavigation

        authorizationStatus = bindAuthorizationStatus()
        heading = bindHeading()
        location = bindLocation()
    }

    public func requestAuthorization() {
        locationManager.requestAlwaysAuthorization()
    }

    public func startUpdatingLocation() {
        locationManager.startUpdatingLocation()
        locationManager.startUpdatingHeading()
    }

    public func stopUpdatingLocation() {
        locationManager.stopUpdatingLocation()
        locationManager.stopUpdatingHeading()
    }

    private func bindHeading() -> Driver<CLHeading> {
        return locationManager.rx_didUpdateHeading
            .asDriver(onErrorDriveWith: Driver.empty())
    }

    private func bindLocation() -> Driver<CLLocation> {
        return locationManager.rx_didUpdateLocations
            .asDriver(onErrorJustReturn: [])
            .flatMapLatest { $0.last.map(Driver.just) ?? Driver.empty() }
    }

    private func bindAuthorizationStatus() -> Driver<CLAuthorizationStatus> {
        weak var wLocationManager = self.locationManager
        return Observable.deferred {
            let status = CLLocationManager.authorizationStatus()
            guard let strongLocationManager = wLocationManager else {
                return Observable.just(status)
            }
            return strongLocationManager
                .rx_didChangeAuthorizationStatus
                .startWith(status)
        }.asDriver(onErrorJustReturn: CLAuthorizationStatus.NotDetermined)
    }
}

As a result of this abstraction, we now have easy operability over geolocation with just subscriptions to our exposed variables.

About the author

Alejandro Perezpaya has been writing code since he was 14.
He has developed a multidisciplinary profile, working as an iOS Developer but also as a Backend and Web Developer for multiple companies. After working for years in Madrid and New York startups (Fever, Cabify, Ermes), a few months ago he started Triangle, a studio based in Madrid, where he crafts high-quality software products.


You Can’t Disrupt the Print Book

Richard Gall
12 Feb 2016
4 min read
We live in a time of relentless technological change, with the mantra of innovation lingering over our everyday experience – in our jobs and our private lives. But one technology has endured for more than half a millennium – the print book. As the world around us has changed, the print book has proven to be a reliable way of delivering and sharing stories.

The digital age has always looked like being the death knell for the print book. To a certain extent, you can trace the transformation of content alongside changes in the types of devices we use. Interactive, short, and easily digestible – these are the sort of things that seemed antithetical to the print book. Yet the pattern of disruption has somehow failed. Even as we have grown to love our Kindles and iPads (and we know many of you adore eBooks!), print books still show no sign of disappearing. The New York Times noted that digital books accounted for 20% of the market in 2014, 'roughly the same as they did a few years ago'. This was a sign not so much that eBooks had failed to displace print, but more that sales had plateaued, neatly integrating into our consumption habits and lifestyles rather than transforming them. Perhaps this shows that sometimes the next big thing isn't always the best – a lesson for anyone working in tech, who lives their life committed to innovation and improvement.

Arguably, print books have never been more important. At a time when just about every interface we encounter is electronic, from the electric light that stares out at us as we work, to the connectivity we carry in our handbags and pockets, print books are an alternative interface that reminds us that there's another world out there – that there's another way to learn and explore new ideas. If your computer and your mobile connect you, your print book disconnects you. It says: stop. I'm focusing on this for a minute; I'm focusing on me. It's also in that moment when you disconnect that problems look different and you can tackle them in a way you had never thought of before. Sometimes the answer isn't going to be found by simply gazing at your IDE – you simply need to switch off and withdraw into a book. It's easy to think that problem solving is all about connecting, talking, networking, but sometimes it isn't. It's about reading, thinking, and deliberating.

Print books don't simply offer a way to solve a problem without a backlight – they also become material evidence of your ideas and experience. In our frameless and flat online world where everything is available at a few clicks of a button, it can be difficult to orient yourself – eBook collections can be great, but they can easily be submerged in the mountains of data on your hard drive, alongside the films you'll never watch and the songs you'll never listen to. But it's not just about those books you have read – those print books in your library, however small, are important too – they signal where you might go next, what skills you might learn. They let you visualize the future in a way that even your Packt account isn't quite able to do (although we're working on that…).

So if you want to take control of your skills, and your imagination, maybe it's time to rethink the value of an old-fashioned print book. Here at Packt we're dedicated to innovation, driven by what's new, but sometimes even we have to concede that the old stuff is the best… If you want to upgrade your Packt eBooks to print, it's simple – and you can grab the print copy at half-price! Simply follow these steps:


GDC 2014: A Synopsis

Ed Bowkett
22 Jun 2014
5 min read
The GDC of 2014 came and went with something of a miniature Big Bang for game developers. Whether it was new updates to game engines or new VR peripherals announced, this GDC had plenty of sparks, and I am truly excited for the future of game development. In this blog I'm going to cover the main announcements that took my attention and why I'm excited for them.

The clash of the Titans

Without a shadow of a doubt, the announcement out of GDC 14 that was the most appealing to me was the wave of updates to the three main game engines, Unity, Unreal, and CryEngine, all within a short timeframe. All introduced unique price models that will be covered in a separate blog post, but it was like having a second Christmas, particularly for me, who has a strong interest in this area, both from a hobbyist perspective and in my current role concerning game development books. All three offered a long list of changes and massive updates to various parts of their engine, and at some point in the future I hope to dabble in all three and provide insight on which I preferred and why.

The advancement of the hobbyist developer

Not to be outdone by the big three, smaller tools announced various new features, including GameMaker, who announced a partnership to develop on the PlayStation 4, and Construct 2, which announced a similar deal with the Wii U (admittedly before GDC). These are hugely significant for me. Support for the new consoles with tools that are primarily aimed at the hobbyist in us all opens up a massive market for potential indie developers and those just trying game development for fun, with the added benefit of the console ecosystem! It means my dream of the game studio I created with Game Dev Tycoon can finally come true.

Would you like a side of immersion with your games?

I might as well be honest here. VR and I don't get along. Not in the sense that we broke up after a long relationship and are no longer speaking, more in the sense that I just don't get it. It probably also has something to do with my motion sickness, but that's less fun. No, in all seriousness, I have no doubt that VR will revolutionize gaming in a big way. From what we've seen with certain games such as EVE: Valkyrie, VR has a unique opportunity to take gaming beyond just the screen, and for the masses of people out there that love video games, this can only be a positive thing. With Sony announcing Project Morpheus, Oculus Rift releasing a new headset, and Microsoft expressing a strong interest in developing a headset in this area, the field will only continue to expand, and competition is not a bad thing. The one question I have is whether it can go from the current gimmicky idea with the large, bulky headset and become a tour de force in the gaming community.

Consoles reaching out to indie developers

GDC has always been focussed on indie games and development in the past, and this year was no exception. But it wasn't just the traditional PC love for indie games. Consoles are beginning to cotton on that indie games are much loved and indeed highly played, and as a result, 2014 was the year where the main consoles announced efforts to release more indie games onto their platforms, while trying to drive more indie developers to their respective consoles. Sony, for example, introduced PhyreEngine at GDC 2013, but plans to extend further support through partnerships with, as mentioned earlier in this article, GameMaker: Studio and MonoGame. Through these tools and their promotion, Sony hopes to improve relations with indie developers and encourage them to use the Sony ecosystem. A similar announcement was made by Nintendo, introducing the Nintendo Web Framework. They also promoted the fact that Nintendo would be willing to get it promoted and marketed properly. These announcements are both significant and positive for the future of game development, as, from my view, indie games are only going to increase in popularity, and having the ecosystems available for these people to develop on the popular consoles can only be a good thing; it will allow those that are not on an expensive budget or working for an AAA studio to create games and reach a wider audience. That's the ambition of Sony and Nintendo, I believe.

So there you have it: the big announcements that grabbed my attention at GDC. Whilst I could have mentioned Amazon Fire TV and further announcements by Valve, or gone into depth with specific peripherals, I felt an overview of what was announced at GDC was better; the analysis of these announcements can be covered in more depth at a later stage. However, what is evident from this blog, and what came out of GDC 2014 in general, is that game development is an extremely healthy area and is continuously being pushed to the limits and constantly innovated. As an avid fan of games and a mere newbie at game development, this excites me and keeps me interested. How was GDC 2014 for you? Any issues that you thought I should have included? Let me know!
article-image-2014-in-hardware
Ed Bowkett
11 Feb 2015
5 min read
Save for later

The Year that was: Hardware 2014

Ed Bowkett
11 Feb 2015
5 min read
Hardware underwent some pretty big changes in 2014, and this blog will focus on what I believe were the most significant. Bear in mind that, like my previous blogs, this is purely opinion; feel free to counter it with anything you found equally important.

1) Internet of Things continues to be a thing

Don't get me wrong, I love the fact that I have a wristband that tracks the amount of exercise I do (not enough, apparently) and records my sleep patterns. I like the idea that, with a certain type of coffee machine, I can receive notifications telling me it is going to brew me a fresh cup at exactly 13:37. That's all cool. However, I felt that in 2014 the Internet of Things was just for 'really neat' things. It felt gimmicky. Whilst I am aware that the IoT is beginning to have an effect on the health system, 2014 was not the breakout year for this. Besides, when that does happen, is the phrase Internet of Things even appropriate? Even the phrase sounds gimmicky. In my view, when the IoT matures to the point where it affects every element of society, it becomes less about the internet of 'things' and more about the internet of 'everything'. With Gartner placing the Internet of Things at the peak of inflated expectations, we have some way to go before the IoT reaches a genuinely useful stage.

2) Wearables

Wearables became such a thing last year. That sounds like a moan, and partly it is. As mentioned above, I have a wristband, a Fitbit, that tells me how much exercise I've done and how much sleep I've had, and I love it. Wearables, when they first appeared, were a great way of selling technology in the form of bettering your health, and for a time I lapped up most of the wearables coming out. Yet the more the year progressed, the harder it became to filter which wearables were actually useful and which actually benefited your health. This isn't an argument against competition in the market; competition is healthy. It's more an observation that, as a consumer, working out which devices are beneficial and will help you achieve your long-term goals has become a lot cloudier and more difficult to ascertain. Yet wearables appear to be here to stay, and when I have self-tying shoes I guess we'll have become fully assimilated with technology.

3) Drones

Consumer drones exploded onto the scene in 2014. No longer an area held exclusively by the likes of the NSA, drones and quadcopters are increasingly being flown by hobbyists, and they are becoming easier to obtain and cheaper to buy. As a result, issues have arisen over both airspace and privacy concerns, bearing in mind that these drones can be adapted to carry cameras and video recorders. Whilst I am all in favor of hobbyists creating things (after all, inventions come about this way), there need to be limits on where these drones can be used.

4) 3D Printing

Another area of the hardware market that received much attention, but ultimately most of us are still waiting for a more affordable price, and for a convincing reason why we as consumers would want a 3D printer at all. This is admittedly a slightly biased viewpoint, given that the only 3D printing I have experienced has been at conferences, where the cost of a printer was astronomical. Yet I've not seen evidence of 3D printing being really useful to the masses. As Gartner points out, it's just coming down from the 'peak of inflated expectations', so it has some way to go before it reaches the stage where the mass market adopts it. At present, it is too hobbyist and in a price band too far. That's not to say that what 3D printers can do isn't awesome; it just feels too gimmicky for the price tag.

5) Apple Watch

No hardware blog looking at the highlights of 2014 would be complete without the announcement of the Apple Watch. Announced in September, this was Apple's entry into the already congested wearables market. Priced at £300, it is certainly not a cheap wearable, but we should nonetheless expect the same quality as previous Apple products. It comes with a new SDK, WatchKit, which allows developers to design apps for the device. The major downside? You have to have an iPhone to be able to use an Apple Watch. We've worked out the calculations here, and that basically puts you into a commitment of around £1,140 for the privilege of remaining locked into an ecosystem (£300 for the watch plus a minimum £35-a-month iPhone contract for 24 months, which comes to £300 + £840). Frankly, I cannot justify that cost, particularly when there are alternatives out there that are far better value (for example, the Pebble watch is priced at £99.99 and the Motorola Moto 360 at £199.99). I'll probably still get one though, just because the quality of Apple products is so high.

So there you have it: my top five choices from the year that was, 2014, in hardware. What are your choices? Do you agree?

article-image-clustered-container-deployments-alternatives-docker-swarm-part-5
Darwin Corn
03 Jun 2016
5 min read
Save for later

Clustered Container Deployments - Alternatives to Docker Swarm, Part 5

Darwin Corn
03 Jun 2016
5 min read
This is the final installment of a five-part series that began with a guide to container creation [link to part 1] and cloud deployment [link to part 2] and progressed to application layering [link to part 3] and deploying tiered applications to a cluster [link to part 4]. The series will now wrap up by looking beyond the Docker ecosystem at other clustered deployment options. That is largely beyond the scope of a series of posts focused on Docker, but while working on this series I also worked with CoreOS. The beauty of CoreOS is that it integrates fully with the core Docker Engine (though as of this writing, rkt, their own container runtime, was just declared production ready, and I'm sure they'll be pushing users in that direction as development continues). The CoreOS ecosystem consists of these primary components:
CoreOS: the base operating system
etcd: the distributed key-value store
fleet: distributed coordination of systemd across the cluster
flannel: the networking layer that connects it all together
There's a lot more than this, but those are the heavy-hitting aspects of the distro. Let's take a look at each of the parts individually in the context of the Taiga application used in Parts 3 and 4. I've once again branched the docker-taiga repo, so run git pull to ensure that you're working with the latest material. Then run git checkout coreos, and you'll be ready to follow along. Like Part 4, there's a deploy.sh script in the application root, and this time, if you're running Linux with KVM/libvirt and at least 4 GB of available RAM, you can kick it off and watch the magic of a highly available Taiga cluster launch on your very own computer.

CoreOS

CoreOS is a Gentoo-based Linux distro designed for one purpose: container deployments. While it can be run as a solo instance, it really shines in a cluster, and it integrates with a host of associated projects to make this happen. There's not much of interest in the OS itself, beyond the fact that it's deployed in a way that differs from what you're likely used to if you come from the Red Hat / Ubuntu server world. The machine is provisioned with a YAML file called a cloud-config and can be booted up in a number of different ways, from your favorite cloud provider's SDK to a simple install script.

etcd

etcd is a simple key-value store (think Redis) specifically tailored for use in CoreOS. It's meant to house the data needed across the entirety of the system (database connection information, feature flags, and so on) as well as handle the system coordination itself, including leader election for machines and containers. It's a versatile tool, and for full use cases beyond the one demonstrated in deploy.sh, I highly recommend looking at CoreOS's list of projects using etcd.

fleet

The fleet tool is probably my favorite of the bunch: it coordinates systemd across the distributed cluster. fleet is where CoreOS shines in its ease of achieving high availability, because it lets you use the now-familiar unit files of systemd to schedule processes on one, some, or all of the machines in a cluster. Perhaps its simplest application is running containers, and that's only scratching the surface of what it can be used to accomplish. One of the cooler use cases is simplified service discovery, which helps the cluster stay coordinated as well as making it scalable. A form of the sidekick method is demonstrated in some of the unit files that deploy.sh references.
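To give a flavour of what those unit files look like, here is a minimal sketch of the sidekick pattern under fleet. The unit names, container image, and port below are hypothetical placeholders, not the actual units shipped in the docker-taiga coreos branch: the main template unit runs a container, and the sidekick unit is scheduled onto the same machine and announces the service's address in etcd so the rest of the cluster can find it.

# taiga-web@.service: hypothetical main unit that runs the container under Docker
[Unit]
Description=Taiga web container (example)
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker rm -f taiga-web-%i
ExecStart=/usr/bin/docker run --name taiga-web-%i -p 8000:8000 taiga-web-image
ExecStop=/usr/bin/docker stop taiga-web-%i

# taiga-web-discovery@.service: hypothetical sidekick that announces the unit in etcd
[Unit]
Description=Announce taiga-web in etcd (example)
BindsTo=taiga-web@%i.service
After=taiga-web@%i.service

[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c "while true; do etcdctl set /services/taiga-web/%i ${COREOS_PRIVATE_IPV4}:8000 --ttl 60; sleep 45; done"
ExecStop=/usr/bin/etcdctl rm /services/taiga-web/%i

[X-Fleet]
MachineOf=taiga-web@%i.service

You would submit and start the pair with fleetctl (fleetctl submit taiga-web@.service taiga-web-discovery@.service, then fleetctl start taiga-web@1.service taiga-web-discovery@1.service). The TTL on the etcd key means the announcement expires automatically if the machine running the unit dies, which is what keeps the cluster's view of the service honest.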
flannel

Looking at flannel is where I started getting uncomfortable, and then I was immediately reassured. In a previous job I had to manage firewalls and switches, and at no point did my pretend network engineer alter ego feel comfortable troubleshooting network issues. I used to joke that my job was watching Cisco SmartNet do it for me, but flannel takes all that and simplifies it, running an agent on each node that controls the subnet for the containers on that host and records it in etcd so the configuration isn't lost if the node goes down for whatever reason. Port mapping then becomes the trivial process of letting flannel route it for you, given a minimal amount of configuration.

Further reading

Of course, we covered all this without even touching on Kubernetes, which uses some of the CoreOS toolset itself. In fact, the two pair pretty well together, but you could write a whole book on Kubernetes alone. If you want to read more about Kubernetes, start with this excellent (if brief) guest blog on CoreOS's website about integrating Ansible with CoreOS deployments. After that, feel free to reference this post and the series as a whole as you descend into the rabbit hole that is highly available, containerized, and tiered applications. Good luck!

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.

article-image-what-juju
Wayne Witzel
15 Dec 2014
7 min read
Save for later

What is Juju?

Wayne Witzel
15 Dec 2014
7 min read
Juju is a service orchestration system. Juju works with a wide variety of providers, including your own machines. It attempts to abstract away the underlying specifics of a provider and allows you to focus on your service deployments and how they relate to each other. Juju does that using something called a charm, which we will talk about in this post. Using a simple configuration file, you set up a provider. For this post, we will be using a local provider that doesn't require any special setup in the configuration file, but does depend on some local system packages being installed. In Part 2 of this two-part blog series, we will be using the EC2 provider, but for now, we will stick with the defaults. Both posts in this series assume you are using Ubuntu 14.04 LTS.

What is a charm?

A charm is a set of files that tells Juju how a service can be deployed and managed. Charms define properties that a service might have; for example, a MySQL charm knows it provides a SQL database and a WordPress charm knows it needs a SQL database. Since this information is encoded in the charm itself, Juju is able to fully manage the relation between the two charms. Let's take a look at just how this works.

Using Juju

First, you need to install Juju:
sudo add-apt-repository ppa:juju/stable
sudo apt-get update && sudo apt-get install juju-core juju-local
Once you have Juju installed, you need to create the base configuration for Juju:
juju generate-config
Now that you have done that, let's switch to our local environment. The local environment is a convenient way to test out different deployments with Juju without needing to set up any of the actual third-party providers like Amazon, HP, or Microsoft. The local environment will use LXC containers for each of the service deployments.
juju switch local

Bootstrapping

First, we want to prepare our local environment so that we can deploy services to it using charms. To do that, we do what is called bootstrapping. The juju bootstrap command will generate some configuration files and set up and start the Juju state machine. This is the machine that handles all of the orchestration commands for your Juju deployment:
juju bootstrap
Now that we have our Juju environment up and running, we can start issuing commands. First, let's take a look at our state machine's details. We can see this information using the status command:
juju status
I will be prompting you to use this command throughout this post. It will help you know when services are deployed and ready.

Deploying

At this point, you can begin deploying services using charms:
juju deploy wordpress
You can check on the status of your WordPress deployment using the previously mentioned juju status command. Juju also logs details about the creation of machines and the deployment of services to those machines. In the case of the local environment, those logs live at /var/log/juju-USERNAME-local. This can be a great place to find detailed information should you encounter any problems, and it will provide you with a general overview of the actions that were run for a given command. Continue by installing a database that we will later tell our WordPress installation to use:
juju deploy mysql
Once your deployments have completed, your juju status command will output something very similar to this.
juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.20.6.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
    state-server-member-status: has-vote
  "1":
    agent-state: started
    agent-version: 1.20.6.1
    dns-name: 10.0.3.196
    instance-id: wwitzel3-local-machine-1
    series: precise
    hardware: arch=amd64
  "2":
    agent-state: started
    agent-version: 1.20.6.1
    dns-name: 10.0.3.79
    instance-id: wwitzel3-local-machine-2
    series: trusty
    hardware: arch=amd64
services:
  mysql:
    charm: cs:trusty/mysql-4
    exposed: false
    relations:
      cluster:
      - mysql
      db:
      - wordpress
    units:
      mysql/0:
        agent-state: started
        agent-version: 1.20.6.1
        machine: "2"
        public-address: 10.0.3.79
  wordpress:
    charm: cs:precise/wordpress-25
    exposed: true
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        agent-version: 1.20.6.1
        machine: "1"
        open-ports:
        - 80/tcp
        public-address: 10.0.3.196
The important detail to note from the status output is that the agent-state parameters for both MySQL and WordPress read started. This is generally the sign that the deployment was successful and all is well.

Relations

Now that we've deployed our instances of WordPress and MySQL, we need to inform WordPress about our MySQL instance so it can use it as its database. In Juju, these links are called relations. Charms expose relation types that they provide or that they can use. In the case of our sample deployment, MySQL provides the db relation, and WordPress needs something that provides a db relation. Set this up with the following command:
juju add-relation mysql wordpress
The WordPress charm has what are called hooks. In the case of the add-relation command shown previously, the WordPress instance will run its relation-changed hook, which performs the basic setup for WordPress, just as if you had gone through the steps of the install script yourself. You will want to use the juju status command again to check on the status of the operation. You should notice that the WordPress instance status has gone from started back to installed. This is because it is now running the relation-changed hook on the installation. It will return to the started status once this operation is complete.

Exposing

Finally, we need to expose our WordPress installation so people can actually visit it. By default, most charms you deploy with Juju will not be exposed. This means they will not be accessible outside of the local network they are deployed to. To expose the WordPress charm, we issue the following command:
juju expose wordpress
Now that we have exposed WordPress, we can visit our installation and continue the setup process. You can use juju status again to find the public address of your WordPress installation. Enter that IP address into your favorite browser and you should be greeted with the WordPress welcome page asking you to finish your installation.

Logging

Juju provides you with system logs for all of the machines under its control. For a local provider, you can view these logs in /var/log/juju-username-local/; for example, my username is wwitzel3, so my logs are at /var/log/juju-wwitzel3-local. In this folder, you will see individual log files for each machine as well as a combined all-machines.log file, which is an aggregation of all the machines' log files. Tailing the all-machines.log file is a good way to get an overview of the actions that Juju is performing after you run a command. It is also great for troubleshooting should you run into any issues with your Juju deployment.
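For quick reference, the whole local walkthrough above condenses into the following command sequence. Nothing here is new; it is simply a recap of the steps already covered in this post:

sudo add-apt-repository ppa:juju/stable
sudo apt-get update && sudo apt-get install juju-core juju-local
juju generate-config
juju switch local
juju bootstrap
juju deploy wordpress
juju deploy mysql
juju add-relation mysql wordpress
juju expose wordpress
juju status

Run juju status between steps until both services report an agent-state of started, and keep an eye on all-machines.log if anything looks stuck.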
Up next

So there you have it: a basic overview of Juju and a simple WordPress deployment. In Part 2 of this two-part blog series, I will cover a production deployment use case using Juju and Amazon EC2 by setting up a typical Node.js application stack. This will include HAProxy, multiple load-balanced Node.js units, a MongoDB cluster, and an ELK stack (Elasticsearch, Logstash, Kibana) for capturing all of the logging.

About the author

Wayne Witzel III resides in Florida and is currently working for Slashdot Media as a Senior Software Engineer, building backend systems using Python, Pylons, MongoDB, and Solr. He can be reached at @wwitzel3.
article-image-disruption-rest-us
Erol Staveley
25 Jun 2015
8 min read
Save for later

Disruption for the Rest of Us

Erol Staveley
25 Jun 2015
8 min read
Whilst the 'Age of Unicorns' might sound like some terrible MS-DOS text adventure (oh how I miss you, Hugo), right now there is at least one new $1B startup created every month in the US. That's all very well and good, but few people actually seem to stop and think about what this huge period of technical innovation means for everyday developers. You know, the guys and girls who actually build the damn stuff. It turns out there are a lot of us.

As startups focus on disruption and big businesses focus on how they can take the good parts and implement them within their own monolithic organizational structures, both sides are paying through the nose for good talent. Supply is low, demand is high, and that's a good place to be if you're a skilled developer. But what does being a skilled developer really mean?

Defining Skill

Foregoing the cliché definition of 'skill' from some random dictionary, skill in a development sense is almost another way of saying flexibility. Many (but not all) of the 'good' developers I've met actually identify strongly as engineers: we like to understand how things work, and we like to solve problems. Anything between a problem and a solution is a means to an end, and it's not always just about using what we already know or are most comfortable with. At the best of times this can be wonderfully creative and rewarding, and yet it can be soul-crushingly irritating when you hit a brick wall. Brick walls are why tutorials exist. It's why StackOverflow exists. It's why many find it hard to initially switch to functional programming, and why seamlessly moving from one framework to another is the underlying promise that most books use in their promotional copy. However you spin it, flexibility is an essential part of being a good developer (or engineer, if that's what you prefer).

The problem is that this mental flexibility is actually an incredibly rare competency to have by default. The skill of learning in and of itself takes practice. Your ability to absorb new information entirely depends on your level of exposure to new thoughts and ideas. Paul Graham's piece on Why You Weren't Meant to Have a Boss articulates this better than I can (alongside some other key themes about personal development):

"I was talking recently to a founder who considered starting a startup right out of college, but went to work for Google instead because he thought he'd learn more there. He didn't learn as much as he expected. Programmers learn by doing, and most of the things he wanted to do, he couldn't—sometimes because the company wouldn't let him, but often because the company's code wouldn't let him. Between the drag of legacy code, the overhead of doing development in such a large organization, and the restrictions imposed by interfaces owned by other groups, he could only try a fraction of the things he would have liked to. He said he has learned much more in his own startup, despite the fact that he has to do all the company's errands as well as programming, because at least when he's programming he can do whatever he wants."

This isn't to say that working in a big organization entirely limits your openness to new ideas; it just makes it harder to express them 9-to-5 in the context of your on-paper role. There are always exceptions to the rule, though: the BBC is a great example of a large organization that embraces new technologies and frameworks at a pace that would put many startups to shame.

Staying Updated

It's hard to keep up with every framework-of-the-month, but in doing so you're making a commitment to stay at the top of your game. Recruiters exploit this to full effect: they'll frequently take a list of up-and-coming technologies used by an employer and scour LinkedIn and GitHub to identify leads. But we don't just use new frameworks and languages for the sake of it. Adding an arbitrary marker on LinkedIn doesn't prove that I deeply understand the benefits or downsides of a particular technology, or when it might be best not to use it at all. That understanding comes from experimentation, from doing.

So why experiment with something new in the first place? It's likely that there will be something to it: it might be technically impressive, or help us get from point A to point B faster and more efficiently. It helps us achieve something. It's not enough to just passively be aware of what's hot and then skim the documentation; it's actually very hard to stay motivated and generate real personal value doing that. To keep up with the rate of technical innovation you need a real interest in your field and a passion for solving complex problems. After all, software development really is about creative problem solving. That individual drive and creativity is what employers want to see above all else, hands down. Funnily enough, we also want to see this sort of thinking from our employers.

It turns out we care about how much we're paid; after all, Apple won't just give us free iPhones (not yet anyway, Taylor). It just so happens that because supply is low, we can also afford to make 'making a difference' a priority. Even if we assume 'making a difference' is an aspiration you'd align with whilst taking a survey, the relatively small gap between it and salary is a significant indicator of how picky we can afford to be. Startups want to change the world and disrupt how established businesses work, whilst having a strong cross-functional alignment towards a legitimate, emotionally coherent vision. That fundamental passion aligns very well with developers who want to 'make a difference' and who also have a strong level of individual drive and creativity. Larger businesses that don't predominantly operate in the technology sector will have a much harder time cultivating that image.

This way of thinking is just another part of the harsh disconnect between startup culture and the rest of society. If you're not working in technology, it's hard to understand the private buses, the beanbag chairs, the unlimited holiday policies: all things intended to set startups apart and attract talent that's in high demand. All those perks exist specifically to attract talented engineers. If JavaScript, Python, or let's even say C++ were common everyday 'second languages', things would be very different.

Change is Coming

It's not hard to identify this deficit in technical skills. You can see it starting to be addressed in government schemes like Code for America. In the UK, England is about to make programming a required part of the curriculum from the ages of 5 to 16 (with services delivered by Codecademy). In a decade, the number of people in the job market with strong programming skills will have grown exponentially, specifically because we all collectively recognize the shortage of good engineering talent.

As the pool of available developer talent increases, recruitment will be less about the on-paper qualification or simply having a computer science background; it'll be about what you've built, what excites you, and what your open source contributions look like. You can already see these questions emerging as the staple of many technical interviews. Personal growth and learning will be expected in order to stay current, not treated as a nice-to-have on top of your Java repertoire. And we won't be able to be as picky, because there will be more of us around :).

Skilling Up

If that sounded a little like scaremongering, then good. We're in a job market bubble right now, but the pop will be slow and gradual, not immediate (so maybe we're in a deflating balloon?). Like any market where demand is high and supply is low, there will eventually be a period where things normalize. The educational infrastructure to support this is being built rapidly, and the increasing availability of great learning content (both free and premium) is only going one way. Development is more accessible than ever before, and you can pretty much learn the basics of most languages now without spending a penny.

When we're talking about being a skilled developer in a professional market, it's not going to be about what technologies you're comfortable with or what books you've read. It's going to be about what you've built using those technologies and resources. There will always be a market for creative problem solvers; the trick is becoming one of them. So the key to keeping on top of the job market? Dust off that Raspberry Pi you've had in your desk drawer, get back into that side project you've let atrophy on GitHub, and just get out there and build things. Learn by doing, and flex those creative, problem-solving neurons. And if you happen to need a hand? We'll probably have a Packt book on it. Shameless plug, right?

During June we surveyed over 20,000 IT professionals to find out what technologies they are currently using and plan to learn in the next 12 months. Find out more in our Skill Up industry reports.

article-image-smart-learning-is-medium-agnostic
Richard Gall
22 Dec 2015
4 min read
Save for later

Smart Learning is Medium Agnostic

Richard Gall
22 Dec 2015
4 min read
What's the best way to learn? This is a question that has been plaguing people for centuries. Currently, educationalists and publishers are the ones navigating the troublesome waters of this age-old problem, but it's worth remembering that it's nothing new. What's more, the question is bound up with technological development. When the Gutenberg printing press made knowledge more accessible and shareable back in the 15th century, changing the way people learn, many saw it as something satanic, its ability to reproduce identical copies of text regarded as witchcraft. Today, witchcraft is something we can only dream of. We're all searching for the next Gutenberg press, trying to find the best way to reconfigure and package the expansive and disorienting effects of the internet into something that is simple, accessible, and 'sticky' (we really need a better word). But in all this effort to find the solution to our problems, we keep forgetting that there's no single answer; what we really want are options.

The metaphor 'tailor-made' is perhaps a little misleading. When you buy a tailor-made suit, it has been created to fit you, but we can't think of learning like that, because what fits changes all the time. However tempting it is to say 'I'm a visual learner', it's never entirely true. It depends on what you're learning, what mood you're in, how you're approaching a topic, and a whole range of other reasons. When it comes to software and tech, the different ways in which we might want to learn are even more pronounced. Some days we might really want to watch a video tutorial to see code in action; maybe 'visual learners' will most appreciate video courses, but even those who favour a classic textbook sometimes want to be shown rather than told what to do. Similarly, if we know we need a comprehensive journey through a new idea or a new technology, then we might invest in a course (if you can't convince your boss to cough up the cash, that is…). In particular, when it comes to strategic and leadership skills, people are still willing to invest big money in training, even in an age of online courses and immediate information; indeed, these face-to-face, 'irl' courses are more important than ever before.

But we also know that a book is a reliable resource you can constantly return to. You can navigate it however you like: read it from back to front, start in the middle, tear out the pages and stick them on your ceiling so you can read them while you're lying down. The choice is yours. But what about eBooks? True, you probably shouldn't stick them to your ceiling, but you can still navigate them, annotate them, and share snippets on social media. You can also carry loads of them around, so if, like me, you're indecisive, you can remain confident that an entire library of books is safe in your bag should you ever need one, whether for a quick programming solution or an amusing anecdote.

But even then, that's not the end of it. 'Learning' is often seen as a very specific thing. It sounds like LinkedIn wisdom, but it's true that learning happens everywhere, from the quick blogs that give you an insight into what's important to the weighty tomes that contain everything you need to know about Node.js Design Patterns and using Python to build machine learning models. Today, then, it's all about navigating these different ways of learning.

It's about being aware of what's important and making sure you're using the resources that will help you not only solve a problem or get the job done, but also think better and become more creative. There's no one 'right way' to learn: smart learners are always open to new experiences and are medium agnostic. If you don't know where to start, download and read our free Skill Up Year in Review report. Covering the trends that defined tech in 2015, it also looks ahead to the future, providing you with a useful learning roadmap.