Tech Guides

Stack Structure for Managing Game State

Ryan Roden-Corrent
05 Dec 2016
8 min read
Structuring the flow of logic in a game can be challenging. If you're not careful, you quickly end up with a scattered collection of state variables and conditionals that is difficult to wrap your head around. In my past two game projects, I found it helpful to structure my game flow as a stack of states. In this article, I'll give a quick overview of this technique and some examples of what makes it useful. The example code is written in D, but it should be pretty easy to apply in any language.

Stacking States for Isolation

Stacking states provides a nice way to isolate chunks of game logic from one another. I leveraged this while making damage_control, a game reminiscent of the Arcade/SNES title Rampart. In it, a match is divided into rounds, and each round passes through a series of phases: first you place some turrets in your territory, then you fire at your opponent, and then you try to repair the damage done during the firing phase. Before each phase, a banner scrolls across the screen telling the player what phase they are in. Here's the logic that sets up a new round (simplified from the original source for clarity):

```d
game.states.push(
  new ShowBanner("Place Turrets", game),
  new PlaceTurrets(game),
  new ShowBanner("Fire!", game),
  new Fire(game, _currentRound),
  new ShowBanner("Rebuild!", game),
  new PlaceWalls(game),
  new StatsSummary(game));
```

Because all of the states can be stacked up at once within a single function, none of the states has to be aware of what state comes next. For example, PlaceWalls doesn't have to know to show a stats summary when it ends; it just pops itself off the stack when done and lets the next state kick in. The code shown above resides in the StartRound state, which sits at the bottom of the state stack. Once all the phases for the current round are popped, we once again enter StartRound and push a new set of states.
The flow of states looks like this (the right side represents the top of the stack, or the active state):

```
StartRound
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire | ShowBanner | PlaceTurrets | ShowBanner
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire | ShowBanner | PlaceTurrets
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire | ShowBanner
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire
StartRound | StatsSummary | PlaceWalls | ShowBanner
StartRound | StatsSummary | PlaceWalls
StartRound | StatsSummary
StartRound
StartRound | StatsSummary | PlaceWalls | ShowBanner | Fire | ShowBanner | PlaceTurrets | ShowBanner
... and so on ...
```

I'll provide another example at the end of the article, but first I'll discuss the implementation.

The State

```d
interface State(T) {
  void enter(T);
  void exit(T);
  void run(T);
}
```

T is a generic type here, and represents whatever kind of object the states will operate on. For example, it might be a Game object that provides access to game entities, resources, input devices, and more. At any given time, you have a single active state; run is executed once for each update loop of the game. enter is called whenever a state becomes active, before the first call to run. This allows the state to perform any preparation it needs before it begins its normal flow of logic. Similarly, exit allows a state to perform some sort of tear-down before it becomes inactive. Note that enter and exit are not equivalent to a constructor and destructor; we will see later that a single state may enter and exit multiple times during its life.

As an example, let's take the PlaceWalls state from earlier: enter might start a timer for how long the state should last, run would process input from the player to move and place pieces, and exit would mark off areas that the player had enclosed.

The Stack

The StateStack itself is pretty straightforward as well.
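To make the interface concrete before diving into the stack itself, here is a minimal Python sketch of the same three-method State protocol. The ShowBanner body and the list-based stand-in for the game object are hypothetical illustrations, not code from the article, which is written in D:

```python
# A minimal Python rendering of the State interface from the article.
class State:
    def enter(self, obj):
        """Called when the state becomes active, before the first run()."""

    def exit(self, obj):
        """Called when the state becomes inactive."""

    def run(self, obj):
        """Called once per update loop while this state is active."""


class ShowBanner(State):
    """A toy state: 'game' here is just a list recording what happened."""

    def __init__(self, text):
        self.text = text

    def enter(self, game):
        game.append(f"banner-in: {self.text}")   # start scrolling the banner

    def run(self, game):
        game.append(f"banner-run: {self.text}")  # animate for one update

    def exit(self, game):
        game.append(f"banner-out: {self.text}")  # clean up the banner


game = []
banner = ShowBanner("Place Turrets")
banner.enter(game)
banner.run(game)
banner.exit(game)
print(game)
```

Running this shows the enter -> run -> exit ordering the stack must guarantee for each state.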
It only needs to support three operations:

- push: place a state on top of the stack.
- pop: remove the state on top of the stack.
- run: cause the state on top of the stack to process its object.

The only bit of trickiness comes in managing those enter and exit calls mentioned earlier. The state stack must ensure that the following happens during a state transition:

- enter is called once, before run, on a state that was previously inactive.
- exit is called on a state that becomes inactive.
- enter and exit are called an equal number of times during a state's life.

```d
struct StateStack(T) {
  private {
    bool _entered;
    SList!State _stack;
    T _object;
  }

  void push(State!T[] states ...) {
    if (_entered) {
      _stack.top.exit(_object);
      _entered = false;
    }

    // Note that we push the new states, but do _not_ call enter() yet.
    // If push is called again before run, we only want to enter the top state.
    foreach_reverse(state ; states) {
      _stack.insertFront(state);
    }
  }

  void pop() {
    // get a ref to the current state; top may change during exit
    auto popped = _stack.top;
    _stack.removeFront;

    if (_entered) {
      // the state we are popping had been entered, so we need to exit it
      _entered = false;
      popped.exit(_object);
    }
  }

  void run(T obj) {
    // cache obj for calls to exit() that are triggered by pop()
    _object = obj;

    // top.enter() could push/pop, so keep going until the top state is entered
    while(!_entered) {
      _entered = true;
      top.enter(obj);
    }

    // finally, our stack has stabilized
    top.run(obj);
  }
}
```

The implementation is mostly straightforward, but there are a few caveats. It is valid (and useful) for a state to push() and pop() states during its enter. In the previous example, StartRound pushes a number of states during enter. Therefore, implementing StateStack.run like so would be incorrect:

```d
if (!_entered) {
  _entered = true;
  top.enter(obj);
}
top.run(obj);
```

After pushing StartRound and calling StateStack.run, run would call StartRound.enter, which would push more states onto the stack.
It would then call top.run(obj) on whatever state was last pushed, which hasn't been entered yet! For this reason, run uses the while (!_entered) loop to call enter until the stack 'stabilizes'. Similarly, a state may push or pop states during its exit call. To support this, we need to cache the object that gets passed in to run so it can be used by pop.

Dissolving Complex Logic Flows

In Terra Arcana, a turn-based strategy game I developed, the StateStack made the flow of combat manageable. Here's a quick description of the rules regarding attacks:

- The attacker launches one or more strikes against the defender.
- Each strike may hit (dealing damage or some effect) or miss.
- If the defender's health has dropped to 0, they are destroyed.
- The defender may get a chance to counter-attack if:
  - They were not destroyed by the initial attack.
  - They have an attack that is in range of the attacker.
  - They have enough AP (action points) to use said attack.
- The counter-attack, like the initial attack, may have multiple strikes.
- The counter-attack may destroy the attacker.
- You cannot counter-attack a counter-attack.

Now consider that an AOE attack may hit multiple defenders, each of which gets a chance to counter-attack! Computing the result of this isn't so bad -- you can probably imagine a series of if/else statements that could do the job in a single pass. The difficulty is depicting the result to the player. We need to play animations for attacks and unit destruction, pop up text to indicate damage and status effects (or lack thereof), manipulate health/AP bars on the UI, and play sound effects at various points throughout the process. This all happens over the course of multiple update cycles rather than a single function call, so managing it with a single function would involve a whole mess of state variables (attackCount, isAnimating, isDefenderDestroyed, isCounterAttackInProgress, etc.).
With a StateStack, we can separate chunks of logic like applying damage or status effects, destroying a unit, and initiating a counter-attack into their own independent states. When an attack begins, you push a whole bunch of these onto the stack at once, and then let everything play out. Here's an excerpt of code that initiates an attack:

```d
battle.states.popState();

foreach(unit ; unitsAffected) {
  battle.states.push(new PerformCounter(unit, _actor));
}

foreach(unit ; unitsAffected) {
  battle.states.push(new CheckUnitDestruction(unit));

  for(int i = 0 ; i < _action.hits ; i++) {
    battle.states.push(new ApplyEffect(_action, unit));
  }
}
```

Remember that we are dealing with a stack, so states pushed later end up at the top (ApplyEffect happens before CheckUnitDestruction). This logic resides in the PerformAction state, so the first call removes this state from the stack before pushing the rest on. To understand this a bit better, consider the following scenario: a unit launches an attack that hits twice. The target is not destroyed, and is capable of countering with an attack that hits three times. The states on the stack would progress like so (where the right side represents the top of the stack):

```
PerformAction
PerformCounter | CheckUnitDestruction | ApplyEffect | ApplyEffect
PerformCounter | CheckUnitDestruction | ApplyEffect
PerformCounter | CheckUnitDestruction
PerformCounter
CheckUnitDestruction | ApplyEffect | ApplyEffect | ApplyEffect
CheckUnitDestruction | ApplyEffect | ApplyEffect
CheckUnitDestruction | ApplyEffect
CheckUnitDestruction
```

Note that when PerformCounter becomes the active state, it replaces itself with three ApplyEffects and a CheckUnitDestruction. The states nicely encapsulate specific chunks of game logic, so we get to reuse the same states in PerformAction and PerformCounter.

About the Author

Ryan Roden-Corrent is a software developer by trade and hobby.
He is an active contributor to the free/open-source software community and has a passion for simple but effective tools. He started gaming at a young age and dabbles in all aspects of game development, from coding to art and music. He's also an aspiring musician and yoga teacher. You can find his open source work here and Creative Commons art here.

Is 2015 the Year of Deep Learning?

Akram Hussain
18 Mar 2015
4 min read
The new phenomenon to hit the world of 'Big Data' seems to be 'Deep Learning'. I've read many articles and papers where people question whether there's a future for it, or if it's just a buzzword that will die out like many a term before it. Likewise, I have seen people who are genuinely excited and truly believe it is the future of artificial intelligence; the one solution that can greatly improve the accuracy of our data and the development of our systems. Deep learning is currently a very active research area; it is by no means established as an industry standard, but rather one that is picking up pace and brings a strong promise of being a game changer when dealing with raw, unstructured data.

So what is Deep Learning?

Deep learning is a concept that grew out of machine learning. In very simple terms, we can think of machine learning as a method of teaching machines (using complex algorithms to form neural networks) to make improved predictions of outcomes based on patterns and behaviour in initial data sets. Deep learning goes a step further: the idea is based around a set of techniques used to train machines (neural networks) in processing information, with levels of accuracy nearly equivalent to that of a human eye. Deep learning is currently one of the best providers of solutions to problems in image recognition, speech recognition, object recognition, and natural language processing. There is a growing number of libraries available, in a wide range of languages (Python, R, Java), and frameworks such as Caffe, Theano, Torch, H2O, Deeplearning4j, DeepDist, and so on.

How does Deep Learning work?

The central idea is the 'Deep Neural Network'. Deep Neural Networks take traditional neural networks (or artificial neural networks) and build them on top of one another to form layers that are represented in a hierarchy. Deep learning allows each layer in the hierarchy to learn more about the qualities of the initial data.
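As a toy illustration of that layering idea (not a real neural network; the "layers" here are hard-coded functions rather than learned ones), each layer can be thought of as a function whose output feeds the next layer's input:

```python
# Toy sketch: each "layer" is just a function; deeper layers receive the
# previous layer's output as their input. Real networks learn these
# transformations from data instead of hard-coding them.
def layer1(pixels):
    # lowest layer: detect "edges" (here, just differences between neighbours)
    return [b - a for a, b in zip(pixels, pixels[1:])]

def layer2(edges):
    # next layer: combine the edges into a cruder, higher-level summary
    return sum(1 for e in edges if e != 0)

raw_input = [0, 0, 5, 5, 0]      # a stand-in for raw image data
features = layer1(raw_input)     # the level-one output...
prediction = layer2(features)    # ...becomes the level-two input
print(features, prediction)
```

The point is only the wiring: each level consumes what the level below produced, refining the raw input step by step.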
To put this in perspective: the output of the data at level one becomes the input of the data at level two. The same filtering process is repeated a number of times until the level of accuracy allows the machine to identify its goal as accurately as possible. It's essentially a repeated process that keeps refining the initial dataset.

Here is a simple example of deep learning. Imagine a face. We as humans are very good at making sense of what our eyes show us, all while doing it without even realising. We can easily make out one's face shape, eyes, ears, nose, mouth, and so on. We take this for granted and don't fully appreciate how difficult (and complex) it can get when writing programs for machines to do what comes naturally to us. The difficulty for machines in this case is pattern recognition: identifying edges, shapes, objects, etc. The aim is to develop these deep neural networks by increasing and improving the number of layers, training each network to learn more about the data to the point where (in our example) it's equal to human accuracy.

What is the future of Deep Learning?

Deep learning seems to have a bright future for sure, and not because it is a new concept; I would actually argue it's now practical rather than theoretical. We can expect to see the development of new tools, libraries, and platforms, and even improvements to current technologies such as Hadoop, to accommodate the growth of deep learning. However, it may not be all smooth sailing. Deep learning is still a very difficult and time-consuming task to understand, especially when trying to optimise networks as datasets grow larger and larger; surely they will be prone to errors? Additionally, the hierarchy of networks formed would surely have to be scaled up for larger, more complex, and more data-intensive AI problems.

Nonetheless, the popularity of deep learning has seen large organisations invest heavily: Yahoo, Facebook, Google's acquisition of DeepMind for $400 million, and Twitter's purchase of Madbits.
They are just a few of the high-profile investments amongst many. 2015 really does seem like the year deep learning will show its true potential. Prepare for the advent of deep learning by ensuring you know all there is to know about machine learning with our article: read 'How to do Machine Learning with Python' now, and discover more machine learning tutorials and content on our dedicated page.

Transparency and NW.js

Adam Lynch
07 Jan 2015
3 min read
Yes, NW.js does support transparency, although it is disabled by default. One way to enable transparency is to add the transparent property to your application's manifest, like this:

```json
{
  "name": "my-app",
  "main": "index.html",
  "description": "My app",
  "version": "0.0.1",
  "window": {
    "transparent": true
  }
}
```

Transparency will then be enabled for the main window of your application from the start. Now, it's play time. Try giving a page's body a transparent or semi-transparent background color and any children an opaque background color in your CSS, like this:

```css
body {
  background: transparent; /* or background: rgba(255, 255, 255, 0.5); */
}

body > * {
  background: #fff;
}
```

I could spend all day doing this.

Programmatically enabling transparency

The transparent option can also be passed when creating a new window:

```js
var gui = require('nw.gui');

var newWindow = gui.Window.open('other.html', {
  position: 'center',
  width: 600,
  height: 800,
  transparent: true
});

newWindow.show();
```

Whether you're working with the current window or another window you've just spawned, transparency can be toggled programmatically per window on the fly thanks to the Window API:

```js
newWindow.setTransparent(true);
console.log(newWindow.isTransparent); // true
```

The window's setTransparent method allows you to enable or disable transparency, and its isTransparent property contains a Boolean indicating whether it's enabled right now.

Support

Unfortunately, there are always exceptions. Transparency isn't supported at all on Windows XP or earlier. In some cases it might not work on later Windows versions either, including when accessing the machine via Microsoft Remote Desktop or with some unusual themes or configurations. On Linux, transparency is supported if the window manager supports compositing. Aside from this, you'll also need to start your application with a couple of arguments.
These can be set in your app's manifest under chromium-args:

```json
{
  "name": "my-app",
  "main": "index.html",
  "description": "My app",
  "version": "0.0.1",
  "window": {
    "transparent": true
  },
  "chromium-args": "--enable-transparent-visuals --disable-gpu"
}
```

Tips and noteworthy side-effects

It's best to make your app frameless if it will be semi-transparent; otherwise it will look a bit strange. This depends on your use case, of course. Strangely, enabling transparency for a window on Mac OS X will make its frame and toolbar transparent:

[Screenshot: a transparent window frame on Mac OS X]

Between the community and the developers behind NW.js, there isn't certainty about whether or not windows with transparency enabled should have a shadow like windows typically do. At the time of writing, if transparency is enabled in your manifest, for example, your window will not have a shadow, even if all of its content is completely opaque.

Click-through

NW.js even supports clicking through your transparent app to whatever is behind it on your desktop, for example. This is enabled by adding a couple of runtime arguments to the chromium-args in your manifest, namely --disable-gpu and --force-cpu-draw:

```json
{
  "name": "my-app",
  "main": "index.html",
  "description": "My app",
  "version": "0.0.1",
  "window": {
    "transparent": true
  },
  "chromium-args": "--disable-gpu --force-cpu-draw"
}
```

As of right now, this is only supported on Mac OS X and Windows, and it only works with non-resizable frameless windows, although there may be exceptions depending on the operating system. One other thing to note is that click-through will only be possible on areas of your app that are completely transparent. If the target element of the click, or an ancestor, has a background color, even if it's only 1% opaque in the alpha channel, the click will not go through your application to whatever is behind it.

About the Author

Adam Lynch is a Teamwork Chat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.

5 more 2D Game Engines I didn’t consider

Ed Bowkett
09 Jan 2015
4 min read
In my recent blog, I covered 5 game engines that you can use to create 2D games. The response in the comments and on other social media websites was encouraging, but also pointed out other 2D game engines. Having briefly looked at these, I thought it would be a good idea to list those alternatives here. In this blog we will cover 5 more game engines you can use to create 2D games. 2D games are appealing for a wide range of reasons: they're great for the indie game scene, they're great for learning the fundamentals of game development, and they're a great place to start coding while having fun doing it. I've thrown in some odd ones that you might not have considered before, and remember, this isn't a definitive list! Just my thoughts!

LÖVE2D

[Image: a platform game made with LÖVE2D]

LÖVE2D is a 2D game framework that you can use to make 2D games using Lua, a lightweight scripting language. It can be used across Windows, Linux, and Mac, and costs nothing to use. The code is easy enough to use, though it might be useful to learn Lua as well. Once you get over that, games can be created with ease, and clones of Mario and Snake have become ever popular with this engine. It has support for Box2D, networking abilities, and user-created plugins. The possible downside to LÖVE is that it's desktop-only; however, as a way to learn how to program games, it's a good starting point.

Libgdx

[Image: a puzzle game in Libgdx]

Libgdx is a game development framework written in Java. It's cross-platform, which is a major plus when developing games, and can be deployed across Windows, Linux, and Mac. It's also free, which is a benefit to aspiring game developers. It has third-party support for other tools such as Spine and Nextpeer, whilst also providing Box2D physics and rendering capabilities through OpenGL. Example projects include puzzle games, tower defense games, and platformers. Extremely fun for indie developers and hobbyists.
Just learn Java first…

GameSalad

[Image: creating a game using GameSalad]

Similar to its rivals Construct 2 and GameMaker, GameSalad is a game engine aimed at non-programmers. It uses a drag-and-drop system, similar to its competitors. A further benefit of GameSalad is that it doesn't require any programming knowledge; instead, you use Actors that define the rules and behaviors of certain game objects. It's cross-platform, which is another big plus; however, to unlock its full cross-platform capabilities you need to pay $299 a year, which is excessive for a game engine that, whilst good for hobbyists and beginner game developers, doesn't offer great value for what it does. Still, you can try it for free, and it has the same qualities as the other engines.

Stencyl

Stencyl is a game engine that is free for Flash (other platforms need to be paid for) and is another great alternative among the drag-and-drop game engines. It again supports multiple platforms, and offers shaders, an actor system, animations, and support for iOS 8. The cost isn't too bad either: the cheaper option, with the ability to publish on web and desktop, is priced at $99 a year, and studio is priced at $199 a year.

V-Play

[Image: a basic 2D game using V-Play]

V-Play is a game development tool I actually know very little about, but it was pitched by our readers as a good alternative. V-Play appears to be component-based, written in JavaScript and QML (Qt Markup Language), and cross-platform; it even appears to have plugins for monetizing your game and analytics for assessing it. It also allows for touch input, includes level design tools, and uses Box2D as its physics engine. Whilst I know this is brief about what V-Play does and offers, I've not really come across it before, so it's possibly one to write a blog on in the future!

This blog was to show off the other frameworks I had not considered in my previous blog, proposed by our readers.
It shows that there are always more options out there; it all depends on what you want, how much you want to spend, and the quality you expect. These are all valid choices and have opened me up to game development tools I'd not tinkered with before, so Christmas should be fun for me!

BrickCoin might just change your mind about cryptocurrencies

Savia Lobo
11 Apr 2018
3 min read
At the start of 2018, the cryptocurrency boom seemed to be at an end. Bitcoin's price plunged in a matter of weeks from a mid-December 2017 high of $20,000 to less than $10,000. Suddenly everything seemed unpredictable and volatile. The cryptocurrency honeymoon was at an end. However, while many are starting to feel cautious about investing, a new cryptocurrency might change the game. BrickCoin might well be the cryptocurrency to reinvigorate a world that's shifted from optimism to pessimism in just a couple of months. But what is BrickCoin? How is it different from other cryptocurrencies? And most importantly, why might you be confident in its success?

What is BrickCoin?

BrickCoin is also a blockchain-based currency, but one backed by real estate (REITs). It aims to be the first regulated, KYC- and AML-compliant real estate crypto. Real estate is comprehensible to regulators and is accepted by many as a robust asset class. This is a major distinguishing point between BrickCoin and other cryptocurrencies. Traditional money-saving methods, such as savings accounts and fixed deposits, are not inflation-proof, and they currently offer very low levels of interest. On the other hand, complex investment options such as hedge funds are typically only available to wealthy individuals, as they require large initial investments; they also do not offer ready liquidity and are vulnerable to bankruptcy. BrickCoin comes to the rescue here, as it claims to be inflation-proof. Find out more about BrickCoin here.

The key features of BrickCoin

- It is a savings token which can be bought with traditional or digital currency.
- It represents an investment in a piece of commercial, debt-free real estate.
- The real estate is held as part of a very secure, high-value, debt-free REIT.
- BrickCoins are kept in a mobile digital wallet.
- All transactions are fully managed, validated, and trackable by blockchain technology.
- BrickCoins can be converted into fiat currency instantly.
Also read about Crypto-ML, a machine learning powered cryptocurrency platform.

BrickCoin is essentially the next step in the evolution of cryptocurrency. It is a savings scheme backed by a non-inflationary asset, commercial debt-free real estate, to deliver stable capital preservation. As a cryptocurrency, it allows savers to convert their money to and from BrickCoin tokens with the full security and convenience of blockchain technology. BrickCoin will be the first cryptocurrency that bridges the gap between the necessary reliance on fiat currencies and the asset-backed wealth-creation opportunities that are often out of reach for many ordinary savers.

Crypto News

- Cryptojacking is a growing cybersecurity threat, report warns
- Coinbase Commerce API launches
- Crypto-ML, a machine learning powered cryptocurrency platform

Crypto Op-Ed

- There and back again: Decrypting Bitcoin's 2017 journey from $1000 to $20000
- Will Ethereum eclipse Bitcoin?
- Beyond the Bitcoin: How cryptocurrency can make a difference in hurricane disaster relief

Cryptocurrency Tutorials

- Predicting Bitcoin price from historical and live data
- How to mine bitcoin with your Raspberry Pi
- Protecting Your Bitcoins

Who are set to be the biggest players in IoT?

Raka Mahesa
07 Aug 2017
5 min read
The Internet of Things, also known as IoT, may sound like a technological buzzword, but in reality it's a phenomenon that's taking place right now, as more and more devices get connected to the Internet. It's an ecosystem with $6.7 billion in revenue in 2015 alone, and it is projected to grow even more in the future. So, with those kinds of numbers, who are the biggest players in an ecosystem with such high value?

Let's clear up one thing before we go further: how exactly do we define "the biggest players" in a technological ecosystem? After all, there are many, many ways to measure the size of an industry player. The quickest way is probably to simply check their revenues or their market share numbers. Another way, and the one we'll use in this post, is to see how much influence a company has on the ecosystem.

Whatever action they take, the biggest players in an ecosystem have an impact that can be felt throughout the industry. For example, when Apple unveiled that the latest iPhone had no headphone jack, many smartphone manufacturers followed suit, and a lot of audio hardware vendors introduced new wireless headsets. Or imagine if Samsung, the company with the biggest smartphone market share, suddenly stopped using Android and instead used their own mobile platform; the impact would be massive. The bigger the player, the bigger the impact it has on the ecosystem.

IoT companies

So, with that cleared up, let's talk about IoT companies. Companies that dabble in the IoT ecosystem can be divided into two categories: those that focus on consumer products, like Amazon and Apple, and those that focus on enterprise products, like Cisco, Oracle, and Salesforce. Companies that offer solutions for both segments, like Samsung, tend to fall into the consumer-focused category.
Companies that focus on enterprise products are, with a few exceptions, driven more by their sales performance than by technological innovation. Because of that, they tend not to have as much impact on the ecosystem as their consumer-focused counterparts. That's why we'll focus on consumer-product companies when talking about the biggest players in IoT.

Big players: ARM and Amazon

Well, it's finally time for the big reveal of who the biggest players in the Internet of Things are. The IoT ecosystem is pretty interesting; it has so many components that it's quite difficult for one single company to tackle the whole thing. And it has not matured yet, which means there are still many segments without a clear leader, ready to be taken by any company that can rise to the challenge.

That said, there is actually one company that drives the whole ecosystem: ARM, the company whose chip architecture became the basis of the entire smartphone industry. If you have a smart device that can process information and do calculations, there is a high chance that it's powered by an ARM-based chipset. With such widespread usage, any technological progress made by the company will increase the capability of IoT technology as a whole.

While ARM has the market share advantage on the hardware side, it's Amazon who has the market share advantage on the software side with AWS. Similar to how Google has a hand in every aspect of the web, Amazon seems to have a hand in every part of IoT. They provide the services to connect smart devices to the Internet, as well as the platform for developers to host their cloud apps. And for mainstream consumers, Amazon directly sells smart devices like the Amazon Dash and Amazon Echo, the latter of which also serves as a platform for developers to create home applications. In short, wherever you look in the IoT ecosystem, Amazon usually has a part in it.
Wearables

If there is one segment of IoT that Amazon doesn't seem interested in, it is probably wearables. It was predicted that this segment of the market would be dominated by smartwatches, but instead, the fitness trackers from Fitbit won the category. With wearable devices being much more personal than smartphones, if Fitbit can expand beyond fitness tracking, they'll become a dominant force in the IoT ecosystem.

The smart home

Surprisingly, no one seems to have conquered the most obvious space for the Internet of Things: the smart home segment. The leading companies in this segment seem to be Amazon, Apple, and Google, but none of them is the dominant force yet. Apple plays with its HomeKit library, which doesn't seem to be catching much interest, though maybe it'll have better luck with the Apple HomePod. Google is actually the one with the most potential here, with Google Home, the Google Cloud IoT service, and its embedded version of Android. However, other than Google Home, these projects are still in beta and not ready for launch yet.

Those are the biggest players in the still-evolving ecosystem of the Internet of Things. It's still pretty early, however; a lot of things can still change, and what is true right now may not be true in a couple of years. After all, before the iPhone, no one expected Apple to become the biggest player in the mobile phone industry.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99
What is JAMstack and why should I care?
Antonio Cucciniello
07 May 2017
4 min read
What is JAMstack?

JAMstack, according to the project site, offers you a "modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup." As the acronym suggests, it uses JavaScript, APIs, and Markup as the core components of the development stack. It can be used in any website or web application that does not depend on tight coupling between the client and the server. It sounds simple, but let's dive a little deeper into the three parts.

JavaScript

The JavaScript is basically any form of client-side JavaScript. It can be code that handles requests and responses, front-end frameworks such as React and Angular, any client-side libraries, or plain old JavaScript.

APIs

The APIs consist of any and all server-side processes or database commands that your web app needs. These APIs can be third-party APIs, or a custom API that you created for this application. The client communicates with the APIs through HTTP calls.

Markup

This is Markup that is templated and built at deploy time, using a build tool such as grunt or a static site generator. Now that you know the individual parts, let's discuss how you could get the most out of this stack with a few best practices.

JAMstack best practices

Hosted on a Content Delivery Network

It is a good idea to distribute all the code to CDNs to reduce the load time of each page. Because JAMstack websites do not rely on server-side code, they can be distributed via CDNs much more easily.

All code in Git

To increase development speed and let others contribute to your site, all of the code should be in source control. If using git, developers should be able to simply clone the repository, install the third-party packages that the project requires, and from there be smooth sailing to making changes.
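The "prebuilt Markup" idea above is easy to demystify with a toy example. The sketch below (plain Python, with made-up template and page names, not any particular static site generator) renders a template into finished HTML at build time, which is essentially what a static site generator does on each deploy:

```python
from string import Template

# A tiny stand-in for a static site generator: at *build* time we
# combine a template with data, so the CDN only ever serves
# finished HTML (the "M" in JAMstack).
PAGE_TEMPLATE = Template(
    "<html><head><title>$title</title></head>"
    "<body><h1>$title</h1><p>$body</p></body></html>"
)

def build_page(title, body):
    """Render one page of prebuilt Markup."""
    return PAGE_TEMPLATE.substitute(title=title, body=body)

if __name__ == "__main__":
    html = build_page("Hello JAMstack", "This page was built at deploy time.")
    print(html)
```

A real generator adds layouts, asset pipelines, and a content directory, but the principle is the same: all the templating work happens once, before the page is ever requested.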
Build tools to automate builds

Use tools like Babel, webpack, and Browserify to automate repetitive tasks and reduce development time. Builds should run automatically, so that users see your changes as soon as they are made.

Use atomic deploys

Atomic deploys push all of your changes at once, after all of your files are built. This ensures the changes are only displayed once every file has been uploaded and built.

Instant cache purge

The cache on your CDN may hold the old assets after you create a new build and deploy your changes. To make sure clients see the changes you implemented, you want to be able to clear the cache of the CDNs that host your web application.

Enough about best practices already, how about the benefits? Why should YOU care? There are a few benefits for you as a developer in building your next application using JAMstack. Let's discuss them.

Security

Here we are removing the server-side parts that would normally work closely with the client-side components. Removing the server-side components reduces the complexity of the application, which makes the client-side components easier to build and maintain. The simpler development process in turn improves the security and reliability of your website.

Cost

Since we are removing the server-side parts, we do not need as many servers to host the application, nor as many backend engineers to handle server-side functionality, reducing your overall cost significantly.

Speed

Since we are using Markup prebuilt at deploy time, we reduce the amount of work that needs to happen at runtime. That increases the speed of your site, because each page is already built when it is requested.

JAMstack - key takeaways

In the end, JAMstack is just a web development architecture that builds web apps from JavaScript, APIs, and prebuilt Markup.
It has several advantages, such as increased security, reduced cost, and faster speed. Here is a link to some examples of web apps built with JAMstack. Under each one is a list of the tools used to make the app, including the front-end frameworks, static site builders, build tools, and various APIs. If you enjoyed this post, share it on Twitter! Leave a comment below and let me know your thoughts on JAMstack and how you will use it in your future applications!

Possible resources

Check out my GitHub
View my personal blog
Check out my YouTube channel
This is a great talk on JAMstack

5 Alternative Microboards to Raspberry Pi
Ed Bowkett
30 Oct 2014
4 min read
This blog will show you five alternative boards to Raspberry Pi that are currently on the market and which you might not have considered before, as they aren't as well known. There are others out there, but these are the ones that I've either dabbled in or have researched and am excited about.

Hummingboard

Figure 1: Hummingboard

The Hummingboard has been argued to be more powerful than a Raspberry Pi, and the numbers do seem to support this: 1 GHz vs. 700 MHz, and more RAM in the available models, varying from 512 MB to 1 GB. What's even better with the Hummingboard is the ability to take out the CPU and memory module should you need to upgrade them in the future. It also runs many open source operating systems, such as Debian, XBMC, and Android. However, it is more costly than a Raspberry Pi, coming in at $55 for the 512 MB model and a pricey $100 for the 1 GB model. Still, I feel that the performance per cost is worth it, and it will be interesting to see what the community does with the Hummingboard.

Banana Pi

Figure 2: Banana Pi

While some people might assume from the name that the Banana Pi is a clone of the famous Raspberry Pi, it's actually even better. With 1 GB of RAM and a dual-core processor running at 1 GHz, it's more powerful than its namesake (albeit still a fruit). It includes an Ethernet port, a micro-USB port, and a DSI for graphics, and it can run Android, Ubuntu, and Debian, as well as Raspberry Pi and Cubieboard images. If you are seeking an upgrade from a Raspberry Pi, this is quite possibly the board to go for. It will set you back around $50, but again, considering the performance you get for the price, this is a great deal.

Cubieboard

Figure 3: Cubieboard

The Cubieboard has been around for a couple of years now, so it can be considered an early-adoption board.
Nonetheless, the Cubieboard is very powerful: it runs a 1 GHz processor, has an extra infrared sensor, which is handy for use as a media center, and comes with a SATA port. A further compelling point, alongside its performance, is its cost: just $49. Considering the Raspberry Pi sells at $35, this is not much of a price leap and gives you much more zing for your bucks. Initially, Arduino and Raspberry Pi had huge communities while the Cubieboard didn't. However, this is changing, and hence the Cubieboard deserves a mention.

Intel Galileo

Figure 4: Intel Galileo

Arduino was one of the first boards to be sold to the mass market. Intel took this idea and developed their own boards, which led to the birth of the Intel Galileo. Arduino-certified, this board combines Intel technology with Arduino's ready-made expansion cards (shields) as well as the Arduino libraries. The Galileo can be programmed from OS X, Windows, and Linux. However, a real negative for the Galileo is its performance, coming in at just 400 MHz. This, combined with its $70 cost, makes it one of the weakest in terms of price-performance on this list. However, if you want to develop on Windows with the relative safety of the Arduino libraries, this is probably the board for you.

Raspberry Pi Pad

OK, OK. I know this isn't strictly a microboard. However, the Raspberry Pi Pad was announced on the 21st of October, and it's a pretty big deal. Essentially, it's a touchscreen display that runs on a Raspberry Pi, so you can build a Raspberry Pi tablet. That's pretty impressive and awesome at the same time. I think this will be the thing to watch out for in 2015, and it will be cool to see what the community makes of it.

This blog covered alternative microboards that you might not have considered before. It threw a curveball at the end and generally tried to provide boards other than the usual Raspberry Pi, Beaglebone, and Arduino.
About the author

Ed Bowkett is Category Manager of Game Development and Hardware at Packt Publishing. When not imagining what the future of games will be in 5 years' time, he is usually researching how to further automate his home using the latest ARM boards.

Introduction to Keras
Janu Verma
13 Jan 2017
6 min read
Keras is a high-level library for deep learning, built on top of Theano and TensorFlow. It is written in Python and provides a scikit-learn-style API for building neural networks. It enables developers to quickly build neural networks without worrying about the mathematical details of tensor algebra, optimization methods, and numerical methods. The key idea behind Keras is to facilitate fast prototyping and experimentation. In the words of Francois Chollet, creator of Keras, "Being able to go from idea to result with the least possible delay is the key to doing good research."

Key features of Keras:

Either the Theano or the TensorFlow backend can be used.
Supports both CPU and GPU.
Keras is modular in the sense that each component of a neural network model is a separate, standalone module, and these modules can be combined to create new models.
New modules are easy to add.
Write only Python code.

Installation

Keras has the following dependencies: numpy, scipy, pyyaml, hdf5 (for saving/loading models), theano (for the Theano backend), and tensorflow (for the TensorFlow backend). The easiest way to install Keras is via the Python Package Index (PyPI):

sudo pip install keras

Example: MNIST digit classification using Keras

We will learn the basic functionality of Keras using an example: a simple neural network for classifying hand-written digits from the MNIST dataset. Classification of hand-written digits was the first big problem where deep learning outshone all the other known methods, and this paved the way for deep learning's successful track record.
Let's start by importing the data; we will use the sample of hand-written digits provided with the scikit-learn base package:

from sklearn import datasets

mnist = datasets.load_digits()
X = mnist.data
Y = mnist.target

Let's examine the data:

print X.shape, Y.shape
print X[0]
print Y[0]

Since we are working with numpy arrays, let's import numpy:

import numpy as np

# set seed
np.random.seed(1234)

Now we'll split the data into training and test sets by randomly picking 70% of the data points as a training set and keeping the remainder for validation:

from sklearn.cross_validation import train_test_split

train_X, test_X, train_y, test_y = train_test_split(X, Y, train_size=0.7, random_state=0)

Keras requires the labels to be one-hot encoded, i.e., the labels 1, 2, 3, etc. need to be converted to vectors like [1,0,0,...], [0,1,0,...], [0,0,1,...], respectively:

def one_hot_encode_object_array(arr):
    '''One hot encode a numpy array of objects (e.g. strings)'''
    uniques, ids = np.unique(arr, return_inverse=True)
    return np_utils.to_categorical(ids, len(uniques))

# One hot encode labels for training and test sets.
train_y_ohe = one_hot_encode_object_array(train_y)
test_y_ohe = one_hot_encode_object_array(test_y)

We are now ready to build a neural network model. Start by importing the relevant classes from Keras:

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.utils import np_utils

In Keras, we have to specify the structure of the model before we can use it. A Sequential model is a linear stack of layers. There are other alternatives in Keras, but we will stick with Sequential for simplicity:

model = Sequential()

This creates an instance of the constructor; we don't have anything in the model as yet. As stated previously, Keras is modular and we can add different components to the model via modules. Let's add a fully connected layer with 32 units.
Each unit receives an input from every unit in the input layer, and since the number of units in the input is equal to the dimension (64) of the input vectors, we need the input shape to be 64. Keras uses the Dense module to create a fully connected layer:

model.add(Dense(32, input_shape=(64,)))

Next, we add an activation function after the first layer. We will use sigmoid activation; other choices like relu are also possible:

model.add(Activation('sigmoid'))

We can add any number of layers this way, but for simplicity we will restrict ourselves to one hidden layer. Add the output layer. Since the output is a 10-dimensional vector, we require the output layer to have 10 units:

model.add(Dense(10))

Add activation for the output layer. In classification tasks, we use softmax activation, which provides a probabilistic interpretation of the output labels:

model.add(Activation('softmax'))

Next, we need to configure the model. There are some more choices to make before we can run it, e.g., the optimization method, the loss function, and the metric of evaluation:

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

The compile method configures the model, which is now ready to be trained on data. Similar to sklearn, Keras has a fit method for training:

model.fit(train_X, train_y_ohe, nb_epoch=10, batch_size=30)

Training neural networks often involves minibatching, which means showing the network a subset of the data, adjusting the weights, and then showing it another subset of the data. When the network has seen all the data once, that's called an "epoch". Tuning the minibatch/epoch strategy is a somewhat problem-specific issue. After the model has trained, we can compute its accuracy on the validation set:

loss, accuracy = model.evaluate(test_X, test_y_ohe)
print accuracy

Conclusion

We have seen how a neural network can be built using Keras, and how easy and intuitive the Keras API is.
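Before wrapping up, the minibatch/epoch bookkeeping described above can be made concrete with a short numpy sketch. This is purely illustrative of the concept, not how Keras implements fit internally:

```python
import numpy as np

def iterate_minibatches(X, y, batch_size):
    """Yield successive (X_batch, y_batch) minibatches; one full
    pass over the shuffled data constitutes one epoch."""
    indices = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = indices[start:start + batch_size]
        yield X[batch], y[batch]

# One "epoch" over a toy dataset of 100 samples with batch size 30
# yields batches of 30, 30, 30, and 10 samples.
X = np.arange(100).reshape(100, 1)
y = np.arange(100)
n_batches = sum(1 for _ in iterate_minibatches(X, y, 30))
print(n_batches)  # 4
```

In a training loop, the weight update would happen once per yielded batch, and the outer loop over epochs would repeat until convergence.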
This is just an introduction, a hello-world program, if you will. There is a lot more functionality in Keras, including convolutional neural networks, recurrent neural networks, language modeling, Deep Dream, etc.

About the author

Janu Verma is a researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, the Tata Institute of Fundamental Research, the Indian Institute of Science, and the Indian Statistical Institute. He has written papers for IEEE VIS, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journal, etc. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email him to schedule a meeting.

Do you have the technical skills to command a $90,000 salary?
Packt Publishing
24 Jul 2015
2 min read
In the 2015 Skill Up survey, Packt talked to more than 20,000 people who work in IT globally to identify which skills are valued in technical roles and which trends are changing and emerging. Responses from the app development community provided us with great insight into how skills are rated across multiple industries, job roles, and experience levels.

The world of app development is highly varied, and can be super competitive too, so we wanted to find out which industries are best for those just entering the market. We also discovered which technologies are proving most popular and where you can earn the best salaries. And we had some very specific questions we wanted answered: How relevant are desktop developer skills? Which is the most popular platform for mobile development? Is functional programming the way of the future? What is the essential software choice for professional game development? Some of the results were surprising! Here's a taster of our findings...

If you are looking for your first role in app development, the government sector pays well for those with less experience. But competition is fierce, with under 5% of those working in this sector having less than 3 years' experience. Unsurprisingly, game developers reported the lowest average salaries across all industries. It's clear that game developers work for love, not money! So which type of developer earns the most, and which industry sector is the most lucrative? Experienced developers can find out who pays the most for their expertise and experience. Also discover what developers are building and what technologies they are using... here's a sneak peek.

And finally, what about the future? What is going to be the next big thing? Wearables, or Big Data perhaps? Is there a place for desktop in the mobile age?
Read the rest of the report to see what skills you need to build on and which technologies are poised to take the App Development world by storm so you can get ahead of the competition! Click here to download the full report
RxSwift Operators
Darren Sapalo
22 Apr 2016
6 min read
In the previous article, we talked about how the Rx framework for Swift can help in performing asynchronous tasks, creating an observable from a network request, dealing with streams of data, and handling errors and displaying successfully retrieved data elegantly on the main thread, coming from a background thread. This article talks about how to take advantage of operators on observables to transform data.

Hot and Cold Observables

There are different ways to create observables, and we saw an example previously using the Observable.create method. Conveniently, RxSwift provides an extension to arrays: the toObservable method.

var data = ["alpha": ["title": "Doctor Who"], "beta": ["title": "One Punch Man"]]
var dataObservable = data.toObservable()

Note, however, that the code inside the Observable.create method does not run when you call it. This is because it is a cold observable, meaning that an observer must be subscribed to the observable before the code defined in Observable.create will run. For the previous article, this means that calling Observable.create won't trigger the network query until an observer is subscribed to the observable. IntroToRx provides a fuller explanation of hot and cold observables in their article.

Rx Operators

When you begin to work with observables, you'll realize that RxSwift provides numerous functions that encourage you to think of processing data as streams or sequences. For example, you might want to filter an array of numbers to get only the even ones. You can do this using the filter operator on an observable:

var data = [1, 2, 3, 4, 5, 6, 7, 8]
var dataObservable = data.toObservable().filter { (elem: Int) -> Bool in
    return elem % 2 == 0
}

dataObservable.subscribeNext { (elem: Int) in
    print("Element value: \(elem)")
}

Chaining Operators

These operators can be chained together, which is actually much more readable (and easier to debug) than the deeply nested code caused by numerous callbacks.
For example, I might want to query a list of news articles, keep only the ones after a certain date, and take only three to display at a time:

API.rxGetAllNews()
    .filter { (elem: News) -> Bool in
        return elem.date.compare(dateParam) == NSOrderedDescending
    }
    .take(3)
    .subscribe(onNext: { (elem: News) in
        print(elem.description)
    })

Elegantly Handling Errors

Rx gives you control over your data streams so that you can handle errors more easily. For example, your network call might fail because you don't have any network connection. Some applications work better if they fall back to the data available on the local device. You can check the type of error (e.g., no server response) and substitute another Rx observable for the stream, while the observer code stays the same:

API.rxGetAllNews()
    .filter { (elem: News) -> Bool in
        return elem.date.compare(dateParam) == NSOrderedDescending
    }
    .take(3)
    .catchError { (e: ErrorType) -> Observable<News> in
        return LocalData.rxGetAllNewsFromCache()
    }
    .subscribe(onNext: { (elem: News) in
        print(elem.description)
    })

Cleaning up Your Data

One of my experiences where Rx proved useful was when I was retrieving JSON data from a server, and the JSON data had some items that needed to be merged. The data looked something like this:

[
    ["name": "apple", "count": 4],
    ["name": "orange", "count": 6],
    ["name": "grapes", "count": 4],
    ["name": "flour", "count": 2],
    ["name": "apple", "count": 7],
    ["name": "flour", "count": 1.3]
]

The problem is, I needed to update my local data based on the totals of these quantities, not create multiple rows/instances in my database! What I did first was transform the JSON array entries into an observable, emitting each element.
class func dictToObservable(dict: [NSDictionary]) -> Observable<NSDictionary> {
    return Observable.create { observer in
        dict.forEach { (e: NSDictionary) in
            observer.onNext(e)
        }
        observer.onCompleted()
        return NopDisposable.instance
    }
}

Afterwards, I called the observable and performed a reduce to merge the data:

class func mergeDuplicates(dict: [NSDictionary]) -> Observable<[NSMutableDictionary]> {
    let observable = dictToObservable(dict)

    return observable.reduce([], accumulator: { (var result, elem: NSDictionary) -> [NSMutableDictionary] in
        let filteredSet = result.filter { (filteredElem: NSDictionary) -> Bool in
            return filteredElem.valueForKey("name") as! String == elem.valueForKey("name") as! String
        }

        if filteredSet.count > 0 {
            if let element = filteredSet.first {
                let a = NSDecimalNumber(decimal: (element.valueForKey("count") as! NSNumber).decimalValue)
                let b = NSDecimalNumber(decimal: (elem.valueForKey("count") as! NSNumber).decimalValue)
                element.setValue(a.decimalNumberByAdding(b), forKey: "count")
            }
        } else {
            let m = NSMutableDictionary(dictionary: elem)
            m.setValue(NSDecimalNumber(decimal: (elem.valueForKey("count") as! NSNumber).decimalValue), forKey: "count")
            result.append(m)
        }
        return result
    })
}

I created an accumulator variable, initialized to [], an empty array. Then, for each element emitted by the observable, I checked whether the name already exists in the accumulator (result) by filtering the result. If filteredSet has more than zero entries, the name already exists; 'element' is then the instance inside result whose count should be updated, which ultimately updates the accumulator. If it doesn't exist, a new entry is added to result. Once all entries are processed, the accumulator is returned, to be used by the next emission, or as the final result after the data sequence completes.

Where Do I Go From Here?
The Rx community is slowly growing, with more and more people contributing to the documentation and bringing it to their languages and platforms. I highly suggest you go straight to their website and documentation for a more thorough introduction to the framework. This gentle introduction to Rx was meant to prepare you for the wealth of knowledge and great design patterns they have provided in the documentation! If you're having difficulty understanding streams, sequences, and what the operators do, RxMarbles.com provides interactive diagrams for some of the Rx operators. It's an intuitive way of playing with Rx without touching code, requiring only a high-level understanding. Go check them out! RxMarbles is also available on the Android platform.

About the Author

Darren Sapalo is a software developer, an advocate for UX, and a student taking up his Master's degree in Computer Science. He enjoyed developing games in his free time when he was twelve. Having finally finished his undergraduate thesis on computer vision, he took up industry work with Apollo Technologies Inc., developing for both the Android and iOS platforms.

When to buy off-the-shelf software and when to build it yourself
Hari Vignesh
12 Jun 2017
5 min read
Balancing your company's needs with respect to profitability, productivity, and scalability is both paramount and challenging, especially if your business is a startup. There will always be a fork in the road where you must pick one of two options: develop the software yourself, or buy it. Well, let me make it simple for you. Both options have their own pros and cons, and it is entirely up to you to weigh the trade-offs and come to a conclusion.

When to buy off-the-shelf software

Buying software is quite useful for small-scale startups when the technology dependency is not tightly coupled. When you don't need to worry about the dynamic changes of the business, and the software is just another utility with a lifespan of 5+ years, buying is a great idea. But let's also discuss a few other circumstances.

Budget limitations

Building new software and maintaining it costs more than buying production-ready software. Canned solutions are cheaper than building your own, and therefore can make much more financial sense for a company with a smaller budget.

Lack of technical proficiency

If you don't have an engineering team to build the software in the first place, hiring one and crafting the software will cost you a lot, if you want a quality outcome. So it would be wise to pass on the opportunity and buy, until you have such a team in place.

Time constraints

Time is a crucial factor for all businesses. You need to check whether you have a sufficient time window for creating proprietary software. If not, preferring tailor-made software is not a good idea, considering the design, development, and maintenance periods. Businesses that do not have this time available should not pursue it.

Open source

If the tool or software that you're looking for already exists in the open source market, then adopting it will be very cost efficient, even where a licensing fee applies.
Open source software is great and handy, and can be tailored or customized according to your business needs (although you cannot sell it). If productivity alone is what matters for the new software, using an open source product will really benefit you.

Not reinventing the wheel

If software for your business case is already production-ready somewhere else, reinventing the wheel is a waste of time and money. If you have a common business, like a restaurant, there are generally canned software solutions available that are already proven effective for your organization's purpose. Buying and using them is far more effective than building your own.

Business case and competition

If your business is, say, a retail furniture store, building amazing technology is unlikely to be the factor that sets you apart from your competition. Recognizing the actual needs of your business case is also important before spending money and effort on custom software.

When to build it yourself

Building software will cost you time and money. All we need to do is decide whether it is worth it or not. Let's discuss this in detail.

Not meeting your expectations

Even if there is canned software available to purchase, if you strongly feel that it does not meet your needs, you will be left with the only option of creating it yourself. Try customizing open source software first, if any exists. If your business has specialized needs, custom software may be better qualified to meet them.

Not blending with your existing system

If you already have a system in place, you need to ensure that the new software can work with it, or take over where the existing system left off, whether that means database migration, architectural fit, and so on. If the two programs do not communicate effectively, they may hinder your efficiency. If you build your own software, you can integrate it with a wider set of APIs from different software and data partners.
More productivity

When you have enough money to invest and your focus is squarely on productivity, building custom software can help your team be flexible and work smarter and faster, because you know exactly what you want. Training people on canned software costs more time, and there is also the issue of human error. You can create one comprehensive technology platform as opposed to using multiple different programs. An integrated platform can yield major efficiency gains, since all the data is in one place and users do not have to switch between different websites as part of their workflow.

Competitive advantage

When you rely on the same canned software as your rival, it is more difficult to outperform them (outperforming doesn't depend entirely on this, but it has an impact). By designing your own software, ideally suited to your specific business operations, you can gain a competitive advantage over your competitors, and that advantage grows as you invest more heavily in your proprietary systems.

As mentioned, deciding whether to buy software or tailor it yourself is entirely up to you. At the end of the day, you're looking for software to help grow your business, so the goal should be measurable ROI. Focus on the ROI and it will help you narrow things down to a conclusion.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Intent-based Networking Systems (IBNS): Machine Learning sprinkles intelligence in networks
Sugandha Lahoti
27 Nov 2017
6 min read
Machine learning is entangling the world in a web of smart, intelligent, and automated algorithms, and the next thing caught in the mesh is networking. The paradigm of network management is set to take a pretty big shift with machine-learning-infused networks. Termed intent-based networking, it is a network management approach focused on improving network availability and agility through automation.

Exploring an intent-based network

Essentially, an intent-based networking system (IBNS), like any other networking solution, helps to plan, plot, execute, and operate networks. One distinguishing feature is that an intent-based network can translate the "intent" of a user, whether that user is an end user or a network administrator, expressed through a series of commands or through APIs. How does it work? A network admin describes the state of a network, or the policy they want a network to follow; the IBNS validates whether such a request is feasible and then executes it.

The second differentiating feature of an IBNS is automated implementation of the network state and policies described above. An IBNS can also control the security level applied to applications based on user role, time, and device conditions. Additionally, the VLANs, mandated firewalls, and other network technologies required within the network can be automated by an IBNS.

An IBNS can monitor a network in real time and take the actions necessary to assess and improve it. The ultimate goal of an IBNS is to ensure that the desired condition of a network is maintained, while making sure automated reparative action is taken in case of any discrepancy.

Why do networks need machine learning to create intent?

Traditional networks cannot be integrated into networking environments beyond a certain number of layers. They require manual integration, which can be a daunting process demanding an advanced skill set.
Traditional network systems are also prone to failure and difficult to augment in capacity, as they involve managing individual devices. This led to a new kind of centrally controlled network system, managed from one place and permeating the whole network. Machine learning was considered a possible contender for aiding the development of such networks, and hence IBNS came to be. The idea of IBNS was proposed long ago, but the requirements for developing such a system were not available. In recent times, advances in ML have made it possible to build a system that automates tasks: the network administrator simply defines the desired state of the network, and the entire network policy is derived from it.

With ML, an IBNS can perform real-time analysis of network conditions, which involves monitoring, identifying, and reacting to real-time data using algorithmic validations.

Why IBNS could be the future of networking

Central control system

An intent-based network uses a central, cloud-based control system to program the routes the network elements take to deliver the required information. The control system can also reprogram them in case of a network path failure.

Automation of network elements

Intent-based networks are all about reducing human effort. A network is built on millions of connected devices, and managing such a huge number of devices requires a lot of manpower. Current networking systems will also have difficulty managing the enormous amount of data exploding from IoT devices. With an IBNS, the human intervention required to build these network elements across services and circuits is reduced considerably. An IBNS provides an automated network management system that mathematically validates the desired state of the network as specified by the network administrators.
Real-time decision making

The network system continuously gathers information about the network architecture while adapting itself to the desired condition. If the network is congested in a particular time frame, it can modify the route, much like a GPS system re-routes a driver who encounters a traffic jam or a closed lane. An IBNS can also gather information about the current status of the network through consistent ingestion of real-time state pointers; the real-time data gathered in turn makes it smarter and more predictive.

Business benefits gained

Intent-based software will open better opportunities for IT employees. With mundane operational tasks eliminated, IT admins can increase their productivity while improving their skill set. For an organization, it means lower cost of operation, fewer errors, better reliability and security, resource management, increased optimization, and multi-vendor device management.

What do we see around the globe in the IBNS domain?

The rising potential of IBNS gives startups and MNCs good reason to invest in this technology. Apstra has launched the Apstra Operating System (AOS), which provides deployment of an IBN system to a variety of customers, including Cisco, Arista, Juniper, and other white-box users. The solution focuses on a single data center; it prevents and repairs network outages to improve infrastructure agility. AOS also gives network admins full transparency to intent, with the ability to ask any question about the network and amend any aspect of the intent after deployment, while allowing complete freedom to customize AOS. Anuta Networks, a pioneer in network service orchestration, has announced its award-winning NCX intent-based orchestration platform to automate network services, which has enabled DevOps for networking.
They provide orchestrated devices from more than 35 industry-leading vendors, and offer a detailed and exhaustive REST API for network integration.

One of the top names in the networking domain, Cisco, has entered the intent-based networking space with its solution, Software Defined Access (SDA). The software makes operational tasks easy to perform and builds powerful networks through automation. It also boasts features such as:

  • Multi-site management
  • Improved visibility with simple topological views
  • Integration with Kubernetes
  • Zero-trust security

With tech innovation taking place all around, networking organizations are racing to break through the clutter by bringing their intent-based networks to market. According to Gartner analyst Andrew Lerner, "Intent-based networking is nascent, but could be the next big thing in networking, as it promises to improve network availability and agility, which are key, as organizations transition to digital business. I&O leaders responsible for networking need to determine if and when to pilot this technology."
Arduino Yún - Welcome to the Internet of Things

Michael Ang
26 Sep 2014
6 min read
Arduino is an open source electronics platform that makes it easy to interface with sensors, lights, motors, and much more with a small standalone board. Arduino Yún combines a standard Arduino micro-controller with a tiny Linux computer, all on the same board! The Arduino micro-controller is perfectly suited to interfacing with hardware like sensors and motors, and the Linux computer makes it easy to get online to the Internet and perform more intensive tasks. The combination really is the best of both worlds. This post will introduce Arduino Yún and give you some ideas of the possibilities that it opens up.

The key to the Yún is that it has two separate processors on the board. The first provides the normal Arduino functions using an ATmega32u4 micro-controller. This processor is perfect for running "low-level" operations like driving timing-sensitive LED light strips or interfacing with sensors. The second processor is an Atheros AR9331 "system on a chip" that is typically used in WiFi access points and routers. The Atheros processor runs a version of Linux derived from OpenWRT and has built-in WiFi that lets it connect to a WiFi network or act as an access point. The Atheros is pretty wimpy by desktop standards (400MHz processor and 64MB RAM) but it has no problem downloading webpages or accessing an SD card, for example—two tasks that would otherwise require extra hardware and be challenging for a standard Arduino board.

One selling point for the Arduino Yún is that the integration between the two processors is quite good, and you program the Yún using the standard Arduino IDE (currently you need the latest beta version). You can program the Yún by connecting it by a USB cable to your computer, but much more exciting is to program it over the air, via WiFi! When you plug the Yún into a USB power supply it will create a WiFi network with a name like "Arduino Yun-90A2DAF3022E". Connect to this network with your computer and you will be connected to the Yún!
You'll be able to access the Yún's configuration page by going to http://arduino.local in your web browser, and you should be able to reprogram the Yún from the Arduino IDE by selecting the network connection in Tools -> Port.

There's a new access point in town

Being able to reprogram the board over WiFi is already worth the price of admission for certain projects. I made a sound-reactive hanging light sculpture, and it was invaluable to adjust and "dial in" the program inside the sculpture while it was hanging in the air. Look ma, no wires!

Programming over the air

The Bridge library for Arduino Yún is used to communicate between the processors. A number of examples using Bridge are provided with the Arduino IDE. With Bridge you can do things like controlling the pins on the Arduino from a webpage. For example, loading http://myArduinoYun.local/arduino/digital/13/1 in your browser could turn on the built-in LED. You can also use Bridge to download web pages, or run custom scripts on the Linux processor. Since the Linux processor is a full-blown computer with an SD card reader and USB, this can be really powerful. For example, you can write a Python script that runs on the Linux processor, and trigger that script from your Arduino sketch.

The Yún is ideally suited for the "Internet of Things". Want to receive an e-mail when your cat comes home? Attach a switch to your pet door and have your Yún e-mail you when it sees the door open. Want to change the color of an LED based on the current weather? Just have the Linux processor download the current weather from Yahoo! Weather and the ATMega micro-controller can handle driving the LEDs. Temboo provides library code and examples for connecting to a large variety of web services.

The Yún doesn't include audio hardware, but because the Linux processor supports USB peripherals, it's easy to attach a low-cost USB sound card.
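To make the URL-based pin control concrete, here is a small illustration (not the actual Bridge implementation) of how a REST-style path such as /arduino/digital/13/1 can be parsed into a pin command; the function name and command format are my own invention:

```python
# Illustrative parser for Bridge-style REST paths of the form
# /arduino/digital/<pin>/<value>, as in the URL example above.
# The real routing happens inside the Yun's Bridge library.

def parse_bridge_url(path):
    """Parse '/arduino/digital/<pin>/<value>' into a command dict."""
    parts = path.strip("/").split("/")
    if len(parts) != 4 or parts[0] != "arduino" or parts[1] != "digital":
        raise ValueError("unsupported path: %s" % path)
    # pin and value arrive as strings in the URL, so convert them
    return {"command": "digitalWrite", "pin": int(parts[2]), "value": int(parts[3])}

print(parse_bridge_url("/arduino/digital/13/1"))
```

On the real board, the Linux side receives the HTTP request and forwards the parsed command over the internal serial link to the ATmega sketch, which then calls `digitalWrite()`.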
This tutorial has the details on adding a sound card and playing an mp3 file in response to a button press. I used this technique for my piece Forward Thinking Sound at the Art Hack Day in Paris that used a Yún to play modem sounds while controlling an LED strip. With only 48 hours to complete a new work from scratch, being able to get an mp3 playing from the Yún in less than an hour was amazing!

Yún with USB sound card, speakers and LED strip. Forward Thinking Sound at Art Hack Day Paris.

The Arduino Yún is a different beast than the Raspberry Pi and BeagleBone Black. Where the other boards are best thought of as small computers (with video output, built-in audio, and so on), the Arduino Yún is best thought of as the combination of an Arduino board and a WiFi router that can run some basic scripts. The Yún is unfortunately quite a bit more expensive than a standard Arduino board, so you may not want to dedicate one to each project. The experience of programming the Yún is generally quite good—the Arduino IDE and Bridge library make it easy to use the Yún as a "regular" Arduino and ease into the network/Linux features as you need them. Once you can program your Arduino over WiFi and connect to the Internet, it's a little hard to go back!

About the author: Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical realms by constructing real-world objects from simple 3D models. He is one of the organizers of Art Hack Day, an event for hackers whose medium is tech and artists whose medium is technology.
Tensorflow: Next Gen Machine Learning

Ariel Scarpinelli
01 Jun 2016
7 min read
Last November, Google open sourced its shiny Machine Intelligence package, promising a simpler way to develop deep learning algorithms that can be deployed anywhere, from your phone to a big cluster, without a hassle. They even take advantage of running over GPUs for better performance.

Let's Give It a Shot!

First things first, let's install it:

    # Ubuntu/Linux 64-bit, CPU only (GPU enabled version requires more deps):
    $ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl

    # Mac OS X, CPU only:
    $ sudo easy_install --upgrade six
    $ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp27-none-any.whl

We are going to play with the well-known iris dataset, where we will train a neural network to take the dimensions of the sepals and petals of an iris plant and classify it between three different types of iris plants: Iris setosa, Iris versicolour, and Iris virginica. You can download the training CSV dataset from here.

Reading the Training Data

Because TensorFlow is prepared for cluster-sized data, it allows you to define an input by feeding it with a queue of filenames to process (think of MapReduce output shards). In our simple case, we are going to just hardcode the path to our only file:

    import tensorflow as tf

    def inputs():
        filename_queue = tf.train.string_input_producer(["iris.data"])

We then need to set up the Reader, which will work with the file contents. In our case, it's a TextLineReader that will produce a tensor for each line of text in the dataset:

        reader = tf.TextLineReader()
        key, value = reader.read(filename_queue)

Then we are going to parse each line into the feature tensor of each sample in the dataset, specifying the data types (in our case, they are all floats except the iris class, which is a string).
        # decode_csv will convert a Tensor from type string (the text line) in
        # a tuple of tensor columns with the specified defaults, which also
        # sets the data type for each column
        sepal_length, sepal_width, petal_length, petal_width, label = tf.decode_csv(
            value, record_defaults=[[0.0], [0.0], [0.0], [0.0], [""]])

        # we could work with each column separately if we want; but here we
        # simply want to process a single feature vector containing all the
        # data for each sample.
        features = tf.pack([sepal_length, sepal_width, petal_length, petal_width])

Finally, in our data file, the samples are actually sorted by iris type. This would lead to bad performance of the model and make it inconvenient for splitting between training and evaluation sets, so we are going to shuffle the data before returning it by using a tensor queue designed for it. All the buffering parameters can be set to 1500 because that is the exact number of samples in the data, so we will store it completely in memory. The batch size also sets the number of rows we pack in a single tensor for applying operations in parallel:

        return tf.train.shuffle_batch([features, label],
                                      batch_size=100, capacity=1500,
                                      min_after_dequeue=100)

Converting the Data

Our label field on the training dataset is a string that holds the three possible values of the Iris class. To make it friendly with the neural network output, we need to convert this data to a three-column vector, one for each class, where the value should be 1 (100% probability) when the sample belongs to that class. This is a typical transformation you may need to do with input data.
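As a generic plain-Python illustration of that one-hot transformation (separate from the TensorFlow version the article uses, and with illustrative names of my own):

```python
# One-hot encode an iris class label into a three-column probability-style
# vector: 1.0 for the matching class, 0.0 for the others.

CLASSES = ["Iris-setosa", "Iris-versicolor", "Iris-virginica"]

def one_hot(label, classes=CLASSES):
    """Map a class label to a vector with a single 1.0 at its class index."""
    return [1.0 if label == c else 0.0 for c in classes]

print(one_hot("Iris-versicolor"))  # -> [0.0, 1.0, 0.0]
```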
    def string_label_as_probability_tensor(label):
        is_setosa = tf.equal(label, ["Iris-setosa"])
        is_versicolor = tf.equal(label, ["Iris-versicolor"])
        is_virginica = tf.equal(label, ["Iris-virginica"])
        return tf.to_float(tf.pack([is_setosa, is_versicolor, is_virginica]))

The Inference Model (Where the Magic Happens)

We are going to use a single neuron network with a Softmax activation function. The variables (learned parameters of our model) will only be the matrix weights applied to the different features for each sample of input data.

    # model: inferred_label = softmax(Wx + b)
    # where x is the features vector of each data example
    W = tf.Variable(tf.zeros([4, 3]))
    b = tf.Variable(tf.zeros([3]))

    def inference(features):
        # we need x as a single column matrix for the multiplication
        x = tf.reshape(features, [1, 4])
        inferred_label = tf.nn.softmax(tf.matmul(x, W) + b)
        return inferred_label

Notice that we left the model parameters as variables outside of the scope of the function. That is because we want to use those same variables both while training and when evaluating and using the model.

Training the Model

We train the model using backpropagation, trying to minimize cross entropy, which is the usual way to train a Softmax network. At a high level, this means that for each data sample, we compare the output of the inference with the real value and calculate the error (how far we are). Then we use the error value to adjust the learning parameters in a way that minimizes that error.

We also have to set the learning factor; it means, for each sample, how much of the computed error we will apply to correct the parameters. There has to be a balance between the learning factor, the number of learning loop cycles, and the number of samples we pack together in the same tensor as a batch; the bigger the batch, the smaller the factor and the higher the number of cycles.
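The softmax activation and the cross-entropy error described above can be written out in plain Python to show the underlying math (an illustrative sketch only; the article computes the same quantities with TensorFlow ops):

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    # subtract the max before exponentiating, for numerical stability
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(true_one_hot, predicted):
    """-sum(y * log(p)): near zero when the true class gets probability
    close to 1, and growing as that probability shrinks."""
    return -sum(y * math.log(p) for y, p in zip(true_one_hot, predicted) if y > 0)

probs = softmax([2.0, 1.0, 0.1])
loss = cross_entropy([1.0, 0.0, 0.0], probs)
print(probs, loss)
```

Minimizing this loss by gradient descent is exactly what the `GradientDescentOptimizer` in the next snippet does, with the 0.001 argument playing the role of the learning factor.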
    def train(features, tensor_label):
        inferred_label = inference(features)
        cross_entropy = -tf.reduce_sum(tensor_label * tf.log(inferred_label))
        train_step = tf.train.GradientDescentOptimizer(0.001).minimize(cross_entropy)
        return train_step

Evaluating the Model

We are going to evaluate our model using accuracy, which is the ratio of cases where our network identifies the right iris class over the total evaluation samples.

    def evaluate(evaluation_features, evaluation_labels):
        inferred_label = inference(evaluation_features)
        correct_prediction = tf.equal(tf.argmax(inferred_label, 1),
                                      tf.argmax(evaluation_labels, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        return accuracy

Running the Model

We are only left to connect our graph and run it in a session, where the defined operations are actually going to use the data. We also split our input data between training and evaluation at around 70%:30%, and run a training loop with it 1,000 times.

    features, label = inputs()
    tensor_label = string_label_as_probability_tensor(label)
    train_step = train(features[0:69, 0:4], tensor_label[0:69, 0:3])
    evaluate_step = evaluate(features[70:99, 0:4], tensor_label[70:99, 0:3])

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())

        # Start populating the filename queue.
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)

        for i in range(1000):
            sess.run(train_step)

        print sess.run(evaluate_step)

        # should print 0 => setosa
        print sess.run(tf.argmax(inference([[5.0, 3.6, 1.4, 0.2]]), 1))
        # should print 1 => versicolor
        print sess.run(tf.argmax(inference([[5.5, 2.4, 3.8, 1.1]]), 1))
        # should print 2 => virginica
        print sess.run(tf.argmax(inference([[6.9, 3.1, 5.1, 2.3]]), 1))

        coord.request_stop()
        coord.join(threads)
        sess.close()

If you run this, it should print an accuracy value close to 1.
This means our network correctly classifies the samples in almost 100% of the cases, and we are also getting the right answers for the manual samples fed to the model.

Conclusion

Our example was very simple, but TensorFlow actually allows you to do much more complicated things with similar ease, such as working with voice recognition and computer vision. It may not look much different than using any other deep learning or math package, but the key is the ability to run the expressed model in parallel. Google is willing to create a mainstream DSL to express data algorithms focused on machine learning, and they may succeed in doing so. For instance, although Google has not yet open sourced the distributed version of the engine, a tool capable of running TensorFlow-modeled graphs directly over an Apache Spark cluster was just presented at the Spark Summit, which shows that the community is interested in expanding its usage.

About the author

Ariel Scarpinelli is a senior Java developer at VirtualMind and a passionate developer with more than 15 years of professional experience. He can be found on Twitter at @triforcexp.