
Tech Guides

Raspberry Pi v Arduino - which one's right for me?

Raka Mahesa
20 Sep 2017
5 min read
Okay, so you've decided to become a maker and you've created a little electronic project for yourself. Maybe an automatic garage door opener, or maybe a simple media server for your home theatre. As you work your way deeper into the DIY world, you realize that you need to decide on the hardware that will be the basis of your project. You've checked the Internet for help and found the two popular hardware choices for DIY projects: the Raspberry Pi and the Arduino.

Since you're just starting out, both choices seem to serve the same purpose. Both can run the program needed for your project, and both have a big community that can help you. So, which hardware should you choose? Before we can make that decision, we need to understand what each platform actually is.

Let's start with the Raspberry Pi. To put it simply, the Raspberry Pi is a computer with a very, very small physical size. Despite its size, the Raspberry Pi is a full-fledged computer capable of running an operating system and executing various programs. By connecting the mini-computer to a screen via an HDMI cable, and to an input device like a keyboard or a mouse, you can use the Raspberry Pi just like any other computer. The latest version even has wireless connectivity built right into the device, making it very easy to connect the hardware to the Internet.

So, what about the Arduino? The Arduino is a microcontroller board: an integrated circuit with a computing chipset capable of running a simple program. If smart devices are run by computer processors, then "dumb devices" are run by microcontrollers. These dumb devices include things like TV remotes, air conditioners, calculators, and other simple appliances.

Now that we've completed our crash course on both platforms, let's actually compare them, starting with the hardware. The Raspberry Pi is a full-blown computer, so it has most of the things you'd expect from a computer system: a quad-core ARM-based CPU running at 1,200 MHz, 1 GB of RAM, a microSD card slot for storage, four USB 2.0 ports, and even a GPU to drive the display output via an HDMI port. The Raspberry Pi is also equipped with a variety of connectors that make it easy to attach other devices such as cameras and touchscreens. Meanwhile, the Arduino is a simple microcontroller board. It has a processor running at 16 MHz, a built-in LED, and a bunch of digital and analog pins for interfacing with other devices. The board also has a USB port that's used to upload a custom program to it.

From the hardware specifications alone, we can see that the two are on totally different levels. The Raspberry Pi's CPU runs at 1,200 MHz, roughly comparable to a low-end smartphone, whereas the Arduino's processor runs at only 16 MHz. This means an Arduino board can only run a simple program, while a Raspberry Pi can handle a much more complex one.

So far it seems that the Raspberry Pi is the better choice for DIY projects. But we all know that a smartphone is also much more limited and slower than a desktop computer, yet no one would say a smartphone is useless. To understand the strength of the Arduino, we need to look at and compare the software running on each platform. Since the Raspberry Pi is a computer, the device requires an operating system to function.
An operating system offers many benefits, like a built-in file system and multitasking, but it also has disadvantages: it needs to be booted up first, and programs require additional configuration to run automatically. An Arduino, on the other hand, runs its own firmware, which executes a custom, user-uploaded program as soon as the device is turned on. The software on an Arduino is much more limited, but that also means using it is simple and straightforward.

This theme of simplicity versus complexity extends to software development for both platforms. Developing software for the Raspberry Pi is as involved as developing any computer software. Meanwhile, the Arduino provides a development tool that allows you to quickly write a program on your desktop computer and easily upload it to the board via a USB cable.

So with all that said, which hardware platform is the right choice? Well, it depends on your project. If your project simply involves reading sensor data and processing it, the simplicity of the Arduino will help the development of your project immensely. If your project involves many tasks and processes, like uploading data to the Internet, sending you emails, or reading image data, the power of the Raspberry Pi will help your project handle all those tasks. And if you're just starting out and haven't really decided on your future project, I'd suggest going with the Arduino. The simplicity and ease of use of an Arduino board make it a great learning tool that lets you focus on making new things instead of struggling to make everything work together.

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


Building Better Bundles: Why process.env.NODE_ENV Matters for Optimized Builds

Mark Erikson
14 Nov 2016
5 min read
JavaScript developers are keenly aware of the need to reduce the size of deployed assets, especially in today's world of single-page apps. This usually means running increasingly complex JavaScript codebases through build steps that produce a minified bundle for deployment. However, if you read a typical tutorial on setting up a build tool like Browserify or Webpack, you'll see numerous references to a variable called process.env.NODE_ENV. Tutorials always talk about how this needs to be set to a value like "production" in order to produce a properly optimized bundle, but most articles never really spell out why this value matters and how it relates to build optimization. Here's an explanation of why process.env.NODE_ENV is used and how it fits into the typical build process.

Operating system environment variables are widely used as a method of configuring applications, especially as a way to activate behavior based on different deployment environments (such as development vs testing vs production). Node.js exposes the current process's environment variables to the script as an object called process.env. From there, the Express web server framework popularized using an environment variable called NODE_ENV as a flag to indicate whether the server should be running in "development" mode vs "production" mode. At runtime, the script looks up that value by checking process.env.NODE_ENV.

Because it was used within the Node ecosystem, browser-focused libraries also started using it to determine what environment they were running in, and using it to control optimizations and debug mode behavior. For example, React uses it as the equivalent of a C preprocessor #ifdef to act as conditional checking for debug logging and perf tracking, roughly like this:

```javascript
function someInternalReactFunction() {
  // do actual work part 1

  if (process.env.NODE_ENV === "development") {
    // do debug-only work, like recording perf stats
  }

  // do actual work part 2
}
```

If process.env.NODE_ENV is set to "production", all those if clauses will evaluate to false, and the potentially expensive debug code won't run. In addition, in conjunction with a tool like UglifyJS that does minification and removal of dead code blocks, a clause that is surrounded with if(process.env.NODE_ENV === "development") will become dead code in a production build and be stripped out, thus reducing bundled code size and execution time.

However, because the NODE_ENV environment variable and the corresponding process.env.NODE_ENV runtime field are normally server-only concepts, by default those values do not exist in client-side code. This is where build tools such as Webpack's DefinePlugin or the Browserify Envify transform come in, which perform search-and-replace operations on the original source code. Since these build tools are doing transformation of your code anyway, they can force the existence of global values such as process.env.NODE_ENV. (It's also important to note that because DefinePlugin in particular does a direct text replacement, the value given to DefinePlugin must include actual quotes inside of the string itself. Typically, this is done either with alternate quotes, such as '"production"', or by using JSON.stringify("production").)

Here's the key: the build tool could set that value to anything, based on any condition that you want, as you're defining your build configuration.
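As a concrete sketch of what that looks like (a minimal config; entry, output, and loader settings are omitted):

```javascript
// webpack.production.config.js -- minimal sketch
const webpack = require("webpack");

module.exports = {
  // ...entry, output, and loaders omitted for brevity...
  plugins: [
    new webpack.DefinePlugin({
      // JSON.stringify ensures the replacement text includes the quotes,
      // so process.env.NODE_ENV is replaced by the literal "production"
      "process.env.NODE_ENV": JSON.stringify("production"),
    }),
  ],
};
```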
For example, I could have a webpack.production.config.js Webpack config file that always uses the DefinePlugin to set that value to "production" throughout the client-side bundle. It wouldn't have to check the actual current value of the "real" process.env.NODE_ENV variable while generating the Webpack config, because as the developer I would know that any time I'm doing a "production" build, I would want to set that value in the client code to "production".

This is where the "code I'm running as part of my build process" and "code I'm outputting from my build process" worlds come together. Because your build script is itself most likely JavaScript code running under Node, it's going to have process.env.NODE_ENV available to it as it runs. Because so many tools and libraries already share the convention of using that field's value to determine their dev-vs-production status, the common convention is to use the current value of that field inside the build script as it's running to also determine the value of that field as applied to the client code being transformed.

Ultimately, it all comes down to a few key points:

- NODE_ENV is a system environment variable that Node exposes to running scripts.
- It's used by convention to determine dev-vs-prod behavior, by server tools, build scripts, and client-side libraries alike.
- It's commonly used inside build scripts (such as Webpack config generation) as both an input value and an output value, but the tie between the two is still just convention.
- Build tools generally run a transform step on the client-side code and replace any references to process.env.NODE_ENV with the desired value. The resulting code will contain dead code blocks, as debug-only code is now inside an if(false)-type condition, ensuring that code doesn't execute at runtime.
- Minifier tools such as UglifyJS will strip out the dead code blocks, leaving the production bundle smaller.

So, the next time you see process.env.NODE_ENV mentioned in a build script, hopefully you'll have a much better idea why it's there.

About the author

Mark Erikson is a software engineer living in southwest Ohio, USA, where he patiently awaits the annual heartbreak from the Reds and the Bengals. Mark is author of the Redux FAQ, maintains the React/Redux Links list and Redux Addons Catalog, and occasionally tweets at @acemarke. He can usually be found in the Reactiflux chat channels, answering questions about React and Redux. He is also slightly disturbed by the number of third-person references he has written in this bio!


Top 5 NoSQL Databases

Akram Hussain
31 Oct 2014
4 min read
NoSQL has seen a sharp rise in both adoption and migration from the tried and tested relational database management systems. The open source world has accepted it with open arms, which wasn't the case with large enterprise organisations that still prefer and require ACID-compliant databases. However, as there are so many NoSQL databases, it's difficult to keep track of them all! Let's explore the most popular ones available to us:

1 - Apache Cassandra

Apache Cassandra is an open source NoSQL database. Cassandra is a distributed database management system that is massively scalable. An advantage of using Cassandra is its ability to manage large amounts of structured, semi-structured, and unstructured data. What makes Cassandra more appealing as a database system is its ability to scale horizontally; it is also one of the few database systems that can process data in real time while delivering high performance and maintaining high availability. The mixture of a column-oriented database with a key-value store means not all rows require a column, but the columns are grouped, which is what makes them look like tables. Cassandra is perfect for 'mission critical' big data projects, as Cassandra offers 'no single point of failure' if a data node goes down.

2 - MongoDB

MongoDB is an open source, schemaless NoSQL database system; its unique appeal is that it's a 'document database' as opposed to a relational database. This basically means it's a 'data dumpster' that's free for all. The added benefit of using MongoDB is that it provides high performance, high availability, and easy scalability (auto-sharding) for large sets of unstructured data in JSON-like documents. MongoDB is the polar opposite of the popular MySQL: MySQL data has to be read in rows and columns, which has its own set of benefits with smaller sets of data.

3 - Neo4j

Neo4j is an open source NoSQL 'graph-based database'. Neo4j is the frontrunner of the graph-based model. As a graph database, it manages and queries highly connected data reliably and efficiently. It allows developers to store data more naturally from domains such as social networks and recommendation engines. The data collected from sites and applications is initially stored in nodes, which are then represented as graphs.

4 - Hadoop

Hadoop is easy to overlook as a NoSQL database because of its ecosystem of big data tools. It is a framework for distributed data storage and processing, designed to handle huge amounts of data while limiting financial and processing-time overheads. Hadoop includes a database known as HBase, which runs on top of HDFS and is a distributed, column-oriented data store. HBase is best described as a distributed storage system for Hadoop, on top of which analytics can be run using MapReduce V2, also known as YARN.

5 - OrientDB

OrientDB has been included as a wildcard! It's a very interesting database and one that has everything going for it, but has always been in the shadow of Neo4j. OrientDB is an open source NoSQL hybrid graph-document database that was developed to combine the flexibility of a document database with the complexity of a graph database (Mongo and Neo4j all in one!). With the growth of complex and unstructured data (such as social media), relational databases were not able to handle the demands of storing and querying this type of data. Document databases were developed as one solution, and visualizing data through nodes was another.
OrientDB has combined both into one, which sounds awesome in theory but might be very different in practice! Whether the hybrid approach works and is widely adopted remains to be seen.
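To make the document-versus-graph distinction concrete, here is a tiny illustrative sketch (hypothetical data; the document half uses the MongoDB shell's JavaScript syntax, and the graph half models nodes and edges as plain objects):

```javascript
// A hypothetical "user follows user" record, modeled two ways.

// 1. Document style (MongoDB shell syntax): everything nested inside one
//    JSON-like document, stored and retrieved as a unit.
db.users.insertOne({
  name: "Alice",
  follows: ["Bob", "Carol"], // relationships embedded in the document
});

// 2. Graph style: entities are nodes and relationships are first-class
//    edges, which makes multi-hop queries ("friends of friends") natural.
const nodes = [{ id: 1, name: "Alice" }, { id: 2, name: "Bob" }];
const edges = [{ from: 1, to: 2, type: "FOLLOWS" }];
```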


AI to the rescue: 5 ways machine learning can assist during emergency situations

Sugandha Lahoti
16 Jan 2018
9 min read
In the wee hours of January 4 this year, over 9.8 million people experienced a magnitude 4.4 earthquake that rumbled across the San Francisco Bay Area. This was followed by a magnitude 7.6 earthquake in the Caribbean Sea on January 9, after which a tsunami advisory was put into effect for Puerto Rico and the U.S. and British Virgin Islands. In the past six months, the United States alone has witnessed four back-to-back storms from one brutal hurricane season, and a massive wildfire with almost 2 million acres of land ablaze.

Natural disasters across the globe are becoming more damaging and more frequent. Since 1970, the number of disasters worldwide has more than quadrupled, to around 400 a year. This series of natural disasters has strained emergency services and disaster relief operations beyond capacity. We now need to look for new ways to assist affected people and automate the recovery process. Artificial intelligence and machine learning have advanced to the point where they are highly proficient at making predictions and at identification and classification tasks. These capabilities can also be applied to prevent disasters or to respond quickly in case of an emergency. Here are five ways AI can lend a helping hand during emergency situations.

1. Machine Learning for targeted disaster relief management

In case of any disaster, the first step is to formulate a critical response team to help those in distress. Before the team goes into action, it is important to analyze and assess the extent of the damage and to ensure that the right aid goes first to those who need it most. AI techniques such as image recognition and classification can be quite helpful in assessing damage, as they can analyze images from satellites. They can immediately and efficiently filter these images, which would otherwise take months to sort manually. AI can identify objects and features such as damaged buildings, flooding, and blocked roads in these images. It can also identify temporary settlements, which may indicate that people are homeless, so that care can be directed to them first.

Artificial intelligence and machine learning tools can also aggregate and crunch data from multiple sources, such as crowd-sourced mapping materials or Google Maps. Machine learning approaches then combine all this data, remove unreliable entries, and identify informative sources to generate heat maps. These heat maps can identify areas in need of urgent assistance and direct relief efforts there. Heat maps are also helpful for governments and humanitarian agencies in deciding where to conduct aerial assessments.

DigitalGlobe provides space imagery and geospatial content. Their Open Data Program is a special program for disaster response. The software learns to recognize buildings in satellite photos by learning from the crowd. DigitalGlobe releases pre- and post-event imagery for select natural disasters each year, and their crowdsourcing platform, Tomnod, prioritizes micro-tasking to accelerate damage assessments. Following the Nepal earthquakes in 2015, Rescue Global and academics from the Orchid Project used machine learning to carry out rescue activities. They took pre- and post-disaster imagery and used crowd-sourced data analysis and machine learning to identify locations affected by the quakes that had not yet been assessed or received aid.
This information was then shared with relief workforces to facilitate their activities.

2. Next Generation 911

911 is the first point of contact during any emergency. 911 dispatch centers are already overloaded with calls on a regular day; in case of a disaster or calamity, call volume quadruples, or worse. This calls for augmenting traditional 911 emergency centers with newer technologies for better management. Traditional 911 centers rely on voice calls alone. Next-gen dispatch services are upgrading their emergency dispatch technology with machine learning to receive more types of data. They can now ingest not just calls but also text, video, audio, and pictures, and analyze them to make quick assessments. The insights gained from all this information can be passed on to the emergency response teams in the field to efficiently carry out critical tasks.

The Association of Public-Safety Communications Officials (APCO) has employed IBM's Watson to listen to 911 calls. This initiative helps emergency call centers improve operations and public safety by using Watson's speech-to-text and analytics programs. Using Watson's speech-to-text function, the context of each call is fed into the AI's analytics program, allowing improvements in how call centers respond to emergencies. It also helps reduce call times, provide accurate information, and accelerate time-sensitive emergency services.

3. Sentiment analysis on social media data for disaster management and recovery

Social media channels are a major source of news these days, and some of the most actionable information during a disaster comes from social media users. Real-time images and comments from Facebook, Twitter, Instagram, and YouTube can be analyzed and validated by AI to separate real information from fake. These vital signals can help on-the-ground aid workers reach the point of crisis sooner and direct their efforts to those in need. This data can also help rescue workers reduce the time needed to find victims. In addition, AI and predictive analytics software can analyze digital content from Twitter, Facebook, and YouTube to provide early warnings, ground-level location data, and real-time report verification. In fact, AI could also be used to examine unstructured data and the backgrounds of pictures and videos posted to social channels, and compare them to find missing people.

AI-powered chatbots can help residents affected by a calamity. A chatbot can interact with the victim, or other citizens in the vicinity, via popular social media channels and ask them to upload information such as a location, a photo, and a description. The AI can then validate and cross-check this information against other sources and pass the relevant details to the disaster relief committee. This type of information assists them in assessing damage in real time and prioritizing response efforts.

AI for Digital Response (AIDR) is a free and open platform that uses machine intelligence to automatically filter and classify social media messages related to emergencies, disasters, and humanitarian crises. For this, it uses a Collector and a Tagger. The Collector helps in collecting and filtering tweets using keywords and hashtags such as "cyclone" and "#Irma"; it works as a word filter. The Tagger is a topic filter that classifies tweets by topics of interest, such as "Infrastructure Damage" and "Donations". The Tagger automatically applies the classifier to incoming tweets collected in real time by the Collector.
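As a toy illustration of this Collector/Tagger pipeline (a hypothetical sketch; AIDR's real system uses trained classifiers learned from human labels, not hard-coded rules like these):

```javascript
// Hypothetical, heavily simplified AIDR-style pipeline.

// Collector: a keyword/hashtag filter over an incoming tweet stream.
const KEYWORDS = ["cyclone", "#irma", "flood"];
const collect = (tweet) =>
  KEYWORDS.some((k) => tweet.text.toLowerCase().includes(k));

// Tagger: a topic filter; a real system would use a trained classifier.
const tag = (tweet) => {
  if (/bridge|road|building|collaps/i.test(tweet.text)) return "Infrastructure Damage";
  if (/donat|volunteer|supplies/i.test(tweet.text)) return "Donations";
  return "Other";
};

const incoming = [{ text: "Bridge collapsed after the cyclone hit" }];
incoming.filter(collect).forEach((t) => console.log(tag(t), "->", t.text));
// -> "Infrastructure Damage -> Bridge collapsed after the cyclone hit"
```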
4. AI answers distress and help calls

Emergency relief services are flooded with distress and help calls in the event of any emergency. Managing such a huge volume of calls manually is time-consuming and expensive, and critical information can get lost or go unnoticed. In such cases, AI can work as a 24/7 dispatcher. AI systems and voice assistants can analyze massive numbers of calls, determine what type of incident occurred, and verify the location. They can not only interact with callers naturally and process those calls, but also instantly transcribe and translate languages. AI systems can analyze the tone of a caller's voice for urgency, filtering redundant or less urgent calls and prioritizing them based on the emergency.

Blueworx is a powerful IVR platform that uses AI to replace call center agents. Using AI technology is especially useful when unexpected events such as natural disasters drive up call volume. Its AI engine is well suited to responding to emergency calls because, unlike a call center agent, it can know who a customer is even before they call. It also provides intelligent call routing, proactive outbound notifications, unified messaging, and interactive voice response.

5. Predictive analytics for proactive disaster management

Machine learning and other data science approaches are not limited to assisting on-the-ground relief teams after an emergency has happened. Approaches such as predictive analytics can analyze past events to identify and extract patterns, and to find populations vulnerable to natural calamities. A large number of supervised and unsupervised learning approaches are used to identify at-risk areas and improve predictions of future events. For instance, clustering algorithms can classify disaster data by severity, and can distinguish climatic patterns that may cause local storms from the cloud conditions that may lead to a widespread cyclone. Predictive machine learning models can also help officials distribute supplies to where people are going, rather than where they were, by analyzing real-time behavior and movement. In addition, predictive analytics techniques can provide insight into the economic and human impact of natural calamities. Artificial neural networks take in information such as region, country, and disaster type to predict the potential monetary impact of natural disasters.

Recent advances in cloud technologies and numerous open source tools have enabled predictive analytics with almost no initial infrastructure investment, so agencies with limited resources can also build systems based on data science and develop more sophisticated models to analyze disasters. Optima Predict, a suite of software by Intermedix, collects and reads information about disasters such as viral outbreaks or criminal activity in real time. The software spots geographical clusters of reported incidents before humans notice the trend and then alerts key officials. The data can also be synced with FirstWatch, an online dashboard for EMS (Emergency Medical Services) personnel.

Thanks to the multiple benefits of AI, government agencies and NGOs can start utilizing machine learning to deal with disasters.
As AI and allied fields like robotics develop and expand further, we may see fleets of drones equipped with sophisticated machine learning. These advanced drones could expedite access to real-time information at disaster sites using video capture and also deliver lightweight physical goods to hard-to-reach areas. As with every maturing technology, AI will continue to build on its existing capabilities. It has the potential to eliminate outages before they are detected and to give disaster response leaders an informed, clearer picture of the disaster area, ultimately saving lives.


Python Data Visualization myths you should know about

Savia Lobo
02 Nov 2018
4 min read
In recent years, we have experienced exponential growth in data. As the amount of data grows, the need for developers with knowledge of data analytics, and especially data visualization, spikes. Data visualizations help in getting a clear and concise view of data, making it more tangible for (non-technical) audiences. MATLAB and R are the two languages that have traditionally been used for data science and data visualization. However, Python is the most requested and used language in the industry. Its ease of use, the speed at which you can manipulate and visualize data, and the number of available libraries make Python the best choice.

So data visualization seems easy, doesn't it? However, there are a lot of myths surrounding it. Let's have a look at some of them.

Myth 1: Data visualizations are just for data scientists

Today's data visualization libraries are very convenient, so any person can create meaningful visualizations in just a few minutes.

Myth 2: Data visualization technologies are difficult to learn

Of course, building and designing sophisticated data visualizations will take some work and learning, but with very little knowledge of the libraries and what they are capable of, you can create simple visualizations that will help you get valuable insights into your data. Python is a comparatively easy language. The "pythonic" approach is also used when building visualization libraries for Python, which makes them easy to understand and use.

Myth 3: Data visualization isn't needed for data insights

Imagine having a table of data with 20 columns and several thousand rows. What do you think will give you better insight into this data: just looking at the table and trying to make sense of all the columns and values, or creating some simple plots that visualize its content? Of course, you could force yourself to get insights without visualizations, but the key is to work smarter, not harder.

Myth 4: Data visualization takes a lot of time

If you have a basic understanding of your data, you can create some basic visualizations in no time. There are a lot of libraries, covered in this course, that allow you to simply import some data and build visualizations in a few lines of code. The more difficult part is creating visualizations that are descriptive and display the concepts you want to show, but don't worry, this is discussed in the course in detail as well.

Amidst all the myths, data visualization in combination with Python is an essential skill when working with data. When properly utilized, it is a powerful combination that not only enables you to get better insights into your data but also gives you a tool to communicate results better. Head over to our course 'Data Visualization with Python' to use Python with NumPy, Pandas, Matplotlib, and Seaborn to create impactful data visualizations with real-world, public data.

About Tim and Mario

Tim Großmann is a CS student with interest in diverse topics ranging from AI to IoT. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of big data engineering. He's highly involved in different open source projects and actively speaks at meetups and conferences about his projects and experiences. Mario Döbler is a graduate student with a focus on deep learning and AI. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of deep learning.
Currently, he dedicates himself to applying deep learning to medical data to make health care accessible to everyone.

Read next:

4 tips for learning Data Visualization with Python
Setting up Apache Druid in Hadoop for Data visualizations [Tutorial]
8 ways to improve your data visualizations


How IoT is going to change tech teams

Raka Mahesa
31 Jan 2018
5 min read
The Internet of Things is going to transform the way we live in the future. It will change how we commute, how we work, and even simple day-to-day activities. But one thing that's often overlooked when we talk about the Internet of Things is how it will impact IT teams. We've seen a lot of change in the shape of the modern IT team over the last 10 years thanks to things like DevOps, but IoT is going to shape things further in the near future. To better understand how the Internet of Things will shape IT teams, we first need to understand its applications, especially in the sector closest to IT teams: the enterprise sector.

IoT in the enterprise sector

If you look at consumer media, the most common applications of the Internet of Things are small-scale ones like smart gadgets and smart home systems. Unfortunately, this class of IoT products hasn't really caught on with mainstream consumers; its audience is limited to hobbyists and people in the tech industry. However, it's a whole different story in the enterprise sector, because companies all over the world are starting to realize the benefits of applying IoT to their line of business.

Different industries have different applications for IoT. Usually, though, IoT is used to either increase efficiency or reduce cost. For example, a shipping service may install a monitoring system on its vehicles to track speed and mileage and find ways to reduce fuel usage. Similarly, an airline could fit sensors on its fleet of airplanes to monitor engine conditions and maintain them properly. A company may also apply IoT to manage its energy consumption and cut unneeded expenses.

What new skills does IoT demand of tech pros?

All of these applications of IoT are going to require new skills and maybe even new job roles. So while we'll see efficiencies thanks to these innovations, really making an impact is still going to require both personal and organizational investment in skills and knowledge to ensure IoT is driving positive change.

IoT and the second data explosion

Let's start with the most obvious change: the growth of data. Yes, the big data explosion has been happening all around us for the last decade, but IoT is bringing with it a second explosion that will be even bigger. This means everyone is going to have to become more data-savvy. That's not to say that everyone will need to moonlight as a data scientist, but they will need an awareness of how data is stored and processed, who needs access to it, and who needs to act on it.

Device management will become more important than ever

IoT isn't just about data. It's also about devices. With more gadgets and sensors connected to a given network, device management and maintenance will be an essential part of the IT team's work. To tackle this problem, the team will need to grow to handle more work, or it will need a more powerful device management tool that can handle a large number of connected devices.

New security risks presented by IoT

An increase in the number of connected devices also presents increased security risks. This means the pressure will be on IT departments to tighten up security. Managing networks is one part of that, but a further challenge will be managing the human side of security: ensuring good practice is followed by staff and taking steps to minimize social engineering threats.
IT teams will have to customize IoT solutions to meet their needs

IoT doesn't yet have many standards. That means today's organizations face both opportunities and challenges in how they customize solutions and tools for their own needs. This can be daunting, but for people working in IT teams it's also really exciting: it gives them more control and ownership of the work they are doing. Third-party solutions will no doubt remain, but they won't be quite so important when it comes to IoT. True, companies like IBM are working on IoT solutions right now to capture the market; however, because these innovations are in their infancy, there's a limit on traditional technology corporations' ability to shape and define the IoT landscape the way they have with innovations in the past.

And that's just a small part of how the Internet of Things will affect the IT team. When IoT takes off, it will change our lives in the most unimaginable ways, so of course there will be even more changes for the IT teams in charge of it. But then again, the world of technology is rife with change and disruption, so I'm sure we're all used to that and will be able to adapt.

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work, he enjoys working on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets @legacy99.

AI for game developers: 7 ways AI can take your game to the next level

Kunal Chaudhari
25 May 2018
21 min read
Artificial Intelligence (AI) is a rich and complex topic. At first glance, it can seem intimidating. Its uses are diverse, ranging from robotics to statistics to (more relevantly for us) entertainment, and more specifically, video games. In this article, we will get to know the fundamentals of artificial intelligence and why it is important for game developers. We also cover a quick background on AI in academia, traditional domains, and game-specific applications. We will also look at the following:

- How the application and implementation of AI in games differs from other domains
- Special requirements for AI in games
- Basic AI patterns used in games

This article is an excerpt from a book written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe titled Unity 2017 Game AI Programming - Third Edition. This book will help you create fun and unbelievable AI entities in your games with A*, fuzzy logic, and NavMesh in Unity 2017.

Leveling up your game with AI

AI in games dates back to the earliest games, even as far back as Namco's arcade hit Pac-Man. The AI was rudimentary at best, but even in Pac-Man, each of the enemies (Blinky, Pinky, Inky, and Clyde) had unique behaviors that challenged the player in different ways. Learning those behaviors and reacting to them adds a huge amount of depth to the game and keeps players coming back, even over 30 years after its release. It's the job of a good game designer to make the game challenging enough to be engaging, but not so difficult that a player can never win. To this end, AI is a fantastic tool that can help abstract the patterns that entities in games follow to make them seem more organic, alive, and real. Much like an animator through each frame or an artist through his brush, a designer or programmer can breathe life into their creations via clever use of the AI techniques covered in this article.

The role of AI in games is to make games fun by providing challenging entities to compete with, and interesting non-player characters (NPCs) that behave realistically inside the game world. The objective here is not to replicate the whole thought process of humans or animals, but merely to sell the illusion of life and make NPCs seem intelligent by having them react to the changing situations inside the game world in a way that makes sense to the player.

Technology allows us to design and create intricate patterns and behaviors, but we're not yet at the point where AI in games even begins to resemble true human behavior. While smaller, more powerful chips, buckets of memory, and even distributed computing have given programmers a much higher computational ceiling to dedicate to AI, at the end of the day, resources are still shared with other operations such as graphics rendering, physics simulation, audio processing, and animation, all in real time. All these systems have to play nice with each other to achieve a steady frame rate throughout the game. Like all the other disciplines in game development, optimizing AI calculations remains a huge challenge for AI developers.

Using AI in Unity

In this section, we'll walk you through the AI techniques used in different types of games. Unity is a flexible engine that provides a number of approaches to implement AI patterns. Some are ready to go out of the box, so to speak, while others we'll have to build from scratch. We'll focus on implementing the most essential AI patterns within Unity so that you can get your game's AI entities up and running quickly.
Learning and implementing the techniques within this article will serve as a fundamental first step into the vast world of AI. Many of the concepts we will cover, such as pathfinding and Navigation Meshes, are interconnected and build on top of one another. For this reason, it's important to get the fundamentals right before digging into the high-level APIs that Unity provides.

Defining the agent

Before jumping into our first technique, we should be clear on a key term: the agent. An agent, as it relates to AI, is our artificially intelligent entity. When we talk about our AI, we're not specifically referring to a character, but to an entity that displays complex behavior patterns, which we can refer to as non-random, or in other words, intelligent. This entity can be a character, creature, vehicle, or anything else. The agent is the autonomous entity executing the patterns and behaviors we'll be covering. With that out of the way, let's jump in.

Finite State Machines

Finite State Machines (FSMs) can be considered one of the simplest AI models, and they are commonly used in games. A state machine basically consists of a set number of states that are connected in a graph by the transitions between them. A game entity starts with an initial state and then looks out for the events and rules that will trigger a transition to another state. A game entity can only be in exactly one state at any given time. For example, let's take a look at an AI guard character in a typical shooting game. Its states could be as simple as patrolling, chasing, and shooting. There are basically four components in a simple FSM:

- States: This component defines a set of distinct states that a game entity or an NPC can choose from (patrol, chase, and shoot)
- Transitions: This component defines relations between different states
- Rules: This component is used to trigger a state transition (player in sight, close enough to attack, and lost/killed player)
- Events: This is the component that triggers the checking of rules (the guard's visible area, distance to the player, and so on)

FSMs are a commonly used go-to AI pattern in game development because they are relatively easy to implement, visualize, and understand. Using simple if/else or switch statements, we can easily implement an FSM, though it can get messy as we start to add more states and more transitions.
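As a minimal sketch of the guard example (shown here in plain JavaScript rather than Unity's C# for brevity; canSeePlayer and closeEnoughToAttack are made-up stand-ins for real line-of-sight and distance checks):

```javascript
// Hypothetical guard FSM: patrol -> chase -> shoot, driven by simple rules.
const GuardState = { PATROL: "patrol", CHASE: "chase", SHOOT: "shoot" };

const canSeePlayer = (g) => g.distanceToPlayer < 20;       // stub sensor check
const closeEnoughToAttack = (g) => g.distanceToPlayer < 5; // stub sensor check

function updateGuard(guard) {
  switch (guard.state) {
    case GuardState.PATROL:
      if (canSeePlayer(guard)) guard.state = GuardState.CHASE;
      break;
    case GuardState.CHASE:
      if (!canSeePlayer(guard)) guard.state = GuardState.PATROL; // lost player
      else if (closeEnoughToAttack(guard)) guard.state = GuardState.SHOOT;
      break;
    case GuardState.SHOOT:
      if (!closeEnoughToAttack(guard)) guard.state = GuardState.CHASE;
      break;
  }
}

const guard = { state: GuardState.PATROL, distanceToPlayer: 15 };
updateGuard(guard);
console.log(guard.state); // -> "chase"
```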
Seeing the world through our agent's eyes

In order to make our AI convincing, our agent needs to be able to respond to the events around it: the environment, the player, and even other agents. Much like real living organisms, our agent can rely on sight, sound, and other "physical" stimuli. However, we have the advantage of being able to access much more data within our game than a real organism can from its surroundings, such as the player's location (regardless of whether or not they are in the vicinity), their inventory, the location of items around the world, and any variable you choose to expose to that agent in your code. In the accompanying diagram, the agent's field of vision is represented by a cone in front of it, and its hearing range by a grey circle surrounding it.

Vision, sound, and other senses can be thought of, at their most essential level, as data. Vision is just light particles, sound is just vibrations, and so on. While we don't need to replicate the complexity of a constant stream of light particles bouncing around and entering our agent's eyes, we can still model the data in a way that produces believable results. As you might imagine, we can similarly model other sensory systems, and not just the ones used by biological beings such as sight, sound, or smell, but even digital and mechanical systems used by enemy robots or towers, for example sonar and radar. If you've ever played Metal Gear Solid, then you've definitely seen these concepts in action: an enemy's field of vision is denoted on the player's mini map as a cone-shaped field of view. Enter the cone and an exclamation mark appears over the enemy's head, followed by an unmistakable chime, letting the player know that they've been spotted.

Path following and steering

Sometimes, we want our AI characters to roam around in the game world, following a roughly guided or thoroughly defined path. For example, in a racing game, the AI opponents need to navigate the road. In an RTS game, your units need to be able to get from wherever they are to the location you tell them, navigating through the terrain and around each other. To appear intelligent, our agents need to be able to determine where they are going and, if they can reach that point, they should be able to route the most efficient path and modify that path if an obstacle appears as they navigate. Even path following and steering can be represented via a finite state machine; you will then see how these systems begin to tie in. In this article, we will cover the primary methods of pathfinding and navigation, starting with our own implementation of an A* Pathfinding System, followed by an overview of Unity's built-in Navigation Mesh (NavMesh) feature.

Dijkstra's algorithm

While perhaps not quite as popular as A* Pathfinding (which we will cover next), it's crucial to understand Dijkstra's algorithm, as it lays the foundation for other similar approaches to finding the shortest path between two nodes in a graph. The algorithm was published by Edsger W. Dijkstra in 1959. Dijkstra was a computer scientist, and though he may be best known for his namesake algorithm, he also had a hand in developing other important computing concepts, such as the semaphore. It might be fair to say Dijkstra probably didn't have StarCraft in mind when developing his algorithm, but the concepts translate beautifully to game AI programming and remain relevant to this day.

So what does the algorithm actually do? In a nutshell, it computes the shortest path between two nodes along a graph by assigning a value to each connected node based on distance. The starting node is given a value of zero. As the algorithm traverses through a list of connected nodes that have not been visited, it calculates the distance to each and assigns it to that node. If the node had already been assigned a value in a prior iteration of the loop, it keeps the smallest value. The algorithm then selects the connected node with the smallest distance value and marks the previously selected node as visited, so it will no longer be considered. The process repeats until all nodes have been visited. With this information, you can then calculate the shortest path. Need help wrapping your head around Dijkstra's algorithm? The University of San Francisco has created a handy visualization tool: https://www.cs.usfca.edu/~galles/visualization/Dijkstra.html.
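Here is a compact sketch of the algorithm as just described, in plain JavaScript (a linear scan picks the cheapest unvisited node for brevity; production code would use a priority queue):

```javascript
// Dijkstra's shortest distances over an adjacency list: { node: { neighbor: cost } }.
function dijkstra(graph, start) {
  const dist = {};
  const visited = new Set();
  for (const node of Object.keys(graph)) dist[node] = Infinity;
  dist[start] = 0; // the starting node is given a value of zero

  while (visited.size < Object.keys(graph).length) {
    // select the unvisited node with the smallest known distance
    let current = null;
    for (const node of Object.keys(graph)) {
      if (!visited.has(node) && (current === null || dist[node] < dist[current])) {
        current = node;
      }
    }
    if (dist[current] === Infinity) break; // remaining nodes are unreachable
    visited.add(current);

    // relax each neighbor: keep the smaller of the old and new distances
    for (const [neighbor, cost] of Object.entries(graph[current])) {
      dist[neighbor] = Math.min(dist[neighbor], dist[current] + cost);
    }
  }
  return dist;
}

// Example: shortest distances from node "A"
console.log(dijkstra({ A: { B: 1, C: 4 }, B: { C: 2 }, C: {} }, "A"));
// -> { A: 0, B: 1, C: 3 }
```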
While Dijkstra's algorithm is perfectly capable, variants of it have been developed that can solve the problem more efficiently. A* is one such algorithm, and it's one of the most widely used pathfinding algorithms in games due to its speed advantage over Dijkstra's original version.

Using A* Pathfinding

There are many games in which you can find monsters or enemies that follow the player, or go to a particular point while avoiding obstacles. For example, let's take a typical RTS game. You can select a group of units and click on a location you want them to move to, or click on the enemy units to attack them. Your units then need to find a way to reach the goal without colliding with the obstacles, or avoiding them as intelligently as possible. The enemy units also need to be able to do the same. Obstacles can differ between units: for example, an air force unit might be able to pass over a mountain, while ground or artillery units need to find a way around it.

A* (pronounced "A star") is a pathfinding algorithm that is widely used in games because of its performance and accuracy. Let's take a look at an example to see how it works. Say we want our unit to move from point A to point B, but there's a wall in the way and it can't go straight towards the target. So, it needs to find a way to get to point B while avoiding the wall.

In order to find the path from point A to point B, we need to know more about the map, such as the position of the obstacles. To do this, we can split the whole map into small tiles, representing it in a grid format. The tiles can also be other shapes, such as hexagons or triangles. Representing the map as a grid simplifies the search area, which is an important step in pathfinding. We can now reference our map in a small 2D array.

Once our map is represented by a set of tiles, we can start searching for the best path to the target by calculating the movement score of each tile adjacent to the starting tile (a tile on the map not occupied by an obstacle), and then choosing the tile with the lowest cost. A* Pathfinding calculates the cost to move across the tiles. A* is an important pattern to know when it comes to pathfinding, but Unity also gives us a couple of features right out of the box, such as automatic Navigation Mesh generation and the NavMesh agent. These features make implementing pathfinding in your games a walk in the park (no pun intended). Whether you choose to implement your own A* solution or simply go with Unity's built-in NavMesh feature will depend on your project's needs. Each option has its own pros and cons, but ultimately, knowing about both will allow you to make the best possible choice. With that said, let's have a quick look at NavMesh.

IDA* Pathfinding

IDA* stands for iterative deepening A*. It is a depth-first permutation of A* with a lower overall memory cost, but it is generally considered costlier in terms of time. Whereas A* keeps multiple nodes in memory at a time, IDA* does not, since it is a depth-first search. For this reason, IDA* may visit the same node multiple times, leading to a higher time cost. Either solution will give you the shortest path between two nodes. In instances where the graph is too big for A* in terms of memory, IDA* is preferable, but it is generally accepted that A* is good enough for most use cases in games.
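Before moving on to NavMesh, here is a minimal sketch of grid-based A* in plain JavaScript (my own simplified illustration, not the book's implementation), assuming a 2D array where 0 is walkable and 1 is a wall, 4-way movement, and a Manhattan-distance heuristic:

```javascript
// Minimal grid A*: returns the length of the shortest path, or -1 if none.
// grid[y][x]: 0 = walkable tile, 1 = obstacle. Each step costs 1.
function aStar(grid, start, goal) {
  const h = (p) => Math.abs(p.x - goal.x) + Math.abs(p.y - goal.y); // heuristic
  const key = (p) => `${p.x},${p.y}`;
  const open = [{ ...start, g: 0, f: h(start) }];
  const closed = new Set();

  while (open.length > 0) {
    // pick the open node with the lowest f = g + h (linear scan for brevity)
    open.sort((a, b) => a.f - b.f);
    const current = open.shift();
    if (current.x === goal.x && current.y === goal.y) return current.g;
    closed.add(key(current));

    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const next = { x: current.x + dx, y: current.y + dy };
      if (
        grid[next.y] === undefined || grid[next.y][next.x] !== 0 ||
        closed.has(key(next))
      ) continue;
      open.push({ ...next, g: current.g + 1, f: current.g + 1 + h(next) });
    }
  }
  return -1; // no path exists
}

// A 3x3 map with a wall between A (top-left) and B (top-right).
const map = [
  [0, 1, 0],
  [0, 1, 0],
  [0, 0, 0],
];
console.log(aStar(map, { x: 0, y: 0 }, { x: 2, y: 0 })); // -> 6
```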
Using Navigation Mesh

Now that we've taken a brief look at A*, let's look at some possible scenarios where we might find NavMesh a fitting approach for calculating the grid. One thing you might notice is that using a simple grid in A* requires quite a number of computations to get a path that is the shortest to the target while avoiding the obstacles. To make it cheaper and easier for AI characters to find a path, people came up with the idea of using waypoints as a guide to move AI characters from the start point to the target point. Say we want to move our AI character from point A to point B and we've set up three waypoints. All we have to do now is pick the nearest waypoint and then follow its connected nodes leading to the target waypoint.

Most games use waypoints for pathfinding because they are simple and quite effective in terms of computational resources. However, they do have some issues. What if we want to update the obstacles in our map? We'll have to place the waypoints again for the updated map. Having to manually alter waypoints every time the layout of your level changes can be cumbersome and very time-consuming. In addition, following each node to the target means the AI character moves in a series of straight lines from node to node, so where the path runs close to a wall, it's quite likely that the AI character will collide with that wall. If that happens, our AI will keep trying to go through the wall to reach the next target, but it won't be able to and will get stuck there. Even though we can smooth out the path by transforming it into a spline and making some adjustments to avoid such obstacles, the problem is that the waypoints don't give us any information about the environment, other than the spline connecting two nodes. What if our smoothed and adjusted path passes the edge of a cliff or bridge? The new path might not be safe anymore. So, for our AI entities to be able to effectively traverse the whole level, we're going to need a tremendous number of waypoints, which will be really hard to implement and manage.

This is a situation where a NavMesh makes the most sense. A NavMesh is another graph structure that can be used to represent our world, similar to the way we did with our square tile-based grid or waypoints graph. A Navigation Mesh uses convex polygons to represent the areas in the map that an AI entity can travel to. The most important benefit of using a Navigation Mesh is that it gives a lot more information about the environment than a waypoint system. Now we can adjust our path safely because we know the safe region in which our AI entities can travel. Another advantage of using a Navigation Mesh is that we can use the same mesh for different types of AI entities. Different AI entities can have different properties such as size, speed, and movement abilities. A set of waypoints tailored for human characters may not work nicely for flying creatures or AI-controlled vehicles; these might need different sets of waypoints. Using a Navigation Mesh can save a lot of time in such cases.

Generating a Navigation Mesh programmatically based on a scene can be a somewhat complicated process. Fortunately, Unity 3.5 introduced a built-in Navigation Mesh generator as a pro-only feature, which is now included for free from the Unity 5 personal edition onwards.
Unity's implementation provides a lot of additional functionality out of the box: not just the generation of the NavMesh itself, but agent collision and pathfinding on the generated graph (via A*, of course) as well.

Flocking and crowd dynamics

In nature, we can observe what we refer to as flocking behavior in several species. Flocking simply refers to a group moving in unison. Schools of fish, flocks of sheep, and cicada swarms are fantastic examples of this behavior. Modeling this behavior using manual means, such as animation, can be very time-consuming and is not very dynamic. Similarly, crowds of humans, be it on foot or in vehicles, can be modeled by representing the entire crowd as an entity rather than trying to model each individual as its own agent. Each individual in the group only really needs to know where the group is heading and what their nearest neighbor is up to in order to function as part of the system.

Behavior trees

The behavior tree is another pattern used to represent and control the logic behind AI agents. Behavior trees have become popular through their application in AAA games such as Halo and Spore. Previously, we briefly covered FSMs. They provide a very simple yet efficient way to define the possible behaviors of an agent, based on the different states and transitions between them. However, FSMs are considered difficult to scale, as they can get unwieldy fairly quickly and require a fair amount of manual setup. We need to add many states and hardwire many transitions in order to support all the scenarios we want our agent to consider. So, we need a more scalable approach when dealing with large problems. This is where behavior trees come in.

Behavior trees are a collection of nodes organized in a hierarchical order, in which nodes are connected to parents rather than states connected to each other, resembling branches on a tree, hence the name. The basic elements of behavior trees are task nodes, whereas states are the main elements of FSMs. There are a few different types of tasks, such as Sequence, Selector, Parallel, and Decorator. It can be a bit daunting to track what they all do, and the best way to understand them is to look at an example. Let's break a set of transitions and states down into tasks.

Let's look at a Selector task for this behavior tree. Selector tasks are represented by a circle with a question mark inside. The selector will evaluate each child in order, from left to right. First, it'll choose to attack the player; if the Attack task returns a success, the Selector task is done and will go back to the parent node, if there is one. If the Attack task fails, it'll try the Chase task. If the Chase task fails, it'll try the Patrol task. Test is one of the tasks in the behavior tree.

Sequence tasks are denoted by a rectangle with an arrow inside. The root selector may choose the first Sequence action. This Sequence action's first task is to check whether the player character is close enough to attack. If this task succeeds, it'll proceed with the next task, which is to attack the player. If the Attack task also returns successfully, the whole sequence will return as a success, and the selector will be done with this behavior and will not continue with other Sequence tasks.
If the proximity check task fails, the Sequence action will not proceed to the Attack task and will return a failed status to the parent selector task. Then the selector will choose the next task in the sequence, Lost or Killed Player? The following figure demonstrates this sequence:

The other two common components are parallel tasks and decorators. A Parallel task executes all of its child tasks at the same time, while the Sequence and Selector tasks only execute their child tasks one by one. A Decorator is a type of task that has only one child; it can change the behavior of its child task, including whether to run it at all, how many times it should run, and so on. (A minimal code sketch of the selector/sequence pattern appears at the end of this article.)

Thinking with fuzzy logic

Finally, we arrive at fuzzy logic. Put simply, fuzzy logic refers to approximating outcomes as opposed to arriving at binary conclusions. We can use fuzzy logic and reasoning to add yet another layer of authenticity to our AI. Let's use a generic bad-guy soldier in a first-person shooter as our agent to illustrate the basic concept. Whether we are using a finite state machine or a behavior tree, our agent needs to make decisions: Should I move to state x, y, or z? Will this task return true or false? Without fuzzy logic, we'd look at a binary value (true or false, or 0 or 1) to determine the answers to those questions. For example, can our soldier see the player? That's a yes/no binary condition. However, if we abstract the decision-making process further, we can make our soldier behave in much more interesting ways. Once we've determined that our soldier can see the player, the soldier can then "ask" itself whether it has enough ammo to kill the player, enough health to survive being shot at, or whether there are other allies around to assist in taking the player down. Suddenly, our AI becomes much more interesting, unpredictable, and believable. This added layer of decision making is achieved by using fuzzy logic, which, in the simplest terms, boils down to taking the seemingly arbitrary or vague terminology that our wonderfully complex brains can easily assign meaning to, such as "hot" versus "warm" or "cool" versus "cold," and converting it into a set of values that a computer can easily understand.

Game AI and academic AI have different objectives. Academic AI researchers try to solve real-world problems and prove theories without many limitations in terms of resources. Game AI focuses on building NPCs within limited resources that seem intelligent to the player. The objective of AI in games is to provide a challenging opponent that makes the game more fun to play.

To summarize, we learned briefly about the different AI techniques that are widely used in games, such as FSMs, sensor and input systems, flocking and crowd behaviors, path following, and steering behaviors. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to explore brand-new features in Unity 2017 for easier Artificial Intelligence implementation in your games.
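As promised above, here is a minimal, hedged sketch of the selector/sequence pattern in plain JavaScript (chosen here as a neutral, runnable notation; the book itself works in Unity). The leaf task names and the distanceToPlayer stand-in are hypothetical and only mirror the attack/chase/patrol example from the figures:

    // Tasks return one of two statuses when ticked.
    const SUCCESS = 'success';
    const FAILURE = 'failure';

    // Selector: tries children left to right, succeeds on the first success.
    const selector = (...children) => () => {
      for (const child of children) {
        if (child() === SUCCESS) return SUCCESS;
      }
      return FAILURE;
    };

    // Sequence: runs children left to right, fails on the first failure.
    const sequence = (...children) => () => {
      for (const child of children) {
        if (child() === FAILURE) return FAILURE;
      }
      return SUCCESS;
    };

    // Hypothetical leaf tasks mirroring the example in the figures.
    const distanceToPlayer = () => Math.random() * 10; // stand-in for a real query
    const playerInRange = () => (distanceToPlayer() < 2 ? SUCCESS : FAILURE);
    const attack = () => { console.log('Attacking'); return SUCCESS; };
    const chase = () => { console.log('Chasing'); return SUCCESS; };
    const patrol = () => { console.log('Patrolling'); return SUCCESS; };

    // Root: attack if the player is close enough, otherwise chase, otherwise patrol.
    const root = selector(sequence(playerInRange, attack), chase, patrol);
    root(); // tick the tree once

A production tree would also need a "running" status for tasks that span several frames, which is where parallel tasks and decorators earn their keep.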

A Machine learning roadmap for Web Developers

Sugandha Lahoti
27 Aug 2018
7 min read
Now that you've opened this article, I'll assume you're a web developer who is excited by the prospect of building a machine learning project. You may be here for one of these reasons. Perhaps you have been hearing from people in your circles that web development is dying (is it really dying, or just unwell?), or maybe you are stagnating in your current trajectory, and so you want to learn something different, something trending, something like Artificial Intelligence. Or perhaps you, your employer, or your client is aware of the capabilities of machine learning and wants to include it in some part of your web app to make it more powerful. Or, like the majority of folks, you just want to see first-hand whether all the fuss about artificial intelligence is really worth the effort of switching gears, by building a small toy ML project. Either way, there are different approaches to fulfill these needs.

Learning Machine Learning for the Web with JavaScript

Learning machine learning when you come from a web development background comes with its own constraints. You might worry about having to learn entirely new concepts from scratch - from different algorithms, to programming languages like Python, to mathematical concepts like linear algebra, calculus, and statistics. However, chances are you can skip learning a new language. You probably know some JavaScript in one form or another thanks to your web development experience. As such, you can learn Machine Learning in JavaScript (you don't have to learn another programming language from scratch) and take it right to your browser with WebGL.

There are some advantages to using JavaScript for ML. Its popularity is one; while ML in JavaScript is not as popular as Python's ML ecosystem at the moment, the language itself is. As demand for ML applications rises, and as hardware becomes faster and cheaper, it's only natural for machine learning to become more prevalent in the JavaScript world. The JavaScript ecosystem offers a rich set of libraries suited to most machine learning tasks (a small runnable example appears at the end of this article):

Math: math.js
Data analysis: d3.js
Server: node.js (express, koa, hapi)
Performance: TensorFlow.js (e.g. GPU accelerated via the WebGL API in the browser), Keras.js, etc.

Read also: 5 JavaScript machine learning libraries you need to know

BRIIM is a good collection of materials to get you started as a web developer or JavaScript enthusiast in machine learning. In case you're interested in learning Python instead of JavaScript, here is the set of libraries you should pick:

Math: numpy
Data analysis: pandas
Data mining: PySpark
Server: Flask, Django
Performance: TensorFlow (because it is written with a Python API over a C/C++ engine) or Keras (which sits on top of TensorFlow)

Using Machine Learning as a service

If you don't want to spend your time learning the frameworks, tools, and languages suited to machine learning, you can adopt Machine Learning as a Service, or MLaaS. These services provide machine learning tools as part of cloud computing services. So basically, you can benefit from machine learning without the allied cost, time, and risk of establishing an in-house machine learning team. All you need is sufficient knowledge of incorporating APIs. All machine learning tasks, including data pre-processing, model training, model evaluation, and prediction, can be completed through MLaaS.

Read also: How machine learning as a service is transforming cloud

A large number of companies provide Machine Learning as a service.
The most prominent ones include the following.

Amazon Machine Learning

Amazon ML makes it easy for web developers to build smart applications using simple APIs. This includes applications for fraud detection, demand forecasting, targeted marketing, and click prediction. Amazon provides a Developer Guide, which gives a conceptual overview of Amazon ML and includes detailed instructions for using the service, as well as an API reference, which describes all the API operations and provides sample requests and responses for supported web service protocols.

Azure ML web app templates

The web app templates available in the Azure Marketplace can build a custom web app that knows your web service's input data and expected results. All you need to do is give the web app access to your web service and data, and the template does the rest. There are two available templates:

Azure ML Request-Response Service Web App Template
Azure ML Batch Execution Service Web App Template

Each template creates a sample ASP.NET application using the API URI and key for your web service, and then deploys the application as a website to Azure. No coding is required to use these templates; you just supply the API key and URI, and the template builds the application for you.

Google Cloud-based APIs

Google also provides machine learning services, with pre-trained models and a service to generate your own tailored models. Google's Cloud AutoML is a suite of machine learning products that enables developers with limited machine learning expertise to train high-quality models specific to their business needs. Cloud AutoML is used by Disney on their website shopDisney to enhance the guest experience through more relevant search results, expedited discovery, and product recommendations.

Building conversational interfaces

As a web developer, another thing you might be looking into is developing conversational interfaces, or chatbots, to enhance your web apps. Amazon, Google, and Microsoft provide machine learning powered tools to help developers build their own chatbots.

Amazon Lex

You can embed chatbots in your web apps with Amazon Lex, featuring ASR (Automatic Speech Recognition) and NLP (Natural Language Processing) capabilities. The API can recognize written and spoken text, and the Lex interface allows you to hook the recognized inputs to various back-end solutions. Lex currently supports deploying chatbots for Facebook Messenger, Slack, and Twilio.

Google Dialogflow

Google's Dialogflow can build voice- and text-based conversational interfaces, such as voice apps and chatbots, powered by AI. Dialogflow incorporates Google's machine learning expertise and products such as Google Cloud Speech-to-Text. The API can be tweaked and customized for the needed intents using Java, Node.js, and Python. It is also available as an enterprise edition.

Microsoft Azure Cognitive Services

Microsoft Cognitive Services simplify a variety of AI-based tasks, giving you a quick way to add intelligence technologies to your bots with just a few lines of code. It provides tools and APIs for aiding the development of conversational interfaces.
These include:

Translator Speech API
Bing Speech API, to convert text into speech and speech into text
Speaker Recognition API, for voice verification tasks
Custom Speech Service, to apply Azure NLP capabilities using your own data and models
Language Understanding Intelligent Service (LUIS), an API that analyzes intentions in text to be recognized as commands
Text Analysis API, for sentiment analysis and defining topics
Bing Spell Check
Translator Text API
Web Language Model API, which estimates probabilities of word combinations and supports word autocompletion
Linguistic Analysis API, used for sentence separation, tagging parts of speech, and dividing texts into labeled phrases

Read also: Top 4 chatbot development frameworks for developers

These tools should be enough to get you off the ground quickly and move into a specific area of machine learning. Ultimately, your choice of tool depends on the kind of application you want to build, your level of expertise, and how much time and effort you're willing to put in to learn. Obviously, depending on your area of choice, you will have to do more research and develop yourself in those areas.
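As promised earlier, here is a small, runnable TensorFlow.js sketch, just to give a first taste of ML in JavaScript. It fits a single-neuron model to y = 2x - 1; the training data and the number of epochs are illustrative choices, not recommendations:

    // Requires: npm install @tensorflow/tfjs
    const tf = require('@tensorflow/tfjs');

    async function run() {
      // One dense layer with one unit: effectively y = wx + b.
      const model = tf.sequential();
      model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
      model.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });

      // Four training examples of y = 2x - 1.
      const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
      const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);

      await model.fit(xs, ys, { epochs: 250 });

      // Should print a value close to 9 (2 * 5 - 1).
      model.predict(tf.tensor2d([5], [1, 1])).print();
    }

    run();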

MapReduce on Amazon EMR with Node.js

Pedro Narciso
14 Dec 2016
8 min read
In this post, you will learn how to write a Node.js MapReduce application and how to run it on Amazon EMR. You don't need to be familiar with Hadoop or the EMR APIs. In order to run the examples, you will need a GitHub account, an Amazon AWS account, some money to spend at AWS, and Bash or an equivalent shell installed on your computer.

EMR, BigData, and MapReduce

We define BigData as those data sets too large or too complex to be processed by traditional processing applications. BigData is also a relative term: a data set can be too big for your Raspberry Pi, while being a piece of cake for your desktop.

What is MapReduce? MapReduce is a programming model that allows data sets to be processed in a parallel and distributed fashion. How does it work? You create a cluster and feed it with the data set. Then, you define a mapper and a reducer. MapReduce involves the following three steps:

Mapping step: This breaks down the input data into KeyValue pairs
Shuffling step: KeyValue pairs are grouped by Key
Reducing step: KeyValue pairs are processed by Key in parallel

It's guaranteed that all data belonging to a single key will be processed by a single reducer instance.

Our processing job project directory setup

Today, we will implement a very simple processing job: counting unique words from a set of text files. The code for this article is hosted here. Let's set up a new directory for our project:

    $ mkdir -p emr-node/bin
    $ cd emr-node
    $ npm init --yes
    $ git init

We also need some input data. In our case, we will download some books from Project Gutenberg as follows:

    $ mkdir data
    $ curl -Lo data/tmohah.txt http://www.gutenberg.org/ebooks/45315.txt.utf-8
    $ curl -Lo data/mad.txt http://www.gutenberg.org/ebooks/5616.txt.utf-8

Mapper and Reducer

As we stated before, the mapper will break down its input into KeyValue pairs. Since we use the streaming API, we will read the input from stdin. We will then split each line into words, and for each word, we are going to print the word, a TAB character, and "1" to stdout; the TAB character is the expected field separator. We will see later the reason for setting "1" as the value. In plain JavaScript, our ./bin/mapper can be expressed as:

    #!/usr/bin/env node
    const readline = require('readline');
    const rl = readline.createInterface({ input: process.stdin });
    rl.on('line', function(line) {
      line.trim().split(' ').forEach(function(word) {
        console.log(`${word}\t1`);
      });
    });

As you can see, we have used the readline module (a Node.js built-in module) to parse stdin. Each line is broken down into words, and each word is printed to stdout as we stated before.

Time to implement our reducer. The reducer expects a set of KeyValue pairs, sorted by key, as input, such as the following:

    First<TAB>1
    First<TAB>1
    Second<TAB>1
    Second<TAB>1
    Second<TAB>1

We then expect the reducer to output the following:

    First<TAB>2
    Second<TAB>3

Reducer logic is very simple and can be expressed in pseudocode as:

    IF !previous_key
        previous_key = current_key
        counter = value
    IF previous_key equals current_key
        counter = counter + value
    ELSE
        print previous_key<TAB>counter
        previous_key = current_key
        counter = value

The first statement is necessary to initialize the previous_key and counter variables.
Let's see the real JavaScript implementation of ./bin/reducer:

    #!/usr/bin/env node
    var previousKey, counter;
    const readline = require('readline');
    const rl = readline.createInterface({ input: process.stdin });

    function print() {
      console.log(`${previousKey}\t${counter}`);
    }

    function countWord(line) {
      let [currentKey, value] = line.split('\t');
      value = +value;
      if (typeof previousKey === 'undefined') {
        previousKey = currentKey;
        counter = value;
        return;
      }
      if (previousKey === currentKey) {
        counter = counter + value;
        return;
      }
      print();
      previousKey = currentKey;
      counter = value;
    }

    process.stdin.on('end', function() {
      print();
    });

    rl.on('line', countWord);

Again, we use the readline module to parse stdin line by line. The countWord function implements the reducer logic described before. The last thing we need to do is to set execution permissions on those files:

    chmod +x ./bin/mapper
    chmod +x ./bin/reducer

How do I test it locally?

You have two ways to test your code: install Hadoop and run a job, or use a simple shell script. The second one is my preferred one for its simplicity:

    ./bin/mapper <<EOF | sort | ./bin/reducer
    first second first
    first second first
    EOF

It should print the following:

    first<TAB>4
    second<TAB>2

We are now ready to run our job in EMR!

Amazon environment setup

Before we run any processing job, we need to perform some setup on the AWS side. If you do not have an S3 bucket, you should create one now. Under that bucket, create the following directory structure:

    <your bucket>
    ├── EMR
    │   └── logs
    ├── bootstrap
    ├── input
    └── output

Upload our previously downloaded books from Project Gutenberg to the input folder.

We also need the AWS CLI installed on the computer. You can install it with the Python package manager. If you do not have the AWS CLI installed on your computer, then run:

    $ sudo pip install awscli

awscli requires some configuration, so run the following and provide the requested data:

    $ aws configure

You can find this data in your Amazon AWS web console. Be aware that usability is not Amazon's strongest point. If you do not have your IAM EMR roles yet, it is time to create them:

    aws emr create-default-roles

Good. You are now ready to deploy your first cluster. Check out this run-cluster.sh script:

    #!/bin/bash
    MACHINE_TYPE='c1.medium'
    BUCKET='pngr-emr-demo'
    REGION='eu-west-1'
    KEY_NAME='pedro@triffid'
    aws emr create-cluster --release-label 'emr-4.0.0' --enable-debugging \
      --visible-to-all-users --name PNGRDemo \
      --instance-groups InstanceCount=1,InstanceGroupType=CORE,InstanceType=$MACHINE_TYPE \
                        InstanceCount=1,InstanceGroupType=MASTER,InstanceType=$MACHINE_TYPE \
      --no-auto-terminate --log-uri s3://$BUCKET/EMR/logs \
      --bootstrap-actions Path=s3://$BUCKET/bootstrap/bootstrap.sh,Name=Install \
      --ec2-attributes KeyName=$KEY_NAME,InstanceProfile=EMR_EC2_DefaultRole \
      --service-role EMR_DefaultRole --region $REGION

The previous script will create a 1 master, 1 core cluster, which is big enough for now. You will need to update this script with your bucket, region, and key name. Remember that your keys are listed at "AWS EC2 console/Key pairs". Running this script will print something like the following:

    {
        "ClusterId": "j-1HHM1B0U5DGUM"
    }

That is your cluster ID, and you will need it later. Please visit your Amazon AWS EMR console and switch to your region; your cluster should be listed there. It is possible to add the processing steps with either the UI or the AWS CLI.
Let's use a shell script (add-step.sh):

    #!/bin/bash
    CLUSTER_ID=$1
    BUCKET='pngr-emr-demo'
    OUTPUT='output/1'
    aws emr add-steps --cluster-id $CLUSTER_ID --steps \
      Name=CountWords,Type=Streaming,Args=[-input,s3://$BUCKET/input,-output,s3://$BUCKET/$OUTPUT,-mapper,mapper,-reducer,reducer]

It is important to point out that our "OUTPUT" directory does not exist in S3 yet; otherwise, the job will fail. Call ./add-step.sh plus the cluster ID to add our CountWords step:

    ./add-step.sh j-1HHM1B0U5DGUM

Done! So go back to the Amazon UI, reload the cluster page, and check the steps. The "CountWords" step should be listed there. You can track job progress from the UI (reload the page) or from the command line interface. Once the job is done, terminate the cluster. You will probably want to configure the cluster to terminate as soon as it finishes or when any step fails; termination behavior can be specified with "aws emr create-cluster". Sometimes the bootstrap process can be difficult. You can SSH into the machines, but before that, you will need to modify their security groups, which are listed at "EC2 web console/security groups".

Where to go from here?

You can (and should) break down your processing jobs into smaller steps, because it will simplify your code and add more composability to your steps. You can compose more complex processing jobs by using the output of one step as the input for the next step. Imagine that you have run the "CountWords" processing job several times and now you want to sum the outputs. Well, for that particular case, you just add a new step with an "identity mapper" and your already-built reducer, and feed it with all of the previous outputs (a minimal identity mapper sketch follows the author bio below). Now you can see why we output "WORD<TAB>1" from the mapper.

About the author

Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.
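As a footnote to the "Where to go from here?" section: the identity mapper mentioned there can be as small as the following sketch. The file name ./bin/identity-mapper is hypothetical; the script simply echoes each already-reduced line unchanged so that the existing reducer can re-aggregate several job outputs:

    #!/usr/bin/env node
    // Identity mapper: pass each input line through unchanged, so that
    // previously reduced "word<TAB>count" lines can be reduced again.
    const readline = require('readline');
    const rl = readline.createInterface({ input: process.stdin });
    rl.on('line', line => console.log(line));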

Types of Augmented Reality targets

Aarthi Kumaraswamy
08 Apr 2018
6 min read
The essence of Augmented Reality is that your device recognizes objects in the real world and renders computer graphics registered to the same 3D space, providing the illusion that the virtual objects are in the same physical space as you. Since augmented reality was first invented decades ago, the types of targets the software can recognize have progressed from very simple markers, to images and natural feature tracking, to full spatial map meshes. There are many AR development toolkits available; some of them are more capable than others of supporting a range of targets.

The following is a survey of various Augmented Reality target types. We will go into more detail in later chapters, as we use different targets in different projects.

Marker

The most basic target is a simple marker with a wide border. The advantage of marker targets is that they're readily recognized by the software with very little processing overhead, which minimizes the risk of the app not working, for example, due to inconsistent ambient lighting or other environmental conditions. The following is the Hiro marker used in example projects in ARToolkit:

Coded Markers

Taking simple markers to the next level, areas within the border can be reserved for 2D barcode patterns. This way, a single family of markers can be reused to pop up many different virtual objects by changing the encoded pattern. For example, a children's book may have an AR pop-up on each page, using the same marker shape, but the barcode directs the app to show only the objects relevant to that page of the book. The following is a set of very simple coded markers from ARToolkit:

Vuforia includes a powerful marker system called VuMark that makes it very easy to create branded markers, as illustrated in the following image. As you can see, while the marker styles vary for specific marketing purposes, they share common characteristics, including a reserved area within an outer border for the 2D code:

Images

The ability to recognize and track arbitrary images is a tremendous boost for AR applications, as it avoids the requirement of creating and distributing custom markers paired with specific apps. Image tracking falls into the category of natural feature tracking (NFT). There are characteristics that make a good target image, including having a well-defined border (preferably eight percent of the image width), irregular asymmetrical patterns, and good contrast. When an image is incorporated into your AR app, it's first analyzed, and a feature map (a 2D node mesh) is stored and used to match real-world image captures, say, in frames of video from your phone.

Multi-targets

It is worth noting that apps may be set up to see not just one marker in view but several. With multi-targets, you can have virtual objects pop up for each marker in the scene simultaneously. Similarly, markers can be printed and folded or pasted on geometric objects, such as product labels or toys. The following is an example cereal box target:

Text recognition

If a marker can include a 2D barcode, then why not just read text? Some AR SDKs allow you to configure (train) your app to read text in specified fonts. Vuforia goes further, with a word list library and the ability to add your own words.

Simple shapes

Your AR app can be configured to recognize basic shapes such as a cuboid or cylinder with specific relative dimensions. It's not just the shape but its measurements that may distinguish one target from another: a Rubik's Cube versus a shoe box, for example.
A cuboid may have width, height, and length. A cylinder may have a length and different top and bottom diameters (for example, a cone). In Vuforia's implementation of basic shapes, the texture patterns on the shaped object are not considered; anything with a similar shape will match. But when you point your app at a real-world object with that shape, the object should have enough textured surface for good edge detection; a solid white cube would not be easily recognized.

Object recognition

The ability to recognize and track complex 3D objects is similar to, but goes beyond, 2D image recognition. While planar images are appropriate for flat surfaces, books, or simple product packaging, you may need object recognition for toys or consumer products without their packaging. Vuforia, for example, offers the Vuforia Object Scanner to create object data files that can be used in your app as targets. The following is an example of a toy car being scanned by Vuforia Object Scanner:

Spatial maps

Earlier, we introduced spatial maps and dynamic spatial location via SLAM. SDKs that support spatial maps may implement their own solutions and/or expose access to a device's own support. For example, the HoloLens SDK Unity package supports its native spatial maps, of course. Vuforia's spatial mapping feature (called Smart Terrain) does not use depth sensing like HoloLens; rather, it uses a visible-light camera to construct the environment mesh using photogrammetry. Apple ARKit and Google ARCore also map your environment, using the camera video fused with other sensor data.

Geolocation

A bit of an outlier, but worth mentioning: AR apps can also use just the device's GPS sensor to identify its location in the environment and use that information to annotate what is in view. I use the word annotate because GPS tracking is not as accurate as any of the techniques we have mentioned, so it wouldn't work for close-up views of objects. But it can work just fine, say, standing atop a mountain and holding your phone up to see the names of other peaks within view, or walking down a street to look up Yelp! reviews of restaurants within range. You can even use it for locating and capturing Pokémon. (A small distance-check sketch follows at the end of this article.)

Note: You read an excerpt from the book Augmented Reality for Developers, by Jonathan Linowes and Krystian Babilinski. To learn how to use these targets and to build a variety of AR apps, check out the book!
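To make the geolocation idea concrete, here is a small, hedged sketch in plain JavaScript (the coordinates and the 20 km threshold are invented for illustration). It uses the haversine formula to decide whether a point of interest is close enough to annotate:

    // Hypothetical helper: great-circle distance in meters between two
    // latitude/longitude points, e.g. the device and a point of interest.
    function haversineMeters(lat1, lon1, lat2, lon2) {
      const R = 6371000; // mean Earth radius in meters
      const toRad = deg => (deg * Math.PI) / 180;
      const dLat = toRad(lat2 - lat1);
      const dLon = toRad(lon2 - lon1);
      const a =
        Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(a));
    }

    // Example: annotate a peak only when it is within 20 km of the viewer.
    const d = haversineMeters(47.6, 11.0, 47.45, 10.95);
    console.log(d < 20000 ? 'Annotate the peak' : 'Too far away');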

Denys Vuika on building secure and performant Electron apps, and more

Bhagyashree R
02 Dec 2019
7 min read
Building cross-platform desktop applications can be difficult. It requires you to have knowledge of the specific tools and technologies for each platform you want to target. Wouldn't it be great if you could write and maintain a single codebase, and do so with your existing web development skills? Electron helps you do exactly that. It is a framework for building cross-platform desktop apps with JavaScript, HTML, and CSS.

Electron was originally not a separate project; it was built to port the Mac-only Atom text editor to different platforms. The Atom team at GitHub tried out solutions like the Chromium Embedded Framework (CEF) and node-webkit (now known as NW.js), but nothing was working right. This is when Cheng Zhao, a GitHub engineer, started a new project and rewrote node-webkit from scratch. This project was Atom Shell, which we now know as Electron. It was open-sourced in 2014 and renamed to Electron in May 2015.

To get an insight into why so many companies are adopting Electron, we interviewed Denys Vuika, a veteran programmer and author of the book Electron Projects. He also talked about when you should choose Electron, best practices for building secure Electron apps, and more. Electron Projects is a project-based guide that will help you explore the components of the Electron framework and its integration with other JS libraries to build 12 real-world desktop apps with an increasing level of complexity.

When is Electron the best bet, and when is it not?

Many popular applications are built using Electron, including VSCode, GitHub Desktop, and Slack. It enables developers to deliver new features fast, while also maintaining consistency across platforms. Vuika says, "The cost and speed of the development, code reuse are the main reasons I believe. The companies can effectively reuse existing code to build desktop applications that look and behave exactly the same across the platforms. No need to have separate developer teams for various platforms."

When we asked Vuika why he chose Electron, he said, "Historically, I got into the Electron app development to build applications that run on macOS and Linux, alongside traditional Windows platform. I didn't want to study another stack just to build for macOS, so Electron shell with the web-based content was extremely appealing."

Sharing when you should choose Electron, he said, "Electron is the best bet when you want to have a single codebase and single developer team working with all major platforms. Web developers should have a very minimal learning curve to get started with Electron development. And the desktop application codebase can also be shared with the website counterpart. That saves a huge amount of time and money. Also, the Node.js integration with millions of useful packages to cover all possible scenarios." The case when it is not a good choice is, "if you are just trying to wrap the website functionality into a desktop shell. The biggest benefit of Electron applications is access to the local file system and hardware."

Building Electron applications using Angular, React, or Vue

Electron integrates with all three of the most popular JavaScript frameworks: React, Vue, and Angular. All three have their own pros and cons. If you are coming from a JavaScript background, React could be a good option, as it has much less abstraction away from vanilla JS. Other advantages are that it is very flexible, you can extend its core functionality by adding libraries, and it is backed by a great community.
Vue is a lightweight framework that's easier to learn and get productive with. Angular has exceptional TypeScript support and includes dependency injection, HTTP services, internationalization, formatting pipes, server-side rendering, a CLI, animations, and much more. When it comes to Electron, choosing one of them depends on which framework you are comfortable with and what fits your needs. Vuika recommends: "There are three pretty much big developer camps out there: React, Angular and Vue. All of them focus on web components and client applications, so it's a matter of personal preferences or historical decisions if speaking about companies. Also, each JavaScript framework has more than one set of mature UI libraries and design systems, so there are always options to choose from." For novice developers he recommends: "keep in mind it is still a web stack. Pick whatever you are comfortable to build a web application with."

Vuika's book, Electron Projects, has a dedicated chapter, Integrating Electron applications with Angular, React, and Vue, to help you learn how to integrate them with your Electron apps.

Tips on building performant and secure apps

Electron's core components are Chromium - more specifically, the libchromiumcontent library - Node.js, and Chromium's V8 JavaScript engine. Each Electron app ships with its own isolated copy of Chromium, which can affect its memory footprint as well as its bundle size. Sharing other reasons, Vuika said, "It has some memory footprint but, based on my personal experience, most of the memory issues are usually related to the application implementation and resource management rather than the Electron shell itself."

Some of the best practices that the Electron team recommends are examining modules and their dependencies before adding them to your applications, and ensuring the main process is not blocked, among others. You can find the full checklist on Electron's official site. Vuika suggests: "Electron developers have all the development toolset they use for web development: Chrome Developer Tools with debuggers, profilers, and many other great features. There are also build tools for each frontend framework that allow minification, code splitting, and tree shaking. Routing libraries allow loading only the content the user needs at a particular point. Many areas to improve memory and resource consumption." More recently, some developers have also started using Rust, and some recommend using WebAssembly with Electron, to minimize the Electron pain points while enjoying its benefits.

Coming to security, Vuika says, "With Electron, a web application can have nearly full access to the local file system and operating system resources by means of the Node.js process. Developers should be very careful trusting web content, especially if using remotely served HTML content." (A minimal main-process sketch illustrating the usual hardening defaults appears at the end of this article.)

"Electron team has recently published a very good article on the security that I strongly recommend to read and keep in the bookmarks. The article dwells on explaining major security pitfalls, as well as ways to harden your applications," he recommends. Meanwhile, Electron is also improving with every subsequent release. Starting with Electron 6.0, the team has started laying "the groundwork for a future requirement that native Node modules loaded in the renderer process be either N-API or Context Aware." This update is expected to come in Electron 11.0. "Also, keep in mind that Electron keeps improving and evolving all the time.
It is getting more secure and faster with each next release. For developers, it is more important to build the knowledge of creating and debugging applications, as for me," he adds.

About the author

Denys Vuika is an Applications Platform Developer and Tech Lead at Alfresco Software, Inc. He is a full-stack developer and a constant open source contributor, with more than 16 years of programming experience, including ten years of front-end development with AngularJS, Angular, ASP.NET, React.js, and other modern web technologies, and more than three years of Node.js development. Denys works with web technologies on a daily basis, has a good understanding of cloud development, and of the containerization of web applications.

Denys Vuika is a frequent Medium blogger and the author of the "Developing with Angular" book on Angular, JavaScript, and TypeScript development. He also maintains a series of Angular-based open source projects.

Check out Vuika's latest book, Electron Projects, on PacktPub. The book is a project-based guide to help you create, package, and deploy desktop applications on multiple platforms using modern JavaScript frameworks. Follow Denys Vuika on Twitter: @DenysVuika.
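As a companion to the security discussion above, here is a minimal Electron main-process sketch. It is illustrative only: the window options shown reflect commonly recommended hardening settings (context isolation on, Node integration off for page content), and exact defaults and APIs vary between Electron releases.

    // main.js - start with: npx electron .
    const { app, BrowserWindow } = require('electron');

    function createWindow() {
      const win = new BrowserWindow({
        width: 800,
        height: 600,
        webPreferences: {
          contextIsolation: true, // isolate page scripts from preload/Node
          nodeIntegration: false  // do not expose Node.js to page content
        }
      });
      win.loadFile('index.html'); // local content; avoid loading remote HTML
    }

    app.whenReady().then(createWindow);

    app.on('window-all-closed', () => {
      // Quit on all platforms except macOS, matching platform conventions.
      if (process.platform !== 'darwin') app.quit();
    });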

Key trends in software development in 2019: cloud native and the shrinking stack

Richard Gall
18 Dec 2018
8 min read
Bill Gates is quoted as saying that we tend to overestimate the pace of change over a period of 2 years, but underestimate change over a decade. It's an astute observation: much of what will matter in 2019 actually looks a lot like what we said would be important in development this year. But if you look back 10 years, the change in the types of applications and websites we build - as well as how we build them - is astonishing. The web as we understood it in 2008 is almost unrecognisable. Today, we are in the midst of the app and API economy. Notions of surfing the web sound almost as archaic as a dial-up tone. Similarly, the JavaScript framework boom now feels old hat - building for browsers just sounds weird... So, as we move into 2019, progressive web apps, artificial intelligence, and native app development remain at the top of the development agenda. But this doesn't mean these changes are to be dismissed as empty hype. If anything, as adoption increases and new tools emerge, we will begin to see more radical shifts in ways of working. The cutting edge will need to sharpen itself elsewhere.

What will it mean to be a web developer in 2019?

These changes are forcing wider changes in the industry. Arguably, they are transforming what it means to be a web developer. As applications become increasingly lightweight (thanks to libraries and frameworks like React and Vue), and data becomes more intensive, thanks to the range of services upon which applications and websites depend, developers need to expand across the stack. You can see this in some of the latest Packt titles - in Modern JavaScript Web Development Cookbook, for example, you'll learn microservices and native app development - topics that have typically fallen outside the strict remit of web development.

The simplification of many aspects of development has, ironically, forced developers to look more closely at how these aspects fit together. As you move further into layers of abstraction, the way things interact and work alongside each other becomes vital. For the most part, it's no longer a case of writing the requisite code to make something run on the specific part of the application you're working on; it's rather about understanding how the various pieces - from the backend to the front end - fit together. This means, in 2019, you need to dive deeper and get to know your software systems inside out. Get comfortable with the backend. Dive into cloud. Start playing with microservices. Rethink and revisit languages you thought you knew.

Get to know your infrastructure: tackling the challenges of API development

It might sound strange, but as the stack shrinks and the responsibilities of developers - web and otherwise - shift, understanding the architectural components within the software they're building is essential. You could blame some of this on DevOps - essentially, it has made developers responsible for how their code runs once it hits production. Because of this important change, the requisite skills and toolchain for the modern developer are also expanding. There are a range of routes into software architecture, but exploring API design is a good place to begin. Hands-On RESTful API Design offers a practical way into the topic. While REST is the standard for API design, the diverse range of tools and approaches is making managing the client a potentially complex but interesting area.
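To make that concrete, here is a minimal sketch of a REST-style API in Node.js with Express. The books resource, routes, and port are invented purely for illustration:

    // Requires: npm install express
    const express = require('express');
    const app = express();
    app.use(express.json());

    // Hypothetical in-memory resource, just to show the shape of a REST API.
    const books = [{ id: 1, title: 'Hands-On RESTful API Design' }];

    // Collection and item endpoints follow REST's resource-oriented URLs.
    app.get('/api/books', (req, res) => res.json(books));

    app.get('/api/books/:id', (req, res) => {
      const book = books.find(b => b.id === Number(req.params.id));
      return book ? res.json(book) : res.status(404).json({ error: 'Not found' });
    });

    app.post('/api/books', (req, res) => {
      const book = { id: books.length + 1, title: req.body.title };
      books.push(book);
      res.status(201).json(book); // 201 Created for successful resource creation
    });

    app.listen(3000, () => console.log('API listening on port 3000'));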
GraphQL, a query language developed by Facebook, is said to have killed off REST (although we wouldn't be so hasty), while Redux and Relay, two libraries for managing data in React applications, have seen a lot of interest over the last 12 months as two key tools for working with APIs. Want to get started with GraphQL? Try Beginning GraphQL. Learn Redux with Learning Redux.

Microservices: take responsibility for your infrastructure

The reason we're seeing so many tools offering ways of managing APIs is that microservices are becoming the dominant architectural mode. This requires developer attention too. That's not to say that you need to implement microservices now (in fact, there are probably many reasons not to), but if you want to be building software in 5 years' time, getting to grips with the principles behind microservices and the tools that can help you use them is essential.

One of the central technologies driving microservices is the container. You could run microservices in a virtual machine, but because virtual machines are harder to scale than containers, you probably wouldn't see the benefits you'd expect from a microservices architecture. This means getting to grips with core container technologies is vital. Docker is the obvious place to start. There are varying degrees to which developers need to understand it, but even if you don't think you'll be using it immediately, it does give you a nice real-world foundation in containers if you don't already have one. Watch and learn how to put Docker to work with the Hands on Docker for Microservices video.

But beyond Docker, Kubernetes is the go-to tool that allows you to scale and orchestrate containers. This gives you control over how you scale application services in a way that you probably couldn't have imagined a decade ago. Get a grounding in Kubernetes with Getting Started with Kubernetes - Third Edition, or follow a 7-day learning plan with Kubernetes in 7 Days. If you want to learn how Docker and Kubernetes come together as part of a fully integrated approach to development, check out Hands on Microservices with Node.js.

It's time for developers to embrace cloud

It should come as no surprise that, if the general trend is towards full stack, where everything is everyone's problem, developers simply can't afford to ignore cloud. And why would you want to - the levels of abstraction it offers, and the various services and integrations that come with the leading cloud platforms, can make many elements of the development process much easier. Issues surrounding scale, hardware, setup, and maintenance almost disappear when you use cloud. That's not to say that cloud platforms don't bring their own set of challenges, but they do allow you to focus on more interesting problems. More importantly, they open up new opportunities. Serverless becomes a possibility - allowing you to scale incredibly quickly by running everything on your cloud provider - but there are other advantages too.

Want to get started with serverless? Check out some of these titles:

JavaScript Cloud Native Development Cookbook
Hands-on Serverless Architecture with AWS Lambda [Video]
Serverless Computing with Azure [Video]

For example, when you use cloud you can bring advanced features like artificial intelligence into your applications. AWS has a whole suite of machine learning tools - AWS Lex can help you build conversational interfaces, while AWS Polly turns text into speech.
Similarly, Azure Cognitive Services has a diverse range of features for vision, speech, language, and search. What cloud brings you, as a developer, is a way of increasing the complexity of applications and processes while maintaining agility. Adding features and optimizations might previously have felt sluggish - maybe even impossible. But by leveraging AWS and Azure (among others), you can do much more than you previously realised.

Back to basics: new languages, and fresh approaches

With all of this ostensible complexity in contemporary software development, you'd be forgiven for thinking that languages simply don't matter. That's obviously nonsense. There's an argument that gaining a deeper understanding of how languages work, what they offer, and where they may be weak can make you a much more accomplished developer. Be prepared is sage advice for a world where everything is unpredictable - both in the real world and inside our software systems. So, you have two options - and both are smart: either go back to a language you know and explore a new paradigm, or learn a new language from scratch.

Learn a new language:

Kotlin Quick Start Guide
Hands-On Go Programming
Mastering Go
Learning TypeScript 2.x - Second Edition

Explore a new programming paradigm:

Functional Programming in Go [Video]
Mastering Functional Programming
Hands-On Functional Programming in RUST
Hands-On Object-Oriented Programming with Kotlin

2019: the same, but different, basically...

It's not what you should be saying if you work for a tech publisher, but I'll be honest: software development in 2019 will look a lot like it did in 2018. But that doesn't mean you have time to be complacent. In just a matter of years, much of what feels new or 'emerging' today will be the norm. You don't have to look hard to see the set of skills many full-stack developer job postings are asking for - the demands are so diverse that adaptability is clearly immensely valuable, both for your immediate projects and your future career prospects. So, as 2019 begins, commit to developing yourself and sharpening your skill set.

Uses of Machine Learning in Gaming

Natasha Mathur
22 Oct 2018
5 min read
All around us, our perception of learning and intellect is being challenged daily by the advent of new and emerging technologies. From self-driving cars, to playing Go and Chess, to computers being able to beat humans at classic Atari games, the group of technologies we colloquially call Machine Learning has come to dominate a new era in technological growth - a new era that has been compared in importance with the discovery of electricity and has already been categorized as the next human technological age.

Games and simulations are no strangers to AI technologies, and there are numerous assets available to the Unity developer to provide simulated machine intelligence. These technologies include content like Behavior Trees, Finite State Machines, navigation meshes, A*, and other heuristic ways game developers use to simulate intelligence. So, why Machine Learning, and why now? The reason is due in large part to the OpenAI initiative, an initiative that encourages research across academia and industry to share ideas and research on AI and ML. This has resulted in an explosion of growth in new ideas, methods, and areas for research. For games and simulations, this means that we no longer have to fake or simulate intelligence: we can now build agents that learn from their environment and even learn to beat their human builders.

This article is an excerpt taken from the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning' by Micheal Lanham. In this article, we look at the role that machine learning plays in game development.

Machine Learning is an implementation of Artificial Intelligence. It is a way for a computer to assimilate data or state and provide a learned solution or response. We often think of AI now as a broader term to reflect a "smart" system. A full game AI system, for instance, may incorporate ML tools combined with more classic AIs like Behavior Trees in order to simulate a richer, more unpredictable AI. We will use AI to describe a system and ML to describe the implementation.

How Machine Learning is useful in gaming

Game engines have embraced the idea of incorporating ML into all aspects of their products, and not just for use as game AI. While most developers may first think of using ML for game AI, it certainly helps game development in the following areas:

Map/Level Generation: There are already plenty of examples where developers have used ML to auto-generate everything from dungeons to realistic terrain. Getting this right can provide a game with endless replayability, but it can be some of the most challenging ML to develop.

Texture/Shader Generation: Another area that is getting the attention of ML is texture and shader generation. These technologies are getting a boost brought on by the attention given to advanced generative adversarial networks, or GANs. There are plenty of great and fun examples of this tech in action; just do a search for DEEP FAKES in your favorite search engine.

Model Generation: There are a few projects coming to fruition in this area that could greatly simplify 3D object construction through enhanced scanning and/or auto-generation. Imagine being able to textually describe a simple model and having ML build it for you, in real time, in a game or other AR/VR/MR app, for example.

Audio Generation: Being able to generate audio sound effects or music on the fly is already being worked on in other areas, not just games.
Yet, just imagine being able to have a custom-designed soundtrack for your game developed by ML.

Artificial Players: This encompasses many uses, from gamers themselves using ML to play the game on their behalf, to the developer using artificial players as enhanced test agents or as a way to engage players during low activity. If your game is simple enough, this could also be a way of auto-testing levels.

NPCs or Game AI: Currently, there are better patterns out there to model basic behavioral intelligence, in the form of Behavior Trees. While it's unlikely that BTs or other similar patterns will go away any time soon, imagine being able to model an NPC that may actually perform an unpredictable but rather cool behavior. This opens up all sorts of possibilities that excite not only developers but players as well.

So, we learned about different areas of the gaming world - such as model generation, artificial players, NPCs, and level generation - where machine learning can be used extensively. If you found this post useful, be sure to check out the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning' to learn more machine learning concepts in gaming.

GDPR is pushing the Chief Data Officer role center stage

Aaron Lazar
13 Apr 2018
9 min read
Gartner predicted that by 2020, 90% of large organizations in regulated industries will have a Chief Data Officer role. With the recent heat around Facebook, Mark Zuckerberg, and the fast-approaching GDPR compliance deadline, it's quite likely that 2018 will be the year of the Chief Data Officer. This article was first published in October 2017 and has been updated to keep up with the latest trends in GDPR.

In 2014, around 400 CEOs and top business execs were asked how they recognize data as a corporate asset. They responded with a mixed set of reactions and viewed the worth of data in their organizations in varied ways. Now, in 2018, these reactions have drastically changed - more and more organizations have realized the importance of Data-as-an-Asset. More importantly, the European Union (EU) has made it mandatory that General Data Protection Regulation (GDPR) compliance be achieved by the 25th of May, 2018.

The primary reason for creating the Chief Data Officer role was to connect the ends between functional management and IT teams in an organization. But now, it looks like the CDO will also be primarily focusing on setting up and driving GDPR compliance, to avoid a fine of up to €20 million or 4% of the annual global turnover of the previous year, whichever is higher. We're going to spend a few minutes breaking down the Chief Data Officer role for you, revealing several interesting insights along the way. Let's start with the obvious question.

What might the Chief Data Officer's responsibilities be?

Like other C-suite execs, a Chief Data Officer is expected to have a well-blended mix of technical know-how and business acumen. Their role is very diverse, and its scope can be hard to pin down. Here are some of the key responsibilities of a Chief Data Officer.

Data policies and GDPR compliance

Data security is one of the most important elements that any business must consider. It needs to comply with the regulatory standards and requirements of the country where it operates. A Chief Data Officer is responsible for ensuring the compliance of policies across all branches of the business and the associated compliance requirement taxonomies, on a global level.

What is GDPR? If you're thinking there were no data protection laws before the General Data Protection Regulation 2016/679, that's not true. The major difference, however, is that GDPR focuses more on customer data privacy and protection. GDPR requirements will change the way organizations store, process, and protect their customers' personal data. They will need solutions for assessing, implementing, and maintaining GDPR compliance, and that's where Chief Data Officers fit in.

Using data to gain a competitive edge

Chief Data Officers need to have sound knowledge of the business's customers and the markets in which it operates, plus strong analytical skills, to leverage the right data at the right time, in the right place. This would eventually give the business an edge over its competitors. For example, a ferry service could use data to identify the rates that customers would be willing to pay at a certain time of the day.

Setting in motion the best practices of data governance

Organizations span the globe these days, and employees from different parts of the world often work on the same data. This can result in data moving through unconnected systems and ending up as inefficient or disjointed pieces of business information.
A Chief Data Officer needs to ensure that this information is aggregated and maintained in such a way that clear information ownership across the organization is established.

Architecting future-proof data solutions

A Chief Data Officer often acts like a data architect. They will sometimes take on responsibility for planning, designing, and building Big Data systems and ensuring successful integration with other systems in an organization. Designing systems that can provide answers to users' problems now and in the future is vital. Chief Data Officers are often found asking themselves key questions, like how to generate data with maximum reusability while also making sure it's as accurate and relevant as possible.

Defining Information Management tools

Different business units across the globe tend to use different tools, technologies, and systems to work on, store, and share information at an enterprise level. This greatly affects a company's ability to access and leverage data for effective decision making and various other duties. A Chief Data Officer is responsible for establishing data-oriented standards across the business and for driving all arms of the business to comply with those standards and embrace change, to ensure the integrity of data.

Spotting new opportunities

A Chief Data Officer is responsible for spotting new opportunities that the business could venture into, through careful analysis of data and past records. For example, a motor company might leverage sales information to make an informed decision on which age group to target with their new SUV in the making, to maximize sales. These are just the tip of the iceberg when it comes to a Chief Data Officer's responsibilities. Responsibilities go hand in hand with the skills and traits required to execute them. Below are some key capabilities sought after in a CDO.

Key Chief Data Officer skills

The right person for the job is expected to possess impeccable leadership and C-suite-level communication skills, as well as strong business acumen. They are expected to have strong knowledge of GDPR software tools and solutions, to enable the organizations hiring them to swiftly transform and adopt the new regulations. They are also expected to possess knowledge of IT architecture, including familiarity with leading architectural standards such as TOGAF or the Zachman Framework. They need to be experienced in driving data governance as well as data quality and integrity, while also possessing strong knowledge of data analytics, visualization, and storytelling. Familiarity with Big Data solutions like Hadoop, MapReduce, and HBase is a plus.

How much do Chief Data Officers earn?

Now, let's take a look at the kind of compensation a Chief Data Officer is likely to be offered. To tell you the truth, the answer to this question is still a bit hazy, but it's sure to become clearer with the recent developments in the regulatory and legal areas related to data. About a year ago, a blog post from careeraddict revealed that the salary for a CDO in the US was around $112,000 annually, while a job listing seen on Indeed quoted $200,000 as the annual salary; Indeed shows 7 jobs posted for a CDO in the last 15 days. We took the estimated salaries of CDOs and compared them with those of CIOs and CTOs in the same companies. It turns out that most were on par, with a few CDO compensations falling slightly short of the CIO and CTO salaries. These are just base salary figures; bonuses can add up to 50% on top in some cases.
However, please note that these salary figures vary heavily based on the type of organization and the industry.

Do businesses even need a Chief Data Officer?

One might argue that some of the skills expected of a Chief Data Officer would also be held by the CIO, the Chief Digital Officer, or the Data Protection Officer (if the organization has one). Then why have a Chief Data Officer at all and incur a significant extra cost to the company? With the rapid change in tech and the rate at which data is generated, used, and discarded, most pointers point in the direction of having a separate Chief Data Officer working alongside the CIO. It's critical to have a clearly defined need for both roles to co-exist. Blurring the boundaries of the two roles can be detrimental, and organizations must therefore be painstakingly mindful of the defined KRAs. The organization should clearly define the two roles to keep the business structure running smoothly.

A Chief Data Officer's main focus will be on the latest data-centric technological innovations and their compliance with the new standards, while also boosting customer engagement and privacy and, in turn, loyalty and the business's competitive advantage. The CIO, on the other hand, focuses on improving the bottom line by owning business productivity metrics, cost-cutting initiatives, making IT investments, and so on - an inward-facing data management and architecture role. The CIO is therefore the person responsible for leading digital initiatives at board level. In addition to managing data and governing information, if the CIO's responsibilities were also to include implementing analytics in fresh ways to generate value for the business, it would be a tall order. To put it simply, it is more practical for the CIO to own the systems and for the CDO to oversee all the bits and bytes that flow through them. Moreover, in several cases, the Chief Data Officer will act as a liaison between the business and IT. Thus, Chief Data Officers and CIOs need to work together and support each other for a better-functioning business.

The bottom line: a Chief Data Officer is essential

For an organization dealing with a lot of data, a Chief Data Officer is a must. Failure to have one on board can result in a fine of €10 million or 2% of the organization's worldwide turnover (whichever is higher). Here are the criteria for an organization needing dedicated personnel to manage data protection. The organization's core activities should:

Have data processing operations which require regular and systematic monitoring of data subjects, or monitoring of individuals, on a large scale
Be processing special categories of data (i.e. sensitive data such as health, religion, race, or sexual orientation) on a large scale
Have data processing carried out by a public authority or a body processing personal data, except for courts operating in their judicial capacity

Apart from this mandate, a Chief Data Officer can add immense value by aligning data-driven insights with the organization's vision and goals. A CDO can bridge the gap between the CMO and the CIO by focusing on meeting customer requirements through data-driven products. For those in data- and insights-centric roles, such as data scientists, data engineers, data analysts, and others, the CDO role is a natural destination in their career progression journey. The Chief Data Officer role is highly attractive in terms of the scope of responsibilities, the capabilities it demands and, of course, the pay.
Certification courses like this one are popping up to help individuals shape themselves for the role. All in all, this new C-suite position is, in most organizations, the perfect pivot between old and new: bridging silos and building a future where data privacy is intact.
Top 7 modern Virtual Reality hardware systems

Sugandha Lahoti
20 Apr 2018
7 min read
Since its early inception, virtual reality has offered an escape. Donning a headset can transport you to a brand new world, full of wonderment and excitement. Or it can let you explore a location too dangerous for human existence. Or it can simply present the real world to you in a new manner. And now that we have moved past the era of bulky goggles and clumsy helmets, the hardware is making the aim of unfettered escapism a reality.

In this article, we present a roundup of modern VR hardware systems. Each product is presented with an overview of the device and its price as of February 2018. Use this information to compare systems and find the device which best suits your personal needs.

There has been an explosion of VR hardware in the last three years, ranging from cheaply made housings around a pair of lenses to full headsets with embedded screens creating a 110-degree field of view. Each device offers distinct advantages and use cases, and many have dropped significantly in price over the past 12 months, making them accessible to a wider audience of users. Following is a brief overview of each device, ranked in terms of price and complexity.

Google Cardboard

Cardboard VR is compatible with a wide range of contemporary smartphones. Google Cardboard's biggest advantages are its low cost, broad hardware support, and portability. As a bonus, it is wireless. Using the phone's gyroscope, VR applications can track the user in 360 degrees of rotation. While modern phones are very powerful, they are not as powerful as desktop PCs; but the user is untethered and the systems are lightweight.

Cost: $5-20 (plus an iOS or Android smartphone)

Check out this post to Build Virtual Reality Solar System in Unity for Google Cardboard.
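To make that rotational tracking concrete, here is a minimal Unity (C#) sketch of how a Cardboard-style app might drive the camera from the phone's gyroscope. The Input.gyro API is real Unity; the class name and the exact axis-remapping convention are illustrative assumptions rather than code from the book:

    using UnityEngine;

    // Minimal gyroscope-driven head tracking, similar in spirit to what
    // Cardboard SDKs do under the hood. (Illustrative sketch only.)
    public class GyroHeadTracking : MonoBehaviour
    {
        void Start()
        {
            // Gyroscope input is disabled by default on most phones.
            Input.gyro.enabled = true;
        }

        void Update()
        {
            // The gyro reports a right-handed attitude; Unity is left-handed,
            // so the quaternion must be remapped before it drives the camera.
            Quaternion att = Input.gyro.attitude;
            transform.localRotation =
                Quaternion.Euler(90f, 0f, 0f) *                // device held upright
                new Quaternion(att.x, att.y, -att.z, -att.w);  // handedness flip
        }
    }

Attach a script like this to the camera and the view follows the phone's orientation; note that there is no positional tracking here, which is exactly the Cardboard limitation described above.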
Google Daydream

Rather than plastic, the Daydream is built from a fabric-like material and is bundled with a Wii-like motion controller with a trackpad and buttons. It has superior optics compared to a Cardboard, but not as nice as the higher-end VR systems. Just as with the Gear VR, it works only with a very specific list of phones.

Cost: $79 (plus a Google or Android smartphone)

Gear VR

Gear VR is part of the Oculus ecosystem. While it still uses a smartphone (Samsung only), the Gear VR Head-Mounted Display (HMD) includes some of the same circuitry as the Oculus Rift PC solution. This results in far more responsive and superior tracking compared to Google Cardboard, although it still only tracks rotation.

Cost: $99 (plus a Samsung Android smartphone)

Oculus Rift

The Oculus Rift is the platform that reignited the VR renaissance through its successful Kickstarter campaign. The Rift uses a PC and external cameras that allow not only rotational tracking but also positional tracking, giving the user a full VR experience. The Samsung relationship allows Oculus to use Samsung screens in its HMDs. While the Oculus no longer demands that the user remain seated, it does expect the user to move within a smaller 3 m x 3 m area. The Rift HMD is wired to the PC. The user can interact with the VR world with the included Xbox gamepad, mouse and keyboard, a one-button clicker, or proprietary wireless controllers.

Cost: $399, plus $800 for a VR-ready PC

Vive

The HTC Vive, built by HTC in partnership with Valve, uses smartphone panels from HTC. The Vive has its own proprietary wireless controllers, of a different design than Oculus (though it can also work with gamepads, joysticks, and mouse/keyboard). Its most distinguishing characteristic is that the Vive encourages users to explore and walk within a 4 m x 4 m, or larger, cube.

Cost: $599, plus an $800 VR-ready PC

Sony PSVR

While there are persistent rumors of an Xbox VR HMD, Sony currently makes the only video game console with a VR HMD. It is easier to install and set up than a PC-based VR system, and while the library of titles is much smaller, the quality of the titles is higher on average. It is also the most affordable of the positional-tracking VR options. But it is also the only one that cannot be developed on by the average hobbyist developer.

Cost: $400, plus a Sony PlayStation 4 console

Microsoft's HoloLens

Microsoft's HoloLens provides a unique AR experience in several ways. The user is not blocked off from the real world; they can still see the world around them (other people, desks, chairs, and so on) through the HMD's semitransparent optics. The HoloLens scans the user's environment and creates a 3D representation of that space. This allows holograms from the HoloLens to interact with objects in the room: holographic characters can sit on couches, fish can avoid table legs, screens can be placed on walls, and so on.

The system is completely wireless; it is the only commercially available positional-tracking device that is. The computer is built into the HMD, with processing power that sits between a smartphone and a VR-ready PC. The user can walk, untethered, in areas as large as 30 m x 30 m. While an Xbox controller and a proprietary single-button controller can be used, the main interaction with the HoloLens is through voice commands and two gestures from the user's hand (Select and Go back). The final difference is that the holograms only appear in a relatively narrow field of view. Because users can still see other people, whether sharing the same holographic projections or not, they can interact with each other in a more natural manner.

Cost: Development Edition: $3,000; Commercial Suite: $5,000
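Since voice is one of the HoloLens's primary input channels, a small sketch may help. Unity exposes Windows speech recognition through the KeywordRecognizer class in UnityEngine.Windows.Speech; that API is real, while the class name, keyword list, and handler below are illustrative assumptions:

    using UnityEngine;
    using UnityEngine.Windows.Speech;

    // Minimal voice-command handling for a HoloLens app in Unity.
    // Requires a Windows 10 / UWP build target. (Illustrative sketch.)
    public class VoiceCommands : MonoBehaviour
    {
        private KeywordRecognizer recognizer;

        void Start()
        {
            // Listen for a small, fixed vocabulary of spoken commands.
            recognizer = new KeywordRecognizer(new[] { "select", "go back" });
            recognizer.OnPhraseRecognized += OnPhraseRecognized;
            recognizer.Start();
        }

        private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
        {
            // React to whichever keyword was heard.
            Debug.Log("Heard: " + args.text);
        }

        void OnDestroy()
        {
            recognizer.Dispose();
        }
    }

A fixed-vocabulary recognizer like this is why HoloLens voice input feels responsive: the device only has to match against a handful of phrases rather than perform open-ended dictation.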
Headset costs and comparison across various features

The following chart is a sampling of VR headset prices, accurate as of February 1, 2018. VR/AR hardware is advancing rapidly, and prices and specs are going to change annually, sometimes quarterly; as of this writing, the price of the Oculus has already dropped by $200:

| Feature | Google Cardboard | Gear VR | Google Daydream | Oculus Rift | HTC Vive | Sony PS VR | HoloLens |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Complete cost for HMD, trackers, default controllers | $5 | $99 | $79 | $399 | $599 | $299 | $3,000 |
| Total cost with CPU (phone, PC, PS4) | $200 | $650 | $650 | $1,400 | $1,500 | $600 | $3,000 |
| Built-in headphones | No | No | No | Yes | No | No | Yes |
| Platform | Apple / Android | Samsung Galaxy | Google Pixel | PC | PC | Sony PS4 | Proprietary PC |
| Enhanced rotational tracking | No | Yes | No | Yes | Yes | Yes | Yes |
| Positional tracking | No | No | No | Yes | Yes | Yes | Yes |
| Built-in touch panel | No* | Yes | No | No | No | No | No |
| Motion controls | No | No | No | Yes | Yes | Yes | Yes |
| Tracking system | No | No | No | Optical | Lighthouse | Optical | Laser |
| True 360 tracking | No | No | No | Yes | Yes | No | Yes |
| Room scale and size | No | No | No | Yes (2 m x 2 m) | Yes (4 m x 4 m) | Yes (3 m x 3 m) | Yes (10 m x 10 m) |
| Remote | No | No | Yes | Yes | No | No | Yes |
| Gamepad support | No | Yes | No | Yes | Yes | Yes | Yes |
| Resolution per eye | Varies | 1440 x 1280 | 1440 x 1280 | 1200 x 1080 | 1200 x 1080 | 1080 x 960 | 1268 x 720 |
| Field of view (degrees) | Varies | 100 | 90 | 110 | 110 | 100 | 30 |
| Refresh rate | 60 Hz | 60 Hz | 60 Hz | 90 Hz | 90 Hz | 90-120 Hz | 60 Hz |
| Wireless | Yes | Yes | Yes | No | No | No | Yes |
| Optics adjustment | No | Focus | No | IPD | IPD | IPD | IPD |
| Operating system | iOS / Android | Android (Oculus) | Android (Daydream) | Win 10 (Oculus) | Win 10 (Steam) | Sony PS4 | Win 10 |
| Built-in camera | Yes | Yes | Yes* | No | Yes* | No | Yes |
| AR/VR | VR* | VR* | VR | VR | VR* | VR | AR |
| Natural user interface | No | No | No | No | No | No | Yes |

Choosing which HMD to support comes down to a wide range of issues: cost, access to hardware, use cases, image fidelity and processing power, and more. The chart above is provided to help you understand the strengths and weaknesses of each platform. There are many HMDs not included in this overview. Some are not commercially available at the time of this writing (Magic Leap, the Win 10 HMDs licensed from Microsoft, the Starbreeze/IMAX HMD, and others), and some, such as Razer's open source HMD, are not yet widely available or differentiated enough.
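For projects that target more than one of these headsets, a quick runtime check of the active device can drive per-platform tweaks. Here is a minimal Unity sketch using the UnityEngine.XR API (available in Unity 2017.2 and later); XRSettings and XRDevice are real Unity types, while the class name and branching are illustrative assumptions:

    using UnityEngine;
    using UnityEngine.XR;

    // Log which VR device (if any) is currently driving rendering,
    // so platform-specific adjustments can be applied. (Illustrative sketch.)
    public class HmdCheck : MonoBehaviour
    {
        void Start()
        {
            if (!XRSettings.enabled)
            {
                Debug.Log("Running without a VR headset.");
                return;
            }
            // Typical values include "Oculus", "OpenVR", "daydream", and "cardboard".
            Debug.Log("Active VR device: " + XRSettings.loadedDeviceName);
            Debug.Log("Headset model: " + XRDevice.model);
        }
    }

Checking the loaded device at startup is a common way to scale render resolution, input bindings, or comfort options to the capabilities listed in the chart above.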
You enjoyed an excerpt from the book Virtual Reality Blueprints, written by Charles Palmer and John Williamson. In this book, you will learn how to create immersive 3D games and applications with Cardboard VR, Gear VR, OculusVR, and HTC Vive.

The hype behind Magic Leap's New Augmented Reality Headsets

Create Your First Augmented Reality Experience Using the Programming Language You Already Know