
Tech Guides

852 Articles

Ian Goodfellow et al on better text generation via filling in the blanks using MaskGANs

Savia Lobo
19 Feb 2018
5 min read
In the paper "MaskGAN: Better Text Generation via Filling in the ______", Ian Goodfellow, along with William Fedus and Andrew M. Dai, proposes a way to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high-quality samples and have shown a lot of success in image generation.

Ian Goodfellow is a research scientist at Google Brain. His research interests lie in deep learning, machine learning security and privacy, and particularly generative models. He is known as the father of Generative Adversarial Networks, and he runs the Self-Organizing Conference on Machine Learning, which was founded at OpenAI in 2016.

Generative Adversarial Networks (GANs) are an architecture for training generative models in an adversarial setup: a generator produces images and tries to fool a discriminator that is trained to distinguish between real and synthetic images. GANs have had a lot of success in producing more realistic images than other approaches, but they have seen only limited use for text sequences. They were originally designed to output differentiable values, so discrete language generation is challenging for them. The researchers introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. The paper also shows that this GAN produces more realistic text samples than a maximum-likelihood-trained model.

MaskGAN: Better Text Generation via Filling in the _______

What problem is the paper attempting to solve?

The paper highlights how text generation has traditionally been done with recurrent neural network (RNN) models, by sampling from a distribution that is conditioned on the previous word and a hidden state consisting of a representation of the words generated so far. These models are typically trained with maximum likelihood, in an approach known as teacher forcing. This method causes problems during sample generation, when the model is often forced to condition on sequences that were never conditioned on at training time, which leads to unpredictable dynamics in the hidden state of the RNN. Methods such as Professor Forcing and Scheduled Sampling have been proposed to solve this issue. They work indirectly, either by making the hidden-state dynamics predictable (Professor Forcing) or by randomly conditioning on sampled words at training time (Scheduled Sampling), but they do not directly specify a cost function on the output of the RNN that encourages high sample quality. The method proposed in the paper tries to solve the problem of text generation with GANs through a sensible combination of novel approaches.

MaskGAN paper summary

The paper proposes to improve sample quality using a Generative Adversarial Network (GAN), which explicitly trains the generator to produce high-quality samples. The model is trained on a text fill-in-the-blank, or in-filling, task. In this task, portions of a body of text are deleted or redacted, and the goal of the model is to infill the missing portions of text so that the result is indistinguishable from the original data. While in-filling text, the model operates autoregressively over the tokens it has filled in so far, as in standard language modeling, while conditioning on the true known context. If the entire body of text is redacted, the task reduces to language modeling.
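To make the in-filling task concrete, here is a minimal sketch in Python of how a contiguous span of tokens can be masked so that a model must fill in the blanks conditioned on the surrounding context. It is not code from the paper: the mask token, span length, and function name are all illustrative, and the real model operates on token IDs with an actor-critic training procedure rather than plain strings.

```python
import random

MASK = "<m>"  # placeholder for a redacted token (illustrative; real models use token IDs)

def mask_contiguous_span(tokens, span_len, seed=None):
    """Redact a contiguous span of tokens, as in a fill-in-the-blank task.

    Returns the masked sequence (the context the generator sees) and the
    ground-truth tokens that were hidden (what the generator must recover).
    """
    rng = random.Random(seed)
    start = rng.randrange(0, max(1, len(tokens) - span_len))
    masked = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    target = tokens[start:start + span_len]
    return masked, target

sentence = "the quick brown fox jumps over the lazy dog".split()
masked, target = mask_contiguous_span(sentence, span_len=3, seed=0)
print(masked)  # e.g. ['the', 'quick', 'brown', '<m>', '<m>', '<m>', 'the', 'lazy', 'dog']
print(target)  # e.g. ['fox', 'jumps', 'over']
```

If span_len covers the whole sequence, every token is masked and the task reduces to ordinary language modeling, exactly as the summary above notes.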
The paper also shows, qualitatively and quantitatively, that the proposed method produces more realistic text samples than a maximum-likelihood-trained model.

Key takeaways

The paper gives a clear picture of what MaskGAN is, introducing a text generation model trained on in-filling. It considers the actor-critic architecture in extremely large action spaces, new evaluation metrics, and the generation of synthetic training data. The proposed contiguous in-filling task, i.e. MaskGAN, is a good approach to reducing mode collapse and helping with training stability for textual GANs. The paper shows that MaskGAN samples on a larger dataset (IMDB reviews) are significantly better than those of the corresponding tuned MaskMLE model, as judged by human evaluation. High-quality samples can be produced despite the MaskGAN model having much higher perplexity on the ground-truth test set.

Reviewer feedback summary/takeaways

Overall score: 21/30. Average score: 7/10. Reviewers liked the overall idea behind the paper. They appreciated the benefit of using context (left and right) by solving a "fill-in-the-blank" task at training time and translating this into text generation at test time. One reviewer stated that the experiments were well carried through and very thorough; another commented that the importance of the MaskGAN mechanism has been highlighted and the description of the reinforcement learning part of training has been clarified.

Alongside the pros, the paper also received some cons: there is a lot of pre-training required for the proposed architecture; generated texts are generally locally valid but not always globally valid; and it was not made very clear whether the discriminator also conditions on the unmasked sequence. Reviewers also raised some unanswered questions, such as: Was pre-training done for the baseline as well? How was the masking done, how were the words to mask chosen, and was this at random? Is it actually usable in place of ordinary LSTM (or RNN)-based generation?


How DevOps can improve software security

Hari Vignesh
11 Jun 2017
7 min read
The term "security" often evokes negative feelings among software developers because it is associated with additional programming effort, uncertainty, and roadblocks to fast development and release cycles. To secure software, developers must follow numerous guidelines that, while intended to satisfy some regulation or other, can be very restrictive and hard to understand. As a result, a lot of fear, uncertainty and doubt can surround software security.

First, let's consider the survey conducted by Spiceworks, in which IT pros were asked to rank a set of threats in order of risk to IT security. According to the report, the respondents ranked the following as their organization's three biggest risks to IT security: human error, lack of process, and external threats. DevOps can positively impact all three of these major risk factors without negatively impacting the stability or reliability of the core business network. Let's discuss how security in DevOps attempts to combat the toxic environment surrounding software security by shifting the paradigm from following rules and guidelines to creatively determining solutions for tough security problems.

Human error

We've all fat-fingered configurations and code before. Usually we catch them, but once in a while they sneak into production and wreak havoc on security. A number of big names have been caught in this situation, where a simple typo introduced a security risk. Often these slips occur because we're so familiar with what we're typing that we see what we expect to see, rather than what we actually typed. To reduce risk from human error via DevOps you can:
- Use templates to standardize common service configurations
- Automate common tasks to avoid simple typographical errors
- Read twice, execute once
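To make the first two of those points concrete, here is a minimal, hypothetical sketch of template-based configuration checking in Python. The template keys, values, and the server_config.json file name are invented for the example; the idea is simply that a standard template plus an automated comparison catches fat-fingered settings before they reach production.

```python
import json

# A standard template for a service's security-relevant settings (illustrative values only).
TEMPLATE = {
    "ssl_enabled": True,
    "min_tls_version": "1.2",
    "password_auth": False,
    "open_ports": [22, 443],
}

def find_drift(actual, template):
    """Return the settings that differ from the standard template."""
    return {
        key: {"expected": expected, "actual": actual.get(key)}
        for key, expected in template.items()
        if actual.get(key) != expected
    }

if __name__ == "__main__":
    # In practice this would come from the server or a configuration-management tool.
    with open("server_config.json") as fh:
        actual_config = json.load(fh)

    drift = find_drift(actual_config, TEMPLATE)
    if drift:
        print("Configuration drift detected:")
        print(json.dumps(drift, indent=2))
        raise SystemExit(1)  # fail the run so the drift is fixed before deployment
    print("Configuration matches the template.")
```

Run automatically, for example from a deployment pipeline, a check like this turns the "read twice, execute once" habit into something a machine does on every change.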
Lack of process

First, there's the fact that there's almost no review of the scripts that folks already use to configure, change, shut down, and start up services across the production network. Don't let anyone tell you they don't use scripts to eliminate the yak shaving that exists in networking and infrastructure, too. They do. But those scripts aren't necessarily reviewed, they certainly aren't versioned like the code artifacts they are, and they are rarely reused. The other problem is that there's simply no governed process; it's tribal knowledge. To reduce risk from a lack of process via DevOps:
- Define the deployment process clearly. Understand prerequisites and dependencies, and eliminate redundancies or unnecessary steps.
- Move toward the use of orchestration as the ultimate executor of the deployment process, employing manual steps only when necessary.
- Review and manage any scripts used to assist in the process.

External threats

At first glance, this one seems to be the least likely candidate for being addressed with DevOps. Given that malware and multi-layered DDoS attacks are among the most existential threats to businesses today, that's understandable. There are entire classes of vulnerabilities that can only be detected manually by developers or experts reviewing the code. But that review doesn't really extend to production, where risk becomes reality when it's exploited. One way that DevOps can reduce potential risk is more extensive testing and development of web app security policies during development, policies that can then be deployed in production.

Adopting a DevOps approach to developing those policies, and treating them like code too, produces a faster and more thorough policy that does a better job overall of preventing the existential threats from becoming all-too-real nightmares. To reduce the risk of threats becoming reality via DevOps:
- Shift web app security policy development and testing left, into the app development life cycle.
- Treat web app security policies like code. Review and standardize. Test often, even in production.
- Automate using technology such as dynamic application security testing (DAST) and, when possible, integrate the results into the development life cycle for faster remediation that reduces risk earlier.

Best DevOps practices

Below are the top five DevOps practices and tools that can help improve overall security when incorporated directly into your end-to-end continuous integration/continuous delivery (CI/CD) pipeline: collaboration, security test automation, configuration and patch management, continuous monitoring, and identity management.

Collaboration and understanding your security requirements

Many of us are required to follow a security policy. It may take the form of a corporate security policy, a customer security policy, and/or a set of compliance standards (e.g. SOX or HIPAA). Even if you are not mandated to use a specific policy or regulatory standard, we all still want to follow best practices in securing our systems and applications. The key is to identify your sources of security expertise, collaborate early, and understand your security requirements early so they can be incorporated into the overall solution.

Security test automation

Whether you're building a brand-new solution or upgrading an existing one, there are likely several security considerations to incorporate. Due to the quick, iterative nature of agile development, tackling all security at once in a "big bang" approach will likely result in project delays. To keep projects moving, a layered approach is often helpful, letting you continuously build additional security layers into your pipeline as you progress from development to a live product. Security test automation gives you quality gates throughout your deployment pipeline, providing immediate feedback to stakeholders on security posture and allowing for quick remediation early in the pipeline.

Configuration management

In traditional development, servers or instances are provisioned and developers are able to work on the systems. To ensure servers are provisioned and managed using consistent, repeatable, and reliable patterns, it's critical to have a strategy for configuration management. The key is being able to reliably guarantee and manage consistent settings across your environments.

Patch management

Similar to the concerns with configuration management, you need a method to quickly and reliably patch your systems. Missing patches are a common cause of exploited vulnerabilities, including malware attacks. Being able to quickly deliver a patch across a large number of systems can drastically reduce your overall security exposure.

Continuous monitoring

Having monitoring in place across all environments, with transparent feedback, is vital so that it can alert you quickly to potential breaches or security issues.
It's important to identify your monitoring needs across the infrastructure and application, and then take advantage of existing tooling to quickly identify, isolate, shut down, and remediate potential issues before they happen or before they are exploited. Part of your monitoring strategy should also include the ability to automatically collect and analyze logs; the analysis of running logs can help identify exposures quickly. Compliance activities can become extremely expensive if they are not automated early.

Identity management

DevOps practices allow us to collaborate early with security experts, increase the level of security testing and automation to enforce quality gates for security, and provide better mechanisms for ongoing security management and compliance activities. While painful to some, security has to be important to all if we don't want to make headlines.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


Why training a Generative Adversarial Network (GAN) Model is no piece of cake

Shoaib Dabir
14 Dec 2017
5 min read
This article is an excerpt from a book by Kuntal Ganguly titled Learning Generative Adversarial Networks. The book gives complete coverage of Generative Adversarial Networks.

The article highlights some of the common challenges that a developer might face while using GAN models.

Common challenges faced while working with GAN models

Training a GAN is basically about two networks, a generator G(z) and a discriminator D(z), racing against each other to reach an optimum, more specifically a Nash equilibrium. The definition of Nash equilibrium, as per Wikipedia: (in economics and game theory) a stable state of a system involving the interaction of different participants, in which no participant can gain by a unilateral change of strategy if the strategies of the others remain unchanged.

1. Setup failure and bad initialization

If you think about it, this is exactly what a GAN is trying to do: the generator and discriminator reach a state where neither can improve further given that the other is kept unchanged. The setup of gradient descent is to take a step in a direction that reduces the loss measure defined on the problem, but we are by no means enforcing that the networks reach Nash equilibrium in a GAN, which has a non-convex objective with continuous, high-dimensional parameters. The networks try to take successive steps to minimize a non-convex objective and end up in an oscillating process rather than decreasing the underlying true objective. In most cases, when your discriminator attains a loss very close to zero, you can right away figure out that something is wrong with your model; the biggest pain point is figuring out what. Another practical trick during GAN training is to purposefully make one of the networks stall or learn more slowly, so that the other network can catch up. In most scenarios it's the generator that lags behind, so we usually let the discriminator wait. This may be fine to some extent, but remember that for the generator to get better it requires a good discriminator, and vice versa; ideally the system would have both networks learn at a rate where both get better over time. The ideal minimum loss for the discriminator is close to 0.5: this is where the generated images are indistinguishable from the real images from the perspective of the discriminator.
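To make the alternating optimization and the loss-watching advice concrete, here is a minimal GAN training loop sketch in PyTorch. It is not from the book: the toy fully connected networks, the 2-D "data", and every hyperparameter are illustrative, and a real setup would train on actual data with more careful architectures.

```python
import torch
import torch.nn as nn

noise_dim, data_dim, batch = 16, 2, 64

# Toy generator and discriminator (illustrative sizes).
G = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def sample_real(n):
    # Stand-in for real data: points drawn from a fixed Gaussian.
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(2000):
    # 1) Discriminator step: push real toward 1 and fakes toward 0
    #    (fakes are detached so this step does not update the generator).
    real = sample_real(batch)
    fake = G(torch.randn(batch, noise_dim))
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Generator step: try to make the discriminator label fresh fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(G(torch.randn(batch, noise_dim))), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

    if step % 500 == 0:
        # A discriminator loss collapsing toward zero is the warning sign described
        # above; near equilibrium the discriminator stays unsure about the fakes.
        print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

Watching d_loss and g_loss over time in a loop like this is usually the first diagnostic for the failure modes described in this article.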
2. Mode collapse

One of the main failure modes of training a generative adversarial network is called mode collapse, or sometimes the Helvetica scenario. The basic idea is that the generator can accidentally start to produce several copies of exactly the same image. The reason is related to the game-theory setup: we can think of training a GAN as first maximizing with respect to the discriminator and then minimizing with respect to the generator. If we fully maximize with respect to the discriminator before we start to minimize with respect to the generator, everything works out just fine. But if we go the other way around, minimizing with respect to the generator and then maximizing with respect to the discriminator, everything breaks. The reason is that if we hold the discriminator constant, it will describe a single region in space as being the point that is most likely to be real rather than fake, and the generator will then choose to map all noise input values to that same most-likely-to-be-real point.

3. Problems with counting

GANs can sometimes be far-sighted and fail to differentiate the number of particular objects that should occur at a location; in the book's example, a generated head ends up with more eyes than it should have.

4. Problems with perspective

GANs are sometimes not capable of differentiating between front and back views, and hence fail to adapt well to 3D objects when generating 2D representations of them.

5. Problems with global structure

Similar to the problems with perspective, GANs do not understand holistic structure. For example, one of the book's generated images shows a "quadruple cow", that is, a cow standing on its hind legs and simultaneously on all four legs. That is definitely unrealistic and not possible in real life!

Training GAN models, then, comes with some common challenges. The major ones are setup failure and bad initialization, and mode collapse (sometimes called the Helvetica scenario), along with the problems with counting, perspective, and global structure listed above. To read more about solutions with real-world examples, check out the book Learning Generative Adversarial Networks.


7 Tips For Python Performance

Gabriel Marcondes
24 Jun 2016
7 min read
When you begin using Python after using other languages, it's easy to bring a lot of idioms with you. Though they may work, they are not the best, most beautiful, or fastest ways to get things done with Python; they're not pythonic. I've put together some tips on basic things that can provide big performance improvements, and I hope they'll serve as a starting point for you as you develop with Python.

Use comprehensions

Comprehensions are great. Python knows how to make lists, tuples, sets, and dicts from single statements, so you don't need to declare, initialize, and append things to your sequences as you do in Java. It helps not only with readability but also with performance; if you delegate something to the interpreter, it will make it faster. def do_something_with(value): return value * 2 # this is an anti-pattern my_list = [] for value in range(10): my_list.append(do_something_with(value)) # this is beautiful and faster my_list = [do_something_with(value) for value in range(10)] # and you can even plug some validation def some_validation(value): return value % 2 my_list = [do_something_with(value) for value in range(10) if some_validation(value)] my_list [2, 6, 10, 14, 18] And it looks the same for other types. You just need to change the appropriate surrounding symbols to get what you want. my_tuple = tuple(do_something_with(value) for value in range(10)) my_tuple (0, 2, 4, 6, 8, 10, 12, 14, 16, 18) my_set = {do_something_with(value) for value in range(10)} my_set {0, 2, 4, 6, 8, 10, 12, 14, 16, 18} my_dict = {value: do_something_with(value) for value in range(10)} my_dict {0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18}

Use Generators

Generators are objects that generate sequences one item at a time. This provides a great gain in performance when working with large data, because it won't generate the whole sequence unless needed, and it's also a memory saver. A simple way to use generators is very similar to the comprehensions we saw above, but you encase the expression with () instead of [], for example: my_generator = (do_something_with(value) for value in range(10)) my_generator <generator object <genexpr> at 0x7f0d31c207e0> The range function itself returns a generator (unless you're using legacy Python 2, in which case you need to use xrange). Once you have a generator, call next to iterate over its items, or use it as a parameter to a sequence constructor if you really need all the values: next(my_generator) 0 next(my_generator) 2 To create your own generators, use the yield keyword inside a loop instead of a regular return at the end of a function or method. Each time you call next on it, your code will run until it reaches a yield statement, and it saves the state for the next time you ask for a value. # this is a generator that infinitely returns a sequence of numbers, adding 1 to the previous def my_generator_creator(start_value): while True: yield start_value start_value += 1 my_integer_generator = my_generator_creator(0) my_integer_generator <generator object my_generator_creator at 0x7f0d31c20708> next(my_integer_generator) 0 next(my_integer_generator) 1 The benefits of generators in this case are obvious: you would never finish generating numbers if you were to create the whole sequence before using it. A great use of this, for example, is for reading a file stream.
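As a small, hedged illustration of that last point about reading a file stream, the sketch below wraps file reading in a generator so that only one line is held in memory at a time; the file name and the "ERROR" filter are invented for the example.

```python
def clean_lines(path):
    """Yield stripped, non-empty lines one at a time instead of reading the whole file."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield line

# Only one line is in memory at any moment, however large the file is.
for line in clean_lines("big_log_file.txt"):  # hypothetical file name
    if "ERROR" in line:
        print(line)
```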
Use sets for membership checks

It's wonderful how we can use the in keyword to check for membership in any type of sequence. But sets are special. They are hash-based: an unordered collection of values where an item's position is calculated rather than searched. When you search for a value inside a list, the interpreter scans the entire list to see whether the value is there. If you are lucky, the value is the first of the sequence; on the other hand, it could even be the last item of a long list. When working with sets, the membership check always takes the same time, because the position is calculated and the interpreter knows where to look for the value. If you're using long sets or loops, the performance gain is considerable. You can create a set from any iterable object as long as the values are hashable. my_list_of_fruits = ['apple', 'banana', 'coconut', 'damascus'] my_set_of_fruits = set(my_list_of_fruits) my_set_of_fruits {'apple', 'banana', 'coconut', 'damascus'} 'apple' in my_set_of_fruits True 'watermelon' in my_set_of_fruits False

Deal with strings the right way

You've probably done or read something like this before: # this is an anti-pattern "some string, " + "some other string, " + "and yet another one" 'some string, some other string, and yet another one' It might look easy and fast to write, but it's terrible for your performance. str objects are immutable, so each time you add strings, trying to append them, you're actually creating new strings. There are a handful of methods to deal with strings in a faster and more optimal way. To join strings, use the join method on a separator with a sequence of strings. The separator can be an empty string if you just want to concatenate. # join a sequence of strings, based on the separator you want ", ".join(["some string", "some other string", "and yet another one"]) 'some string, some other string, and yet another one' ''.join(["just", "concatenate"]) 'justconcatenate' To merge strings, for example, to insert information in templates, we have a classical way; it resembles the C language: # the classical way "hello %s, my name is %s" % ("everyone", "Gabriel") 'hello everyone, my name is Gabriel' And then there is the modern way, with the format method. It is quite flexible: # formatting with sequential strings "hello {}, my name is {}".format("everyone", "Gabriel") 'hello everyone, my name is Gabriel' # formatting with indexed strings "hello {1}, my name is {2} and we all love {0}".format("Python", "everyone", "Gabriel") 'hello everyone, my name is Gabriel and we all love Python'

Avoid intermediate outputs

Every programmer in this world has used print statements for debugging or progress checking purposes at least once. If you don't know pdb for debugging yet, you should check it out immediately. But I'll agree that it's really easy to write print statements inside your loops to keep track of where your program is. I'll just tell you to avoid them because they're synchronous and will significantly raise the execution time. You can think of alternative ways to check progress, such as watching via the filesystem the files that you have to generate anyway. Asynchronous programming is a huge topic that you should take a look at if you're dealing with a lot of I/O operations.

Cache the most requested results

Caching is one of the greatest performance tuning tweaks you'll ever find. Python gives us a handy way of caching function calls with a simple decorator, functools.lru_cache. Each time you call a function that is decorated with lru_cache, the interpreter checks whether that call was made recently, on a cache that is a dictionary of parameter-result pairs.
dict checks are as fast as those of set, and if we have repetitive calls, it's worth looking at this cache before running the code again. from functools import lru_cache @lru_cache(maxsize=16) def my_repetitive_function(value): # pretend this is an extensive calculation return value * 2 for value in range(100): my_repetitive_function(value % 8) The decorator gives the method cache_info, where we can find statistics about the cache. We can see the eight misses (for the eight times the function was really called) and 92 hits. As we only have eight different inputs (because of the % 8 trick), the cache was never fully filled. my_repetitive_function.cache_info() CacheInfo(hits=92, misses=8, maxsize=16, currsize=8)

Read

In addition to these six tips, read a lot, every day. Read books and other people's code. Make code and talk about code. Time, practice, and exchanging experiences will make you a great programmer, and you'll naturally write better Python code.

About the author

Gabriel Marcondes is a computer engineer working on Django and Python in São Paulo, Brazil. When he is not coding, you can find him at @ggzes, talking about rock n' roll, football, and his attempts to send manned missions to fictional moons.


How has Python remained so popular?

Antonio Cucciniello
21 Sep 2017
4 min read
In 1991, the Python programming language was created. It is a dynamically typed, object oriented language that is often used today for scripting and web applications, usually paired with frameworks such as Django or Flask on the backend. Decades after its creation, it is still extremely relevant and one of the most widely used programming languages in the world. But why is this the case? Today we will look at the reasons why Python has remained so popular over the years.

Used by bigger companies

Python is widely used by bigger technology companies. When bigger tech companies (think companies such as Google) use Python, the engineers that work there also use it. If developers use Python at their jobs, they will take it to their next job and pass the knowledge on. In addition, Python continues to spread organically when these developers use their knowledge of the language in their personal projects as well, further spreading its usage.

Plenty of styles are not used

In Python, whitespace is important, but in other languages such as JavaScript and C++ it is not. The whitespace is used to dictate the scope of the statements in that indent. Making whitespace significant reduces the need for things like braces and semicolons in your code. That reduction alone can make your code look simpler and cleaner. People are always more willing to try a language that looks and feels cleaner, because it seems psychologically easier to learn.

Variety of libraries and third-party support

Being around as long as it has, Python has plenty of built-in functionality. It has an extremely large standard library with plenty of things that you can use in your code. On top of that, it has plenty of third-party support libraries that make things even easier. All of this functionality allows programmers to focus on the more important logic that is vital to their application's core functionality. This makes programmers more efficient, and who doesn't like efficiency?

Object oriented

As mentioned earlier, Python is an object oriented programming language. Because it is object oriented, more people are likely to adopt it, since object oriented programming allows developers to model their code very similarly to real-world behavior.

Built-in testing

Python allows you to import a package called unittest. This package is a full unit testing suite with setup and teardown functions. Having this built in gives developers a stable tool for testing their applications.
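As a quick, self-contained sketch of the built-in unittest module just mentioned, the example below shows the setUp/tearDown hooks and a couple of assertions; the add function and the test names are invented purely for illustration.

```python
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def setUp(self):
        # Runs before every test method.
        self.numbers = (2, 3)

    def tearDown(self):
        # Runs after every test method.
        self.numbers = None

    def test_add_positive(self):
        self.assertEqual(add(*self.numbers), 5)

    def test_add_negative(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()
```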
Readability and learnability

As we mentioned earlier, whitespace is significant, so we do not need braces and semicolons. Also, Python is dynamically typed, so it is easier to create and use variables without really having to worry about the type. All of these topics can be difficult for new programmers to learn. Python makes it easier by removing some of the difficult parts and having nicer looking code. The reduction in difficulty leads people to choose Python as their first programming language more often than others. (It was my first programming language.)

Well documented

Building upon the standard library and the vast number of third-party packages, the code in those applications is usually well documented. They tend to have plenty of helpful comments and tons of additional documentation to explain what is happening. From a developer standpoint this is crucial. Having great documentation can make or break a language's usage for me.

Multiple applications

To top it off, Python has many applications it can be used in. It can be used to develop games for fun, web applications to aid businesses, and data science applications. The wide variety of uses attracts more and more people to Python, because when you learn the language you gain the power of versatility. The scope of applications is vast. With all of these benefits, who wouldn't consider Python as their next language of choice? There are many options out there, but Python tends to be superior when it comes to readability, support, documentation, and its wide range of applications.

About the author

Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.js) from New Jersey. His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.


Angular.js, Node.js, and Firebase: the startup web developer's toolkit

Erik Kappelman
09 Mar 2016
7 min read
So, you've started a web company. You've even attracted one or two solid clients. But now you have to produce, and you have to produce fast. If you've been in this situation, then we have something in common. This is where I found myself a few months ago. A caveat: I am a self-taught web developer in an absolute sense. Self-taught or not, in August of 2015 I found myself charged with creating a fully functional blogging app for an author. Needless to say, I was in over my head. I was aware of Node.js, because that had been the backend for the very simple static content site my company had produced first. I was aware of database concepts and I did know a reasonable amount of JavaScript, but I felt ill prepared to pull all of these tools together in a cohesive fashion. Luckily for me it was 2015 and not 1998. Today, web developers are blessed with tools that make the development of websites and web apps a breeze. After some research, I decided to use Angular.js to control the frontend behavior of the website, Node.js with Express.js as the backend, and Firebase to hold the data. Let me walk you through the steps I used to get started.

First of all, if you aren't using Express.js on top of Node.js for your backend in development, you should start. Node.js was written in C, C++ and JavaScript by Ryan Dahl in 2009. This multiplatform runtime environment for JavaScript is fast, open source, and easy to learn, because odds are you already know JavaScript. Using Express.js and the Express generator in concert with Node.js makes development quite simple. Express.js is Node middleware. In simple terms, Express.js makes your life a whole lot easier by doing most of the work for you.

So, let's build our backend. First, install Node.js and NPM on your system. There are a variety of online resources to complete this step. Then, using NPM, install the Express application generator. $ npm install express-generator -g Once we have Node.js and the Express generator installed, get to your development folder and execute the following command to build the skeleton of your web app's backend: $ express app-name -e I use the -e flag to set the middleware to use ejs files instead of the default jade files. I prefer ejs to jade, but you might not. This command will produce a subdirectory called app-name in your current directory. If you navigate into this directory, type the commands $ npm install $ npm start and then navigate in a browser to http://localhost:3000, you will see the basic welcome page auto-generated by Express. There are thousands of great things about Node.js and Express.js, and I will leave them to be discovered by you as you continue to use these tools.

Right now, we are going to get Firebase connected to our server. This can serve as general instructions for installing and using Node modules as well. Head over to firebase.com and create a free account. If you end up using Firebase for a commercial app you will probably want to upgrade to a paid account, but for now the starter account should be fine. Once you get your Firebase account set up, create a Firebase instance using their online interface. Once this is done, get back to your backend code to connect Firebase to your server. First install the Firebase Node module. $ npm install firebase --save Make sure to use the --save flag, because this puts a new line in your package.json file located in the root of the web server.
This means that if you type npm install, as you did earlier, NPM will know that you added firebase to your web server and install it if it is not already present. Now open the index.js file in the routes folder in the root of your Node app. At the top of this file, put in the line var Firebase = require('firebase'); This pulls the Firebase module you installed into your code. Then, to create a connection to the account you just created on Firebase, put in the following line of code: var FirebaseRef = new Firebase("https://APP-NAME.firebaseio.com/"); Now, to take a snapshot in JSON of your Firebase and store it in an object, include the following lines: var FirebaseContent = {}; FirebaseRef.on("value", function(snapshot) { FirebaseContent = snapshot.val(); }, function(errorObject) { console.log("The read failed: " + errorObject.code); }); FirebaseContent is now a JavaScript object containing your complete Firebase data.

Now let's get Angular.js hooked up to the frontend of the website, and then it's time for you to start developing. Head over to angularjs.org and download the source code or get the CDN. We will be using the CDN. Open the file index.ejs in the views directory in your Node app's root. Modify the <head> tag, adding the CDN. <head> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.0-rc.1/angular.min.js"></script> </head> This allows you to use the Angular.js tools. Angular.js uses controllers to control your app. Let's make your Angular app and connect a controller. Create a file called myApp.js in your public/javascripts directory. In myApp.js include the following: angular.module("myApp", []); This file will grow, but for now this is all you need. Now create a file in the same directory called myController.js and put this code into it: angular.module("myApp").controller('myController', ['$scope', function($scope){ $scope.jsVariable = 'Controller is working'; }]); Now modify the index.ejs file again. <html ng-app="myApp"> <head> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.0-rc.1/angular.min.js"></script> <script src="/javascripts/myApp.js"></script> <script src="/javascripts/myController.js"></script> </head> <body ng-controller="myController"> <h1> {{jsVariable}} </h1> </body> </html> If you start your app again and go back to http://localhost:3000, you should see that your controller is now controlling the contents of the first heading. This is just a basic setup and there is much more you will learn along the way. Speaking from experience, taking the time to learn these tools and put them into use will make your development faster and easier, and your results will be of much higher quality.

About the Author

Erik was born and raised in Missoula, Montana. He attended the University of Montana and received a degree in economics. Along the way, he discovered the value of technology in our modern world, especially to businesses. He is currently seeking a master's degree in economics from the University of Montana and his research focuses on the economics of gender empowerment in the developing world. During his professional life, he has worn many hats: from cashier, to barista, to community organizer. He feels that Montana offers a unique business environment that is often best understood by local businesses. He started his company, Duplovici, with his friends in an effort to meet the unique needs of Montana businesses, non-profits, and individuals.
He believes technology is not simply an answer to profit maximization for businesses: by using internet technologies we can unleash human creativity through collective action and the arts, as well as business ventures. He is also the proud father of two girls and has a special place in his heart for getting girls involved with technology and business.

Computerizing Our World with Wearables and IoT

Sam Wood
23 Mar 2016
5 min read
"Sure, the Frinkiac 7 looks impressive – Don't touch it! – But I predict that within 100 years, computers will be twice as powerful, 10,000 times larger, and so expensive only the five richest kings in Europe will own them." (Professor Frink, The Simpsons, Series 7, Episode 23, 1996)

We've always been laughably bad at predicting what technology is going to look like in the future. Alexander Graham Bell is famously quoted as saying, "I truly believe that one day there will be a telephone in every town in America". Today, we've got a telephone - complete with a supercomputer - for every cerebral hemisphere. From the mainframe, through the PC, and on to the smartphone, computers have been getting smaller and smaller even as their processing power keeps increasing. This has gone hand in hand with the ubiquity of computing devices: from something that could only be afforded by the largest of organizations, to something that is owned by every individual. As computers keep getting smaller and cheaper, it's easy to see that we will soon live in a world where they easily outnumber people, a world where they move from being separate devices to being utterly integrated into our lives. So what form will this integration take? The two most likely options are wearables and home automation: computerizing ourselves, and computerizing our world.

Automating Our Homes

Currently, the smart home is a luxury product. But with ever-improving micro technology intersecting with rising energy prices, we may soon find that home automation becomes a necessity for us all. It's easy to imagine a home controlled from your mobile device; already, you can purchase commercial systems to set your thermostat from an app. In the next two years, we will be developing systems that cater to real-life generic needs such as stable implementations of lighting systems, entertainment systems, intrusion detection and monitoring systems, and so on. The immediate challenge for developers looking to work in the home automation space is likely to be wrangling amazing UI for our new remote controls. Are you relishing the prospect of creating an interface that lets a whole family squabble over the thermostat, the lighting, and more, all from their cellphones? Beyond that? Established systems will become more advanced and scalable to perform complex and dynamic tasks. Security will be much more than retina and fingerprint scans, and will move towards mapping multiple biometric feeds. Home assistance will progress to speech, mood, and behavioral recognition. How about your own AI assistant that learns the needs of you and your family, and controls your home accordingly? Already, the likes of Google Now can make predictions about your movements and habits. The future will see this technology become refined and integrated across our automated homes: smart houses that turn on the lights when we get up, get the hot water ready for when we shower, and track the weather forecast and adjust the heating accordingly, all without us lifting a finger.

Wearing Our Tech

Today, the successful wearables are either those with narrow applications like fitness monitoring, or those providing 'second screen' options to our phones. Tomorrow? So many of the proposed areas of what the future of tech will look like require us to be plugged in to a wearable device.
From Internet of Things devices monitoring and tracking our body's vital signs, to gesture computing, to virtual and augmented reality: all of these assume taking computing devices out of our pockets and putting them onto our bodies. Skeptical? Sure, Google Glass may have died a death, but the history of tech is littered with the bodies of ideas that were just before their time. The Apple Newton did not usher in the age of personal devices like the iPhone did; the failure of the Rocket eBook did not mean the Kindle was doomed from the start. I'm personally hoping that just because Glass failed to get any serious traction does not mean I'm never going to get my Spider Jerusalem or Adam Jensen glasses-based HUDs. One thing's for sure though: we'll carry more and more computing power wherever we go. Just as advances in miniaturization and technology meant that the pocket watch was superseded by the wrist watch in the name of efficiency, we'll see our outfits come to incorporate these always-available personal assistants, giving us information and guidance based on context. One study by ONWorld has portrayed a wearable tech industry on the cusp of reaching $50 billion within the next five years. According to the research, more than 700 million wearable tech units will be shipped to a market eager for advanced technology on their wrists, on their faces, and even incorporated into their clothing.

The Internet of Everything

Ultimately, the success of wearables and home automation will be about incorporating our own squishy, organic forms into the Internet of Things. As wearables and home automation allow the data of our lives to be seamlessly recorded and crunched, the ability of technology to predict our needs and accommodate us only increases. In ten years' time? Our phone might know us better than we know ourselves. From the 23rd to the 25th you can save 50% on some of our best Internet of Things titles. If one simply isn't enough, grab any 5 of the featured titles for $50.


Virtual Reality for Developers: Cardboard, Gear VR, Rift, and Vive

Casey Borders
22 Jul 2016
5 min read
Right now is a very exciting time in the virtual reality space! We've already seen what mobile VR platforms have to offer with Google Cardboard and Samsung Gear VR. Now we're close to a full commercial release of the two biggest desktop VR contenders. Oculus started taking pre-orders in January for their Rift and plan to start shipping by the end of March. HTC opened pre-orders on March 1st for their Vive, which will ship in April. Both companies are working to make it as easy as possible for developers to build games for their platform. Oculus have offered Unity integration since the beginning with their first development kit (DK1), but have stopped supporting OS X after their acquisition by Facebook, leaving the Mac runtime at version v0.5.0.1-beta. The Windows runtime is up to version v0.8.0.0-beta. You can download both of these, as well as a bunch of other tools, from their developer download site. HTC has teamed up with Valve, who is writing all of the software for the Vive. It was announced at the Vision AR/VR Summit in February that they would have official Unity integration coming soon.

Adding basic VR support to your game is amazingly easy with the Oculus Unity package. From the developer download page, look under the Engine Integration heading and download the "Oculus Utilities for Unity 5" bundle. Importing that into your Unity project will bring in everything you need to integrate VR into your game, as well as some sample scenes that you can use for reference while you're getting started. Looking under OVR > Prefabs, you'll find an OVRCameraRig prefab that works as a drop-in replacement for the standard Unity camera. This prefab handles retrieving the sensor data from the head-mounted display (HMD) and rendering the stereoscopic output. This lets you go from downloading the Unity package to viewing your game in the Oculus Rift in just a few minutes!

Virtual Reality Games

Virtual reality opens up a whole new level of immersion in games. It can make the player truly feel like they are in another world. It also brings with it some unique obstacles that you'll need to consider when working on a VR game. The first and most obvious thing to consider is that the player's vision is going to be completely blocked by the head-mounted display. This means that you can't ask the player to type anything, and it's going to be extremely difficult and frustrating for them to use a large number of key bindings. Game controllers are a great way to get around this since they have a limited number of buttons and are very tactile. If you are going to target PCs then supporting a mouse and keyboard is a must; just try to keep the inputs to a reasonable number.

The User Interface

The second issue is the user interface. Screen-space UI is jarring in VR and can really damage a player's sense of immersion. Also, if it blocks out a large portion of the player's field of view, it can cause them to become nauseous, since it will remain static as they move their head around. A better way to handle this would be to build the UI into your world. If you want to show the user how much ammo they have left, build a display into the gun. If your game requires users to follow a set path, try putting up signs along the way or painting the directions on the road. If you really need to keep some kind of information visible all the time, try to make it fit with the theme of your world. For example, maybe your player has a helmet that projects a HUD like Tony Stark's Iron Man suit.
Player Movement

The last big thing to keep in mind when making VR-friendly games is player movement. The current hardware offerings allow two levels of player movement. Google Cardboard, Samsung Gear VR and the Oculus Rift allow for mostly stationary interaction. The head-mounted display will give you its yaw, pitch and roll, but the player will remain mostly still. The Rift DK2 and consumer version allow for some range of motion as long as the head-mounted display stays within the field of view of its IR camera. This allows players to lean in and get a closer look at things, but not much else. To allow the player to explore your game world, you'll still need to implement the same type of movement controls that you would for a regular non-VR game. The HTC Vive is full room-scale VR, which means that the player has a volume within which they have complete freedom to move around. The position and orientation of the head-mounted display and the controllers will be given to you, so you can see where the player is and what they are trying to interact with. This comes with its own interesting problems, since the game world can be larger than the player's play space, and each person is going to have a different amount of room that they can dedicate to VR, so the volumes are going to be different for each player.

Above everything else though, virtual reality is just a whole lot of fun! For developers, it offers a lot of new and interesting challenges, and for players, it allows you to explore worlds like you never could before. And with VR setups ranging from a few dollars of cardboard to an entire room-scale rig, there's something out there to satisfy just about anybody!

About the author

Casey Borders is an avid gamer and VR fan with over 10 years of experience in graphics development. He has worked on everything from military simulation to educational VR/AR experiences to game development. More recently, he has focused on mobile development.


Virtual Reality and Social E-Commerce: a Rift between Worlds?

Julian Ursell
30 Jun 2014
4 min read
It's doubtful many remember Nintendo's failed games console, the Virtual Boy, which was one of the worst commercial nose dives for a games console in the past 20 years. Commercial failure though it was, the concept of virtual reality, back then and up to the present day, is still intriguing to many people, considering what technology that can properly leverage VR is capable of. The most significant landmark in this quarter of technology in the past six months is undoubtedly Facebook's acquisition of the Oculus Rift VR headset manufacturer, Oculus VR. Beyond using the technology purely for creating new and immersive gaming experiences (you can imagine it's pretty effective for horror games), there are plans at Facebook, and amongst other forward-thinking companies, to mobilize the tech to transform the e-commerce experience into something far more interactive than the relatively passive browsing experience it is right now.

Developers are re-imagining the shopping experience through the gateway of virtual reality, in which a storefront becomes an interactive user experience where shoppers can browse and manipulate the items they are looking to buy (this is how the company Chaotic Moon Studios imagines it), adding another dimension to the way we can evaluate and make decisions on the items we are looking to purchase. On the surface there's a great benefit to being able to draw the user experience even closer to the physical act of going out into the real world to shop, and one can imagine a whole array of other integrated experiences that could extend from this (say, for example, inspecting the interior of the latest Ferrari). We might even be able to shop with others, making decisions collectively and suggesting items of interest to friends across social networks, creating a unified and massively integrated user experience.

Setting aside the push from the commercial bulldozer that is Facebook, is this kind of innovation something that people will get on board with? We can probably answer with some confidence that, even with a finalized experience, people are not going to instantly buy in to virtual reality e-commerce, especially with the requirement of purchasing an Oculus Rift (or any other VR headset that emerges, such as Sony's Morpheus headset) for this purpose. Factor in the considerable backlash against the Kickstarter-backed Oculus Rift after its buyout by Facebook and there's an even steeper hill of users already averse to engaging with the idea. From a purely personal perspective, you might also ask whether wearing a headset is going to be anything like the annoying appendage of wearing 3D glasses at the cinema, on top of the substantial expense of acquiring the Rift headset. 3D cinema actually draws a close parallel: both 3D and VR are technology initiatives attempted and failed in years previous, both are predicated on higher user costs, and both are never too far away from being harnessed to that dismissive moniker of "gimmick".

From Facebook's point of view, we can see why incorporating VR activity is a big draw. In terms of keeping social networking fresh, there's only so far that re-designing the interface and continually connecting applications (or the whole Internet) through Facebook will take them. Acquiring Oculus is one step towards trying to augment (reinvigorate?) the social media experience, orchestrating the user (consumer) journey for business and e-commerce in one massive virtual space.
Thought about in another way, it represents a form of opt-in user subscription, but one in which the subscription is based upon a strong degree of sustained investment from users into the idea of VR, which is something that is extremely difficult to engineer. It’s still too early to say whether the tech mash-up between VR, social networking, and e-commerce is one in which people will be ready to invest (and if they will ever be ready). You can’t fault the idea on the basis of sheer innovation, but at this point one would imagine that users aren’t going to plunge head first into a virtual reality world without hesitation. For the time being, perhaps, people would be more interested in more productive uses of immersive VR technology, say for example flying like a bird.


Containerized Data Science with Docker

Darwin Corn
03 Jul 2016
4 min read
So, you're itching to begin your journey into data science but you aren't sure where to start. Well, I'm glad you've found this post, since I will explain, step by step, how I circumvented the unnecessarily large technological barrier to entry and got my feet wet, so to speak. Containerization in general, and Docker in particular, have taken the IT world by storm in the last couple of years by making LXC containers more than just VM alternatives for the enterprising sysadmin. Even if you're coming at this post from a world devoid of IT, the odds are good that you've heard of Docker and their cute whale mascot. Of course, now that Microsoft is on board the containerization bandwagon and a consortium of bickering stakeholders has formed, you know that container tech is here to stay. I know, FreeBSD has had the concept of 'jails' for almost two decades now. But thanks to Docker, container tech is now usable across the big three of Linux, Windows and Mac (if a bit hack-y in the case of the latter two), and today we're going to use its strengths in an exploration of the world of data science.

Now that I have your interest piqued, you're wondering where the two intersect. Well, if you're like me, you've looked at the footprint of RStudio and the nightmare maze of dependencies of IPython and "noped" right out of there. Thanks to containers, these problems are solved! With Docker, you can limit the amount of memory available to the container, and the way containers are constructed ensures that you never have to deal with troubleshooting broken dependencies on update ever again. So let's install Docker, which is as straightforward as using your package manager on Linux, or downloading Docker Toolbox if you're using a Mac or Windows PC and running the installer. The instructions that follow are tailored to a Linux installation, but are easily adapted to Windows or Mac as well. On those two platforms, you can even bypass these CLI commands and use Kitematic, or so I hear.

Now that you have Docker installed, let's look at some use cases for how to use it to facilitate our journey into data science. First, we are going to pull the Jupyter Notebook container so that you can work with that language-agnostic tool. # docker run --rm -it -p 8888:8888 -v "$(pwd):/notebooks" jupyter/notebook The -v "$(pwd):/notebooks" flag will mount the current directory to the /notebooks directory in the container, allowing you to save your work outside the container. This is important because you'll be using the container as a temporary working environment: the --rm flag ensures that the container is destroyed when it exits, so if you rerun that command to get back to work after turning off your computer, for instance, the container will be replaced with an entirely new one. The -v mount is what gives the container access to a folder on the local filesystem, ensuring that your work survives the casually disposable nature of development containers. Now go ahead and navigate to http://localhost:8888, and let's get to work. You did bring a dataset to analyze in a notebook, right? The actual nuts and bolts of data science are beyond the scope of this post, but for a quick intro to data and learning materials, I've found Kaggle to be a great resource. While we're at it, you should look at that other issue I mentioned previously: the application footprint.
While we're at it, let's look at the other issue I mentioned previously: the application footprint. Recently a friend of mine convinced me to use R, and I was enjoying working with the language until I got my hands on some real data and immediately felt the pain of an application not designed for endpoint use. I ran a regression and it locked up my computer for minutes! Fortunately, you can use a container to isolate R and feed it only limited resources, keeping the rest of the computer happy:

# docker run -m 1g -ti --rm r-base

This command drops you into an interactive R CLI that should keep even the leanest of modern computers humming along without a hiccup. Of course, if limiting the container to a gigabyte of RAM isn't enough, you can also use the -c (CPU shares) and --blkio-weight flags to restrict its access to CPU and disk I/O respectively.

So, a program installation and a command or two (or a couple of clicks in the Kitematic GUI), and we're off and running doing data science with none of the typical headaches.

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.

Why Android Will Rule Our Future

Sam Wood
11 Mar 2016
5 min read
We've been joking for years that in the future we'll be ruled by Android Overlords - we just didn't think it would be an operating system. In 2015, it's predicted that Android shipped over one billion devices - a share of the mobile market equating to almost 80%. In our 2015 Skill Up survey, we also discovered that Android developers were by far the highest paid of mobile application developers. Android dominates our present - so why is it likely to be vital to the world of tomorrow too?

IoT Will Run On Android (Probably)

Ask any group of developers what the Next Big Thing will be, and I bet you that more than one of them is going to say Internet of Things. In 2015, Google announced Android stepping into the ring of IoT operating systems when it showed us Brillo. Based on the Android kernel but 'scrubbed down by a Brillo pad', Brillo offers the possibility of a Google-backed cohesive platform for IoT - something potentially vital to a tech innovation increasingly marred by small companies attempting to blaze their own trails off in different directions. If IoT needs to be standardized, what better solution than Android, the operating system that's already the go-to choice for open-source mobile devices? We've already got smart fridges running on Android, smart cars running on Android, and tons of smartwatches running on Android - the rest of the Internet of Things is likely just around the corner.

Android is Colonizing Desktop

Microsoft is still the King of Desktop, and Windows isn't going anywhere any time soon. However, its attempts to enter the mobile space have been miserable at best - a 2.8% share of the mobile market in 2015. What has been more successful is the idea of hybridizing the desktop and the mobile, in particular with the successful line of Surface laptops-cum-tablets. But is the reverse likely to happen? Just as we're seeing Android move from being a mobile OS to being used for IoT, we're also seeing the rise of the idea of Android on the desktop. The Remix OS for PC operating system is created by former Google developers, and promises an "Android for PC" experience. Google's own experiments in desktop are currently all based on Chrome OS - which is growing fast in market share, particularly in the education and student sectors. I'm an enthusiastic Chromebook owner and user, and when it falls short of meeting the full requirements of a major desktop OS, I'll often turn to my Android device to bridge the gap. According to the Wall Street Journal, Google may be thinking along similar lines and is considering folding Chrome OS and Android into one product. Consider the general praise that Microsoft received for Windows 10 Mobile, and the successful unification of their platforms under a single OS. It's easy to imagine the integration of Google's mobile and desktop projects into a similar single user experience - and that this hybrid Android would make a serious impact in the marketplace.

For Apple, the Only Way Is Down

Apple has banked on the luxury end of the market for its mobile devices - and that might spell its doom. The pool of new buyers in the smartphone market is shrinking, and those late adopters are more likely to be price-conscious and enamored with the cheaper options available on Android. (After all, if your grandmother still complains about how much milk costs these days, is she really going to want to shell out $650 for an iPhone?)
If Apple wants a bigger share of the market, it's going to need to consider a 'budget option' - and as any brand consultant will tell you, nothing damages the image of luxury like the idea that there's a 'cheap version'. Apple is aware of this, and has historically protested that it's never happening. But in 2015, we saw the number of people switching from Android to iOS fall from 13% to 11%. More dramatically, the share of first-time smartphone buyers contributing to Apple's overall sales went from 20% to 11% over the same period. Those are worrying figures - especially when it also looks like more people switched from iOS to Android than switched from Android to iOS. Apple may be a little damned-if-it-does, damned-if-it-doesn't in the face of Android. You can get a lot for your money if you're willing to buy something which doesn't carry an Apple logo. It's easy to see Android's many producers creating high-powered luxury devices; it's harder to see Apple succeeding by doing the opposite. And are we really ever going to see something like the iFridge?

Android's Strength is its Ubiquity

Key to Android's future success is its ubiquity. In just six years, it's gone from being a new and experimental venture to over a billion downloads and being used across almost every kind of smart device out there. As an open source OS, the possibilities of Android are only going to get wider. When Androids rule our future, it may be on far more than just our phones. Dive into developing for Android all this week with our exclusive Android Week deals! Get 50% off selected titles, or build your own bundle of any five promoted Android books for only $50.

Is data science getting easier?

Erik Kappelman
10 Sep 2017
5 min read
The answer is yes, and no. This is a question that could easily have been asked of textile manufacturing in the 1890s, and it would have received a similar answer. By this I mean that textile manufacturing improved leaps and bounds throughout the industrial revolution, yet despite their productivity, textile mills were some of the most dangerous places to work. Before I explain my answer further, let's agree on a definition of data science. Wikipedia defines data science as "an interdisciplinary field about scientific methods, processes, and systems to extract knowledge or insights from data in various forms, either structured or unstructured." I see this as the process of acquiring, managing, and analyzing data.

Advances in data science

First, let's discuss why data science is definitely getting easier. Advances in technology and data collection have made data science easier. For one thing, data science as we know it wasn't even possible 40 years ago; advanced technology now lets us analyze, gather, and manage data in completely new ways. Scripting languages like R and Python have mostly replaced more convoluted languages like Haskell and Fortran in the realm of data analysis. Tools like Hadoop bring together a lot of different functionality to expedite every element of data science. Smartphones and wearable tech collect data more effectively and efficiently than older data collection methods, which gives data scientists more data of higher quality to work with. Perhaps most importantly, the utility of data science has become more and more recognized throughout the broader world. This helps provide data scientists the support they need to be truly effective. These are just some of the reasons why data science is getting easier.

Unintended consequences

While many of these tools make data science easier in some respects, there are also some unintended consequences that actually might make data science harder. Improved data collection has been a boon for the data science industry, but using the data that is streaming in is similar to drinking out of a firehose. Data scientists are continually required to come up with more complicated ways of taking data in, because the stream of data has become incredibly strong. While R and Python are definitely easier to learn than older alternatives, neither language is usually accused of being parsimonious: what a skilled Haskell programmer might be able to do in 100 lines might take a less skilled Python scripter 500 lines. Hadoop, and tools like it, simplify the data science process, but it seems like there are 50 new tools like Hadoop a day. While these tools are powerful and useful, sometimes data scientists spend more time learning about tools and less time doing data science, just to keep up with the industry's landscape. So, like many other fields related to computer science and programming, new tech is simultaneously making things easier and harder.

Golden age of data science

Let me rephrase the title question in an effort to provide even more illumination: is now the best time to be a data scientist, or to become one? The answer to this question is a resounding yes. While all of the drawbacks I brought up remain true, I believe that we are in a golden age of data science, for all of the reasons already mentioned, and more. We have more data than ever before and our data collection abilities are improving at an exponential rate.
The current situation has gone so far as to create the need for a whole new field of data analysis: Big Data. Data science is one of the most vast and quickly expanding human frontiers at present. Part of the reason for this is what data science can be used for: it can effectively answer questions that were previously unanswerable, which makes it an attractive field of study from a research standpoint.

One final note on whether or not data science is getting easier. If you are a person who actually creates new methods or techniques in data science, especially if you need to support these methods and techniques with formal mathematical and scientific reasoning, data science is definitely not getting easier for you. As I just mentioned, Big Data is a whole new field of data science created to deal with new problems caused by the efficacy of new data collection techniques. If you are a researcher or academic, all of this means a lot of work. Bootstrapped standard errors were used in data analysis before a formal proof of their legitimacy existed. Data science techniques might move at the speed of light, but formalizing and proving these techniques can literally take lifetimes. So if you are a researcher or academic, things will only get harder. If you are more of a practical data scientist, it may be slightly easier for now, but there's always something!

About the Author

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.

4 Ways You Can Use Machine Learning for Enterprise Security

Kartikey Pandey
29 Aug 2017
6 min read
Cyber threats continue to cost companies money and reputation, yet security seems to be undervalued, or maybe just misunderstood. With a series of large-scale cyberattacks and the menace of ransomware, earlier WannaCry and now Petya, continuing to affect millions globally, it's time you reimagined how your organization stays ahead of the game when it comes to software security. Fortunately, machine learning can help support a more robust, reliable and efficient security initiative. Here are just 4 ways machine learning can support your software security strategy.

Revamp your company's endpoint protection with machine learning

We have seen in the past how a single gap in endpoint protection resulted in serious data breaches. In May this year, Mexican fast food giant Chipotle learned the hard way when cybercriminals exploited the company's point-of-sale systems to steal credit card information. The Chipotle incident was a very real reminder for many retailers to patch critical endpoints on a regular basis. It is crucial to guard your company's endpoints, which are the virtual front doors to your organization's precious information. Your cybersecurity strategy must consider a holistic endpoint protection strategy to secure against a variety of threats, both known and unknown.

Traditional endpoint security approaches are proving to be ineffective and are costing businesses millions in terms of poor detection and wasted time. The changing landscape of the cybersecurity market brings with it its own set of unique challenges (Palo Alto Networks have highlighted some of these challenges in their whitepaper here). Sophisticated machine learning techniques can help fight back threats that aren't easy to defend against in traditional ways. One could achieve this by adopting any of three ML approaches: supervised machine learning, unsupervised machine learning, and reinforcement learning. Establishing the right machine learning approach entails a significant understanding of your expectations from the endpoint protection product. You might consider checking the speed, accuracy, and efficiency of a machine learning based endpoint protection solution with the vendor to make an informed choice. We recommend a supervised machine learning approach for endpoint protection, as it is a proven way of detecting malware and it delivers accurate results. The only catch is that these algorithms require relevant data in sufficient quantity to work on, and the training rounds need to be speedy and effective to guarantee efficient malware detection. Some of the popular ML-based endpoint protection options available in the market are Symantec Endpoint Protection 14, CrowdStrike, and Trend Micro's XGen.

Use machine learning techniques to predict security threats based on historical data

Predictive analytics is no longer restricted to data science. By adopting predictive analytics, you can take a proactive approach to cybersecurity too. Predictive analytics makes it possible not only to identify infections and threats after they have caused damage, but also to raise an alarm for future incidents or attacks. Predictive analytics is a crucial part of the learning process for the system. With sophisticated detection techniques the system can monitor network activities and report real-time data. One incredibly effective technique organizations are now beginning to use is a combination of advanced predictive analytics with a red team approach.
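To make the supervised approach described above concrete, here is a minimal sketch of a classifier trained on labelled historical telemetry. The feature set and data are synthetic, and the choice of scikit-learn is mine; the article does not prescribe any particular library, features, or product internals.

# Minimal sketch (illustrative only): a supervised classifier trained on
# labelled historical telemetry, in the spirit of ML-based endpoint protection.
# The features and data below are synthetic assumptions, not a real product.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Pretend feature vectors per executable or event: e.g. entropy, imported-API
# count, outbound connections, bytes written -- purely hypothetical features.
n = 2000
X = rng.normal(size=(n, 5))
# Synthetic ground truth: "malicious" examples shifted along two feature axes.
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), target_names=["benign", "malicious"]))

In practice the hard part is the labelled data and feature engineering the article alludes to, not the model call itself.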
Combining predictive analytics with a red team approach enables organizations to think like the enemy and model a broad range of threats. The process mines and captures large sets of data, which are then processed. The real value here is the ability to generate meaningful insights out of the large data set collected, and then letting the red team work on processing it and identifying potential threats. The organization then uses this to evaluate its capabilities, prepare for future threats and mitigate potential risks.

Harness the power of behavior analytics to detect security intrusions

Behavior analytics is a highly trending area today in the cybersecurity space. Traditional systems such as antiviruses are skilled at identifying attacks based on historical data and matching signatures. Behavior analytics, on the other hand, detects anomalies by making a judgement against what would be considered normal behavior. As such, behavior analytics in enterprises is proving very effective when it comes to detecting intrusions that otherwise evade firewalls or antivirus software. It complements existing security measures such as firewalls and antivirus rather than replacing them. Behavior analytics works well within private clouds and infrastructures and is able to detect threats within internal networks. One popular example is the Enterprise Immune System, by the vendor Darktrace, which uses machine learning to detect abnormal behavior in the system. It helps IT staff narrow down their perimeter of search and look out for specific security events through a visual console. What's really promising is that because Darktrace uses machine learning, the system is not just learning from events within internal systems, but from events happening globally as well.

Use machine learning to close down IoT vulnerabilities

If your company relies on the Internet of Things, trying to manage the large amounts of data and logs generated from millions of IoT devices manually could be overwhelming. Many a time, IoT devices are directly connected to the network, which means it is fairly easy for attackers and hackers to take advantage of inadequately protected networks. It could therefore be next to impossible to build a secure IoT system if you set out to identify and fix vulnerabilities manually. Machine learning can help you analyze and make sense of the millions of data logs generated by IoT-capable devices. Machine learning powered cybersecurity systems placed directly inside your environment can learn about security events as they happen. They can then monitor both incoming and outgoing IoT traffic on devices connected to the network and generate profiles for appropriate and inappropriate behavior inside your IoT ecosystem. This way the security system is able to react to even the slightest of irregularities and detect anomalies that were not seen before. Currently, only a handful of software products and tools use machine learning or artificial intelligence for IoT security, but we are already seeing development on this front by major security vendors such as Symantec. Surveys carried out on IoT continue to highlight security as a major barrier to adoption, and we are hopeful that machine learning will come to the rescue.

Cyber crimes are evolving at breakneck speed while businesses remain slow in adapting their IT security strategies to keep up.
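The behavior analytics and IoT monitoring ideas above boil down to learning a profile of normal activity and flagging deviations. Below is a minimal sketch of that idea using an unsupervised IsolationForest on synthetic traffic features; it illustrates the concept only and is not a description of Darktrace or any other vendor's product.

# Minimal sketch (illustrative only): flag unusual device behavior with an
# unsupervised anomaly detector. Features and data are synthetic assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-device, per-hour features: packets sent, distinct destination
# IPs, average payload size (all standardised). "Normal" traffic clusters together.
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=1)
detector.fit(normal_traffic)

# New observations: one routine reading and one that suddenly contacts many
# destinations with large payloads (e.g. a compromised device exfiltrating data).
new_observations = np.array([
    [0.1, -0.2, 0.3],   # looks like business as usual
    [4.0, 6.0, 5.5],    # far outside the learned profile
])

# predict() returns +1 for inliers and -1 for anomalies.
for features, label in zip(new_observations, detector.predict(new_observations)):
    status = "anomalous" if label == -1 else "normal"
    print(features, "->", status)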
Machine learning can help businesses make that leap to proactively address cyber threats and attacks by:

- Having an intelligent revamp of your company's endpoint protection
- Investing in machine learning techniques that predict threats based on historical data
- Harnessing the power of behavior analytics to detect intrusions
- Using machine learning to close down IoT vulnerabilities

And that's just the beginning. Have you used machine learning in your organization to enhance cybersecurity? Share with us your best practices and tips for using machine learning in cybersecurity in the comments below!

The best Angular yet - New Features in AngularJS 1.3

Sebastian Müller
16 Apr 2015
5 min read
AngularJS 1.3 was released in October 2014 and brings with it a lot of new and exciting features and performance improvements to the popular JavaScript framework. In this article, we will cover the new features and improvements that make AngularJS even more awesome.

Better Form Handling with ng-model-options

The ng-model-options directive added in version 1.3 allows you to define how model updates are done. You use this directive in combination with ng-model.

Debounce for Delayed Model Updates

In AngularJS 1.2, the model value was updated on every key press. With version 1.3 and ng-model-options, you can define a debounce time in milliseconds, which delays the model update until the user has not pressed a key for the configured time. This is mainly a performance feature to save $digest cycles that would normally occur after every key press when you don't use ng-model-options:

<input type="text" ng-model="my.username" ng-model-options="{ debounce: 500 }" />

updateOn - Update the Model on a Defined Event

An alternative to the debounce option inside the ng-model-options directive is updateOn. This updates the model value when the given event name is triggered, which is also useful for performance reasons:

<input type="text" ng-model="my.username" ng-model-options="{ updateOn: 'blur' }" />

In our example, we only update the model value when the user leaves the form field.

getterSetter - Use getter/setter Functions in ng-model

app.js:

angular.module('myApp', []).controller('MyController', ['$scope', function($scope) {
  var myEmail = 'example@example.com';
  $scope.user = {
    email: function email(newEmail) {
      if (angular.isDefined(newEmail)) {
        myEmail = newEmail;
      }
      return myEmail;
    }
  };
}]);

index.html:

<div ng-app="myApp" ng-controller="MyController">
  current user email: {{ user.email() }}
  <input type="email" ng-model="user.email" ng-model-options="{ getterSetter: true }" />
</div>

When you set getterSetter to true, Angular will treat the referenced model attribute as a getter and setter method. When the function is called with no parameter, it's a getter call and AngularJS expects you to return the currently assigned value. AngularJS calls the method with one parameter when the model needs to be updated.

New Module - ngMessages

The new ngMessages module provides features for cleaner error message handling in forms. It is not contained in the core framework and must be loaded via a separate script file.

index.html:

...
<body>
  ...
  <script src="angular.js"></script>
  <script src="angular-messages.js"></script>
  <script src="app.js"></script>
</body>

app.js:

// load the ngMessages module as a dependency
angular.module('myApp', ['ngMessages']);

The first version contains only two directives for error message handling:

<form name="myForm">
  <input type="text" name="myField" ng-model="myModel.field" ng-maxlength="5" required />
  <div ng-messages="myForm.myField.$error" ng-messages-multiple>
    <div ng-message="maxlength">
      Your field is too long!
    </div>
    <div ng-message="required">
      This field is required!
    </div>
  </div>
</form>

First, you need a container element that has an ng-messages directive with a reference to the $error object of the field you want to show error messages for. The $error object contains all validation errors that currently exist. Inside the container element, you can use the ng-message directive for every error type that can occur. Elements with this directive are automatically hidden when no validation error for the given type exists.
When you set the ng-messages-multiple attribute on the element on which you are using the ng-messages directive, all validation error messages are displayed at the same time.

Strict-DI Mode

AngularJS provides multiple ways to use the dependency injection mechanism in your application. One way is not safe to use when you minify your JavaScript files. Let's take a look at this example:

angular.module('myApp', []).controller('MyController', function($scope) {
  $scope.username = 'JohnDoe';
});

This example works perfectly in the browser as long as you do not minify the code with a JavaScript minifier like UglifyJS or Google Closure Compiler. The minified code of this controller might look like this:

angular.module('myApp', []).controller('MyController', function(a) {
  a.username = 'JohnDoe';
});

When you run this code in your browser, you will see that your application is broken. Angular cannot inject the $scope service anymore because the minifier changed the function parameter name. To prevent this type of bug, you have to use the array syntax:

angular.module('myApp', []).controller('MyController', ['$scope', function($scope) {
  $scope.username = 'JohnDoe';
}]);

When this code is minified by your tool of choice, AngularJS knows what to inject because the provided string '$scope' is not rewritten by the minifier:

angular.module('myApp', []).controller('MyController', ['$scope', function(a) {
  a.username = 'JohnDoe';
}]);

Using the new Strict-DI mode, developers are forced to use the array syntax. An exception is thrown when they don't use it. To enable Strict-DI mode, add the ng-strict-di directive to the element that carries the ng-app directive:

<html ng-app="myApp" ng-strict-di>
  <head>
  </head>
  <body>
    ...
  </body>
</html>

IE8 Browser Support

AngularJS 1.2 had built-in support for Internet Explorer 8 and up. Now that the global market share of IE8 has dropped, and it takes a lot of time and extra code to support the browser, the team decided to drop support for a browser that was released back in 2009.

Summary

This article shows only a few of the new features added in AngularJS 1.3. To learn about all of the new features, read the changelog file on GitHub or check out the AngularJS 1.3 migration guide.

About the Author

Sebastian Müller is a Senior Software Engineer at adesso AG in Dortmund, Germany. He spends his time building single-page applications and is interested in JavaScript architectures. He can be reached at @Sebamueller on Twitter and as SebastianM on GitHub.

Are distributed networks and decentralized systems the same?

Amarabha Banerjee
31 Jul 2018
3 min read
The emergence of Blockchain has paved the way for the implementation of non-centralized network architectures. The seeds of distributed network architecture were sown back in 1964 by Paul Baran in his paper "On Distributed Communications Networks". Since then, there have been many attempts at implementing this architecture in network systems. The most recent implementation has been aided by the advent of Blockchain technology in 2009, by the anonymous Satoshi Nakamoto.

Terminologies:

Network: A collection of interlinked nodes that exchange information.
Node: The most basic part of the network; for example, a user or computer.
Link: The connection between two nodes.
Server: A node that has connections to a relatively large number of other nodes.

The mother of all network architectures is the centralized network. Here, the primary decision-making control rests with one node, and the message is carried on by sub-nodes. Distributed networks are, in a way, a conglomeration of small centralized networks: they consist of multiple nodes which are themselves miniature versions of centralized networks. Decentralized networks consist of individual nodes, and every one of these nodes is capable of independent decision making; hence there is no central control in decentralized networks.

(Network architecture diagram. Source: Meshworld)

A common misconception is that distributed and decentralized systems are the same, just with different nomenclature and slightly different functionality. In a way this is true, but not completely. A distributed system still has one central control algorithm that takes the final call over the process and protocol to be followed.

As an example, let's consider distributed digital ledgers. Each of these ledgers is an independent network node. These ledgers get updated with individual transactions and other details, and this information is not passed on to the other nodes. This particular feature makes the system secure and decentralized: the other nodes are not aware of the information stored. This is how a decentralized network behaves.

The same system behaves a bit differently when the nodes communicate with each other (in the case of Ethereum, by the movement of "Ether" between nodes). Although each individual node's information stays secure, information about the state of the nodes is passed on, finally, to a central peer. The peer then decides which state to change to and what the optimum process is to change state, based on the votes of the individual nodes. The nodes then change to the new state, preserving information about the previous state. This makes the system dynamic, secure and distributed, because although the nodes get to vote based on their individual states, the final decision is taken by the centralized peer. This is a distributed system.

Hence we can clearly state that decentralized systems are a subset of distributed systems, more independent and minus any central controlling authority. This presents a familiar question: are the current blockchain based apps purely decentralized, or are they just distributed systems with a central control? Could this be the reason why we have not yet reached the ultimate cryptocurrency based alternative economy? Put differently, is the invisible central control hindering the evolution of blockchain based systems into a purely decentralized system and economy? Only time and more dedicated research, along with better practical implementation of decentralized applications, will answer these questions.
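As a toy illustration of the distinction drawn above (and nothing more: this is my own sketch, not a real consensus protocol), the snippet below contrasts a "distributed" round, where a central peer tallies node votes and imposes the winning state on everyone, with a "decentralized" round, where every node decides for itself.

# Toy illustration (assumption, not a real protocol): the same set of nodes
# coordinated in a "distributed with central control" style vs. a fully
# decentralized style, as contrasted in the article.
from collections import Counter

class Node:
    def __init__(self, name, preferred_state):
        self.name = name
        self.state = "genesis"
        self.preferred_state = preferred_state   # what this node votes for

    def vote(self):
        return self.preferred_state

    def apply(self, new_state):
        # keep the previous state around, as the article describes
        self.previous_state, self.state = self.state, new_state


def distributed_round(nodes):
    """A central peer tallies the votes and imposes the winning state on all nodes."""
    winner, _ = Counter(node.vote() for node in nodes).most_common(1)[0]
    for node in nodes:
        node.apply(winner)
    return winner


def decentralized_round(nodes):
    """No central peer: every node simply moves to its own preferred state."""
    for node in nodes:
        node.apply(node.preferred_state)
    return {node.name: node.state for node in nodes}


nodes = [Node("A", "state-1"), Node("B", "state-1"), Node("C", "state-2")]
print("distributed   ->", distributed_round(nodes))    # all nodes end up in state-1
print("decentralized ->", decentralized_round(nodes))  # each node keeps its own choice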