
Tech Guides


Why you should learn Scikit-learn

Guest Contributor
23 Nov 2017
8 min read
Today, machine learning in Python has become almost synonymous with scikit-learn. The "Big Bang" moment for scikit-learn was in 2007, when a gentleman named David Cournapeau decided to write this project as part of Google Summer of Code 2007. Let's take a moment to thank him. Matthieu Brucher later came on board and developed it further as part of his thesis. From that point on, sklearn never looked back. In 2010, the prestigious French research organization INRIA took ownership of the project, with great developers like Gael Varoquaux, Alexandre Gramfort, et al. starting work on it. Here's the oldest pull request I could find in sklearn's repository. The title says "we're getting there"! From there to today, where sklearn receives funding and support from Google, Telecom ParisTech and Columbia University among others, it surely must've been quite a journey.

Sklearn is an open source library which uses the BSD license. It is widely used in industry as well as in academia. It is built on NumPy, SciPy and Matplotlib, while also having wrappers around various popular libraries such as LIBSVM. Sklearn can be used "out of the box" after installation.

Can I trust scikit-learn?

Scikit-learn, or sklearn, is a very active open source project with brilliant maintainers. It is used worldwide by top companies such as Spotify, booking.com and the like. The fact that it is open source, where anyone can contribute, might make you question the integrity of the code, but from the little experience I have contributing to sklearn, let me tell you that only very high-quality code gets merged. All pull requests have to be affirmed by at least two core maintainers of the project. Every piece of code goes through multiple iterations. While this can be time-consuming for all the parties involved, such regulations ensure sklearn's compliance with industry standards at all times. You don't just build a library that's been awarded the "best open source library" overnight!

How can I use scikit-learn?

Sklearn can be used for a wide variety of use cases, ranging from image classification to music recommendation to classical data modeling.

Scikit-learn in various industries:

In the image classification domain, sklearn's implementation of K-Means along with PCA has been used for handwritten digit classification very successfully in the past. Sklearn has also been used for facial recognition using SVM with PCA. Image segmentation tasks, such as detecting Red Blood Corpuscles or segmenting the popular Lena image into sections, can also be done using sklearn.

A lot of us use Spotify or Netflix and are awestruck by their recommendations. Recommendation engines started off with the collaborative filtering algorithm. It basically says, "if people like me like something, I'll also most probably like that." To find users with similar tastes, a KNN algorithm can be used, which is available in sklearn. You can find a good demonstration of how it is used for music recommendation here.

Classical data modeling can be bolstered using sklearn. Most people generally start their Kaggle competitive journeys with the Titanic challenge. One of the better tutorials out there on starting out is by Dataquest, and it generally acts as a good introduction on how to use pandas and sklearn (a lethal combination!) for data science. It uses the robust Logistic Regression, Random Forest and Ensembling modules to guide the user. You will be able to experience the user-friendliness of sklearn first hand while completing this tutorial.
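As a minimal, hedged sketch of the K-Means plus PCA digit-classification workflow mentioned above (the bundled dataset, component count, and cluster count below are illustrative choices for this post, not taken from the original article), the idea can be tried in a few lines:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Load the 8x8 handwritten digit images bundled with scikit-learn
digits = load_digits()
X, y = digits.data, digits.target

# Reduce the 64 pixel features to a handful of principal components
X_reduced = PCA(n_components=10, random_state=0).fit_transform(X)

# Cluster the reduced data into 10 groups, roughly one per digit
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X_reduced)

# Rough sanity check: which true digit dominates each cluster?
for cluster_id in range(10):
    most_common = np.bincount(y[clusters == cluster_id]).argmax()
    print(f"cluster {cluster_id} mostly contains digit {most_common}")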
Sklearn has made machine learning literally a matter of importing a package. Sklearn also helps with anomaly detection for highly imbalanced datasets (99.9% to 0.1% in credit card fraud detection) through a host of tools like EllipticEnvelope and OneClassSVM. In this regard, the recently merged IsolationForest algorithm works especially well on higher-dimensional data and has very high performance. Other than that, sklearn has implementations of some widely used algorithms such as linear regression, decision trees, SVM and Multi-Layer Perceptrons (Neural Networks), to name a few. It has around 39 models in the "linear models" module itself! Happy scrolling here! Most of these algorithms can run very fast compared to raw Python code since they are implemented in Cython and use NumPy and SciPy (which in turn use C) for low-level computations.

How is sklearn different from TensorFlow/MLlib?

TensorFlow is a popular library for implementing deep learning algorithms (since it can utilize GPUs). While it can also be used to implement machine learning algorithms, the process can be arduous. To implement logistic regression in TensorFlow, you first have to "build" the logistic regression algorithm using a computational graph approach. Scikit-learn, on the other hand, provides the same algorithm out of the box, with the limitation that it has to be done in memory. Here's a good example of how LogisticRegression is done in TensorFlow.

Apache Spark's MLlib, on the other hand, consists of algorithms which can be used out of the box just like in sklearn; however, it is generally used when the ML task is to be performed in a distributed setting. If your dataset fits into RAM, sklearn would be a better choice for the task. If the dataset is massive, most people generally prototype on a small subset of the dataset locally using sklearn. Once prototyping and experimentation are done, they deploy in the cluster using MLlib.

Some sklearn must-knows

Machine learning problems broadly fall into three kinds: supervised learning, unsupervised learning and reinforcement learning (ahem, AlphaGo); scikit-learn covers the first two. Unsupervised learning happens when one doesn't have 'y' labels in their dataset. Dimensionality reduction and clustering are typical examples. Scikit-learn has implementations of variations of Principal Component Analysis, such as SparsePCA, KernelPCA, and IncrementalPCA, among others. Supervised learning covers problems such as spam detection, rent prediction, etc. In these problems, the 'y' tag for the dataset is present. Models such as linear regression, random forest, AdaBoost, etc. are implemented in sklearn.

from sklearn.linear_model import LogisticRegression
clf = LogisticRegression().fit(train_X, train_y)
preds = clf.predict(test_X)

Model evaluation and analysis

Cross-validation, grid search for parameter selection, and prediction evaluation can be done using the model selection and metrics modules, which implement functions such as cross_val_score and f1_score respectively, among others. They can be used as follows:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.metrics import f1_score

cross_val_avg = np.mean(cross_val_score(clf, train_X, train_y, scoring='f1'))
# tune your parameters for a better cross_val_score

# for model results on a certain classification problem
f_measure = f1_score(test_y, preds)

Model Saving

Simply pickle your model using pickle.dump and it is ready to be distributed and deployed!
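To tie the anomaly-detection and model-saving points above together, here is a small, hedged sketch; the synthetic data, contamination value, and file name are made up for illustration and are not from the original article:

import pickle
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))    # the ~99.9% "legitimate" class
anomalies = rng.uniform(low=-8.0, high=8.0, size=(5, 4))   # the ~0.1% "fraud" class
X = np.vstack([normal, anomalies])

# Fit the forest; contamination is the expected fraction of outliers
forest = IsolationForest(contamination=0.001, random_state=42).fit(X)
labels = forest.predict(X)   # +1 for inliers, -1 for anomalies
print("flagged as anomalous:", int(np.sum(labels == -1)))

# Persist the trained model so it can be distributed and deployed
with open("isolation_forest.pkl", "wb") as f:
    pickle.dump(forest, f)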
Hence a whole machine learning pipeline can be built easily using sklearn.

Finishing Remarks

There are many good books out there on machine learning, but in the context of Python, Sebastian Raschka (one of the core developers on sklearn) recently released his book titled "Python Machine Learning", and it's in great demand. Another great blog you could follow is Erik Bernhardsson's blog. Along with writing about machine learning, he also discusses software development and other interesting ideas. Do subscribe to the scikit-learn mailing list as well. There are some very interesting questions posted there and a lot to learn from them. The machine learning subreddit also collates information from a lot of different sources and is thus a good place to find useful information.

Scikit-learn has revolutionized the machine learning world by making it accessible to everyone. Machine learning is not like black magic anymore. If you use scikit-learn and like it, do consider contributing to sklearn. There is a huge backlog of open issues and PRs on the sklearn GitHub page. Scikit-learn needs contributors! Have a look at this page to start contributing. Contributing to a library is easily the best way to learn it!

About the Author

Devashish Deshpande started his foray into data science and machine learning in 2015 with an online course, when the question of how machines can learn started intriguing him. He pursued more online courses as well as courses in data science during his undergrad. In order to gain practical knowledge, he started contributing to open source projects, beginning with a small pull request to scikit-learn. He then did a summer project with Gensim and delivered workshops and talks at PyCon France and PyCon India in 2016. Currently, Devashish works in the data science team at belong.co, India. Here's the link to his GitHub profile.

Why does Oculus CTO John Carmack prefer 2D VR interfaces over 3D Virtual Reality interfaces?

Sugandha Lahoti
23 May 2019
6 min read
Creating immersive 3D experiences in a virtual reality setup is the new norm. Tech companies around the world are attempting to perfect these 3D experiences to make them as natural, immersive, and realistic as possible. However, a certain portion of Virtual Reality creators still believe that creating a new interaction paradigm in 3D is actually worse than 2D. One of them is John Carmack, CTO of Oculus VR, the maker of the popular Virtual Reality headgear. He has penned a Facebook post highlighting why he thinks 3D interfaces are usually worse than 2D interfaces. Carmack details a number of points to justify his assertion and says that the majority of browsing, configuring, and selecting interactions benefit from being designed in 2D. He wrote an internal post in 2017 clarifying his views. Recently, he was reviewing a VR development job description before an interview last week, where he saw that one of the responsibilities for the open Product Management Leader position was: "Create a new interaction paradigm that is 3D instead of 2D based", which prompted him to write this post.

Splitting information across multiple depths is harmful

Carmack says splitting information across multiple depths makes our eyes re-verge and re-focus. He explains this point with an analogy: "If you have a convenient poster across the room in your visual field above your monitor – switch back and forth between reading your monitor and the poster, then contrast with just switching back and forth with the icon bar at the bottom of your monitor." Static HMD optics should have their focus point at the UI distance. If we want to be able to scan information as quickly and comfortably as possible, says Carmack, it should all be the same distance from the viewer and it should not be too close.

As Carmack observes, you don't see in 3D. You see two 2D planes that your brain extracts a certain amount of depth information from. A Hacker News user points out, "As a UI goes, you can't actually freely use that third dimension, because as soon as one element obscures another, either the front element is too opaque to see through, in which case the second might as well not be there, or the opacity is not 100% in which case it just gets confusing fast. So you're not removing a dimension, you're acknowledging it doesn't exist. To truly "see in 3D" would require a fourth-dimension perspective. A 4D person could use a 3D display arbitrarily, because they can freely see the entire 3D space, including seeing things inside opaque spheres, etc, just like we can look at a 2D display and see the inside of circles and boxes freely."

However, another user critiqued Carmack's claim that splitting information across multiple depths is harmful. He says, "Frequently jumping between dissimilar depths is harmful. Less frequent, sliding, and similar depths, can be wonderful, allowing the much denser and easily accessible presentation of information." A general takeaway is that "most of the current commentary about "VR", is coming from a community focused on a particular niche, current VR gaming. One with particular and severe, constraints and priorities that don't characterize the entirety of a much larger design space."

Visualize the 3D environment as a pair of 2D projections

Carmack says that unless we move significantly relative to the environment, they stay essentially the same 2D projections.
He further adds, "even on designing a truly 3D UI, developers would have to consider this notion to keep the 3D elements from overlapping each other when projected onto the view." It can also be difficult for 2D UX/product designers to transfer their thinking over to designing immersive products. https://twitter.com/SuzanneBorders/status/1130231236243337216

However, building in 3D is important for things which are naturally intuitive in 3D. This, as Carmack mentions, is "true 3D" content, for which you get a 3D interface whether you like it or not. A user on Hacker News points out, "Sometimes things which we struggle to decode in 2D are just intuitive in 3D like knots or the run of wires or pipes."

Use 3D elements for efficient UI design

Carmack says that 3D may have a small place in efficient UI design as a "treatment" for UI elements. He gives examples such as using slightly protruding 3D buttons sticking out of the UI surface in places where we would otherwise use color changes or faux-3D effects like bevels or drop shadows. He says, "the visual scanning and interaction is still fundamentally 2D, but it is another channel of information that your eye will naturally pick up on."

This doesn't mean that VR interfaces should just be "floating screens". The core advantage of VR from a UI standpoint is the ability to use the entire field of view, and allow it to be extended by "glancing" to the sides. Content selection, Carmack says, should go off the sides of the screens and have a size/count that leaves half of a tile visible at each edge when looking straight ahead. Explaining his statement, he adds, "actually interacting with UI elements at the angles well away from the center is not good for the user, because if they haven't rotated their entire body, it is a stress on their neck to focus there long, so the idea is to glance, then scroll." He also advises putting less frequently used UI elements off to the sides or back. A Twitter user agreed with Carmack's floating screens comment. https://twitter.com/SuzanneBorders/status/1130233108073144320

Most users agreed with Carmack's assertion, sharing their own experiences. A comment on Reddit reads, "He makes a lot of good points. There are plenty examples of 'real life' instances where the existence and perception of depth isn't needed to make useful choices or to interact with something, and that in fact, as he points out, it's actually a nuisance to have to focus on multiple planes, back and forth', to get something done."
https://twitter.com/feiss/status/1130524764261552128
https://twitter.com/SculptrVR/status/1130542662681939968
https://twitter.com/jeffchangart/status/1130568914247856128

However, some users point out that this can also be because the tools for doing full 3D designs are nowhere near as mature as the tools for doing 2D designs. https://twitter.com/haltor/status/1130600718287683584 A Twitter user aptly observes: "3D is not inherently superior to 2D." https://twitter.com/Clarice07825084/status/1130726318763462656

Read the full text of John's article on Facebook. More insights on this Twitter thread.

Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset
Oculus Rift S: A new VR with inside-out tracking, improved resolution and more!
What's new in VR Haptics?

Keep your serverless AWS applications secure [Tutorial]

Savia Lobo
18 Jun 2018
11 min read
Handling security is an extensive and complex topic. If not done right, you open up your app to dangerous hacks and breaches. Even if everything is done right, it may still be hacked. So it's important that we understand common security mechanisms to avoid exposing websites to vulnerabilities, and follow the recommended practices and methodologies that have been extensively tested and proven to be robust. In this tutorial, we will learn how to secure serverless applications using AWS. Additionally, we will learn about the security basics and then move on to handling authorization and authentication using AWS. This article is an excerpt taken from the book 'Building Serverless Web Applications' written by Diego Zanon.

Security basics in AWS

One of the mantras of security experts is this: don't roll your own. It means you should never use, in a production system, any kind of crypto algorithm or security model that you developed by yourself. Always use solutions that have been highly used, tested, and recommended by trusted sources. Even experienced people may commit errors and expose a solution to attacks, especially in the cryptography field, which requires advanced math. However, when a proposed solution is analyzed and tested by a great number of specialists, errors are much less frequent.

In the security world, there is a term called security through obscurity. It is defined as a security model where the implementation mechanism is not publicly known, so there is a belief that it is secure because no one has prior information about the flaws it has. It can indeed be secure, but if used as the only form of protection, it is considered a poor security practice. If a hacker is persistent enough, he or she can discover flaws even without knowing the internal code. In this case, again, it's better to use a highly tested algorithm than your own. Security through obscurity can be compared to someone trying to protect their own money by burying it in the backyard, when the common security mechanism would be to put the money in a bank. The money can be safe while buried, but it will be protected only until someone finds out about its existence and starts to look for it. Due to this reason, when dealing with security, we usually prefer to use open source algorithms and tools. Everyone can access and discover flaws in them, but there are also a great number of specialists involved in finding the vulnerabilities and fixing them. In this section, we will discuss other security concepts that everyone must know when building a system.

Information security

When dealing with security, there are some attributes that need to be considered. The most important ones are the following:

Authentication: Confirm the user's identity by validating that the user is who they claim to be
Authorization: Decide whether the user is allowed to execute the requested action
Confidentiality: Ensure that data can't be understood by third parties
Integrity: Protect the message against undetectable modifications
Non-repudiation: Ensure that someone can't deny the authenticity of their own message
Availability: Keep the system available when needed

These terms will be better explained in the next sections.

Authentication

Authentication is the ability to confirm the user's identity. It can be implemented by a login form where you request the user to type their username and password. If the hashed password matches what was previously saved in the database, you have enough proof that the user is who they claim to be.
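The hashed-password check just described can be sketched with Python's standard library. This is a minimal illustration only; the salted PBKDF2 scheme, iteration count, and function names are assumptions made for the example, not code from the book:

import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    # Derive a salted hash; store both the salt and the digest in the database
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=100_000):
    # Recompute the hash and compare in constant time
    _, candidate = hash_password(password, salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                         # False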
This model is good enough, at least for typical applications. You confirm the identity by requesting the user to provide what they know. Another kind of authentication is to request the user to provide what they have. It can be a physical device (like a dongle) or access to an e-mail account or phone number. However, you can't ask the user to type their credentials for every request. As long as you authenticate them in the first request, you must create a security token that will be used in the subsequent requests. This token will be saved on the client side as a cookie and will be automatically sent to the server in all requests. On AWS, this token can be created using the Cognito service. How this is done will be described later in this chapter.

Authorization

When a request is received in the backend, we need to check if the user is allowed to execute the requested action. For example, if the user wants to check out the order with ID 123, we need to make a query to the database to identify who the owner of the order is and compare whether it is the same user. Another scenario is when we have multiple roles in an application and we need to restrict data access. For example, a system developed to manage school grades may be implemented with two roles, such as student and teacher. The teacher will access the system to insert or update grades, while the students will access the system to read those grades. In this case, the authorization system must restrict the insert and update actions to users that are part of the teachers group, while users in the students group must be restricted to reading their own grades. Most of the time, we handle authorization in our own backend, but some serverless services don't require a backend and are responsible for properly checking authorization themselves. For example, in the next chapter, we are going to see how serverless notifications are implemented on AWS. When we use AWS IoT, if we want a private channel of communication between two users, we must give them access to one specific resource known by both and restrict access for other users to avoid the disclosure of private messages.

Confidentiality

Developing a website that uses HTTPS for all requests is the main driver for achieving confidentiality in the communication between the users and your site. As the data is encrypted, it's very hard for malicious users to decrypt and understand its contents. Although there are some attacks that can intercept the communication and forge certificates (man-in-the-middle), those require the malicious user to have access to the machine or network of the victim user. From our side, adding HTTPS support is the best thing that we can do to minimize the chance of attacks.

Integrity

Integrity is related to confidentiality. While confidentiality relies on encrypting a message to prevent other users from accessing its contents, integrity deals with protecting messages against modifications by signing them with digital signatures (TLS certificates). Integrity is an important concept when designing low-level network systems, but all that matters for us is adding HTTPS support.

Non-repudiation

Non-repudiation is a term that is often confused with authentication, since both of them have the objective of proving who has sent a message. However, the main difference is that authentication is more interested in a technical view, while the non-repudiation concept is interested in legal terms, liability, and auditing.
When you have a login form with username and password input, you can authenticate the user who correctly knows the combination, but you can't be 100% certain, since the credentials can be correctly guessed or stolen by a third party. On the other hand, if you have a stricter access mechanism, such as a biometric entry, you have more credibility. However, this is not perfect either. It's just a better non-repudiation mechanism.

Availability

Availability is also a concept of interest in the information security field because it is not restricted to how you provision your hardware to meet your users' needs: availability can also suffer interruptions caused by malicious users. There are attacks, such as Distributed Denial of Service (DDoS), that aim to create bottlenecks to disrupt site availability. In a DDoS attack, the targeted website is flooded with superfluous requests with the objective of overloading the system. This is usually accomplished by a controlled network of infected machines called a botnet. On AWS, all services run under the AWS Shield service, which was designed to protect against DDoS attacks with no additional charge. However, if you run a very large and important service, you may be a direct target of advanced and large DDoS attacks. In this case, there is a premium tier offered in the AWS Shield service to ensure your website's availability even in worst-case scenarios. This requires an investment of US$ 3,000 per month, and with this, you will have 24x7 support from a dedicated team and access to other tools for mitigation and analysis of DDoS attacks.

Security on AWS

We use AWS credentials, roles, and policies, but security on AWS is much more than handling authentication and authorization of users. This is what we will discuss in this section.

Shared responsibility model

Security on AWS is based on a shared responsibility model. While Amazon is responsible for keeping the infrastructure safe, the customers are responsible for patching security updates to software and protecting their own user accounts.

AWS's responsibilities include the following:
Physical security of the hardware and facilities
Infrastructure of networks, virtualization, and storage
Availability of services respecting Service Level Agreements (SLAs)
Security of managed services such as Lambda, RDS, DynamoDB, and others

A customer's responsibilities are as follows:
Applying security patches to the operating system on EC2 machines
Security of installed applications
Avoiding disclosure of user credentials
Correct configuration of access policies and roles
Firewall configurations
Network traffic protection (encrypting data to avoid disclosure of sensitive information)
Encryption of server-side data and databases

In the serverless model, we rely only on managed services. In this case, we don't need to worry about applying security patches to the operating system or runtime, but we do need to worry about third-party libraries that our application depends on to execute. Also, of course, we need to worry about all the things that we need to configure (firewalls, user policies, and so on), the network traffic (supporting HTTPS) and how data is manipulated by the application.

The Trusted Advisor tool

AWS offers a tool named Trusted Advisor, which can be accessed through https://console.aws.amazon.com/trustedadvisor. It was created to offer help on how you can optimize costs or improve performance, but it also helps identify security breaches and common misconfigurations.
It checks for unrestricted access to specific ports on your EC2 machines, whether Multi-Factor Authentication is enabled on the root account, and whether IAM users were created in your account. You need to pay for AWS premium support to unlock other features, such as cost optimization advice. However, security checks are free.

Pen testing

A penetration test (or pen test) is a good practice that all big websites must perform periodically. Even if you have a good team of security experts, the usual recommendation is to hire a specialized third-party company to perform pen tests and find vulnerabilities. This is because they will most likely have tools and procedures that your team may not have tried yet. However, the caveat here is that you can't execute these tests without contacting AWS first. To respect their user terms, you can only try to find breaches on your own account and assets, in scheduled time frames (so they can disable their intrusion detection systems for your assets), and only on restricted services, such as EC2 instances and RDS.

AWS CloudTrail

AWS CloudTrail is a service that was designed to record all AWS API calls that are executed on your account. The output of this service is a set of log files that register the API caller, the date/time, the source IP address of the caller, the request parameters, and the response elements that were returned. This kind of service is pretty important for security analysis in case there are data breaches, and for systems that need an auditing mechanism for compliance standards. (A short, illustrative sketch of reading these log files appears at the end of this article.)

MFA

Multi-Factor Authentication (MFA) is an extra security layer that everyone must add to their AWS root account to protect against unauthorized access. Besides knowing the username and password, a malicious user would also need physical access to your smartphone or security token, which greatly reduces the risk. On AWS, you can use MFA through the following means:

Virtual devices: An application installed on Android, iPhone, or Windows phones
Physical devices: Six-digit tokens or OTP cards
SMS: Messages received on your phone

We have discussed the basic security concepts and how to apply them to a serverless project. If you've enjoyed reading this article, do check out 'Building Serverless Web Applications' to implement signup, sign-in, and log-out features using Amazon Cognito.

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Analyzing CloudTrail Logs using Amazon Elasticsearch
How to create your own AWS CloudTrail
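As the sketch promised above: CloudTrail delivers gzipped JSON log files to S3, each holding a "Records" list of API call events with documented fields such as eventTime, eventName, and sourceIPAddress. The file name below is hypothetical, and this is only an illustrative reader, not code from the book:

import gzip
import json

# Hypothetical local copy of a CloudTrail log file downloaded from S3
LOG_FILE = "123456789012_CloudTrail_us-east-1_20180618T0000Z_example.json.gz"

with gzip.open(LOG_FILE, "rt") as f:
    trail = json.load(f)

# Print a few fields from each recorded API call
for record in trail.get("Records", []):
    print(record.get("eventTime"),
          record.get("eventName"),
          record.get("sourceIPAddress"),
          record.get("userIdentity", {}).get("type"))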

Microsoft’s GitHub acquisition is good for the open source community

Pavan Ramchandani
19 Jul 2018
6 min read
Microsoft buying GitHub is "good news" for open source. - Jim Zemlin, the Executive Director of the Linux Foundation

Unless you have been living under a rock, you will have heard about the software giant Microsoft's acquisition of the open source platform giant GitHub for $7.5 billion. Since the announcement a few weeks ago, discussions in the open source community have heated up regarding the future of open source. This acquisition has seen a surge in the number of developers migrating to rival version control systems like BitBucket, GitLab, etc., but mostly GitLab. This will affect GitHub's user base and in turn the contribution to the platform, which is the primary source of funding to keep any open source service alive. This goes to show how difficult it is to create a great product for developers and still make money. Microsoft has created great products for enterprises and has been making money in the process. As such, this acquisition is one worth waiting and watching as it transforms both entities.

The common fear among developers is that Microsoft will exploit the limitations inherent to an open source platform and will inject its subscription model into GitHub to make it profitable. The insane price that Microsoft paid to acquire GitHub, after all, needs to be recovered. However, it may not be as straightforward. Many believe it's not the platform's monetizing potential, but its access to the user base that Microsoft is most interested in. A lot of them also believe Microsoft has the potential to resurrect GitHub and revolutionize the open source movement. Let us explore some reasons why this acquisition is fruitful for the developer community.

GitHub's losses have been significant

GitHub had reportedly been suffering losses and is said to have lost $66 million in 2016. The software industry is a fierce eat-or-get-eaten jungle. Losing out in the market to giant companies or other emerging startups is a common fear. There is always an alternative tool for every developer need, as the software market relentlessly works to make things cheaper while offering variety. Startups are reaching the inflection point sooner in their operation cycle. The GitHub community is the platform's greatest strength and the reason why the platform has remained operational through difficult times; but there were regular internal frictions at the management level in GitHub. The strife became apparent when reports came out of developers feeling ignored by the GitHub management. The founder, Chris Wanstrath, had to come out and address reports of a toxic environment last year. With Microsoft buying GitHub, there will be a massive cash flow for all the projects in development, and the management will be streamlined with Nat Friedman announced as the head of GitHub operations. Nat's successful history of leading open source projects such as Xamarin gives many hope that this time around, Microsoft really does mean well for GitHub with its acquisition.

The Azure Cloud advantage for GitHub

One of the key challenges that GitHub has faced lately is scaling its infrastructure smoothly without adversely impacting its users. Outages have become a common occurrence that most GitHub users are familiar with. Microsoft has a strong suite of cloud platforms and services in the form of Azure. GitHub users can expect to receive the native experience of using the Azure stack as a part of the integration with GitHub.
This integration will further enhance collaboration on the GitHub platform for developers and advance the GitHub ecosystem.

Microsoft can integrate GitHub into its enterprise offerings

GitHub, in the last few years, has been attempting to extend its reach in the enterprise market with various offerings for business. However, this offering was limited to creating private repositories for a fee. Microsoft, on the other end, has been a leader when it comes to providing enterprise tools and venturing into the subscription market. This acquisition will excite the brand-loyal enterprises using Microsoft suites. Imagine the new clientele that GitHub now has access to thanks to Microsoft. Just as Microsoft has bundled Skype with its Office 365 suite, it is easy to postulate similar offerings being designed for enterprises with GitHub at the center of such plans. Just like Excel, GitHub could end up as the default version control tool that enterprises use to build new projects and prototype ideas, open source or otherwise. In exchange, GitHub could be Microsoft's ace up its sleeve in strengthening its open source community ties, and help put Microsoft in a position to inject innovative strategies into the community.

Microsoft's push to open source projects

Microsoft has plunged headfirst into open sourcing projects in recent years. The push is not only for their experimental projects, but also for their successful enterprise tools like .NET Core and Visual Studio Code. Historically, Microsoft has taken a lot of heat from the open source community for opposing the Linux model. But the recent paradigm shift in Microsoft, with a change in its leadership and vision, is focused on working with the community and doing business with enterprises. At the end of last year, Microsoft joined the Linux Foundation and went platinum with the Open Source Initiative. TypeScript is a fully open source language and sees regular updates from Microsoft. It is now an established language for web development and is managed better than some other open source languages. Also, TypeScript is fully hosted on GitHub for developers to improve on it. This indicates that Microsoft has been able to reach out to the community and has the potential to operate open source projects without necessarily commercializing them.

Conclusion

Microsoft buying out GitHub is not necessarily bad. The tech giant has been one of the biggest contributors to GitHub with its projects like Visual Studio Code, TypeScript, etc. While the panic is understandable, considering Microsoft's past strategies to counter the open source model in its early days, the recent activities in Microsoft, especially under the leadership of Satya Nadella, suggest a paradigm shift in Microsoft's approach to serving the IT market. You can hate Microsoft for being a profit-driven company, but there is no denying that Microsoft was one of the pioneers of the modern-day software industry and, more importantly, the bitter pill that GitHub needs to get out of the ever-growing loss-making sinkhole. Microsoft understands software better and is capable of doing open source the right way and with more efficiency. This acquisition was inevitable to sustain the platform and to scale it to serve the increasing demands of the developer market. What Microsoft must bear in mind while revamping GitHub policies and the business model is that its greatest challenge and its greatest asset is the paradox of this alliance itself.
As GitHub gets more profit conscious, Microsoft must get more community centric to ensure an equilibrium is reached where developers can thrive on a platform that provides a great development and community experience.

The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab
GitHub for Unity 1.0 is here with Git LFS and file locking support
Microsoft releases Open Service Broker for Azure (OSBA) version 1.0

Unity plugins for augmented reality application development

Sugandha Lahoti
10 Apr 2018
4 min read
Augmented Reality is the powerhouse for the next set of magic tricks headed to our mobile devices. Augmented Reality combines real-world objects with digital information. Heard about Pokemon Go? An ARKit-powered version of the game was showcased at WWDC 2017, built on Apple's augmented reality framework. Following the widespread success of Pokemon Go, a large number of companies are eager to invest in AR technology.

Unity is one of the dominant players in the industry when it comes to creating desktop, console and mobile games. Augmented Reality has been exciting game developers for quite some time now, and following this excitement, Unity has released prominent tools for developers to experiment with AR apps. Bear in mind that Unity is not designed exclusively for Augmented Reality, so developers can access additional functionality by importing extensions. These extensions also provide pre-designed game components such as characters or game props. Let us briefly look at three prominent tools or extensions for Augmented Reality development provided by Unity:

Unity ARKit plugin

The Unity ARKit plugin exposes the functionality of the ARKit SDK within Unity projects. As of September 2017, this plugin has also been extended for iOS apps as the iOS ARKit plugin. The ARKit plugin provides Unity developers with access to features such as motion tracking, vertical and horizontal plane finding, live video rendering, hit-testing, raw point cloud data, ambient light estimation, and more for their AR projects. This plugin also provides easy integration of AR features into existing Unity projects. A new tool, the Unity ARKit Remote, speeds up iteration by allowing developers to make real-time changes to the scene and debug scripts in the Unity Editor. The latest update to ARKit is version 1.5, which provides developers with more tools to power more immersive AR experiences.

Google ARCore

Google ARCore for Unity provides mobile AR experiences for Android, without the need for additional hardware. The latest major version, ARCore 1.0, enables AR applications to track a phone's motion in the real world, detect planes in the environment, and understand lighting in the camera scene. ARCore 1.0 introduces oriented feature points, which help in the placement of anchors on textured surfaces. These feature points enhance the environmental understanding of the scene. So ARCore is not just limited to horizontal and vertical planes like ARKit, but can create AR apps on any surface. ARCore 1.0 is supported by the Android Emulator in Android Studio 3.1 Beta and is available for use on multiple supported Android devices.

Vuforia integration with Unity

Vuforia allows developers to build cross-platform AR apps directly from the Unity editor. It provides Augmented Reality support for Android, iOS, and UWP devices through a single API. It attaches digital content to different types of objects and environments using Model Targets and Ground Plane, across a broad range of devices and operating systems. Ground Plane attaches digital content to horizontal surfaces. Model Targets provide object recognition capabilities. Other targets include Image (to put AR content on flat objects) and Cloud (to manage large collections of Image Targets from your own CMS). Vuforia also includes a Device Tracking capability, which provides an inside-out device tracker for rotational head and hand tracking. It also provides APIs to create immersive experiences that transition between AR and VR.
You can browse through various AR projects from the Unity community to help you get started with your next big AR idea, as well as to choose the toolkit best suited for you.

Leap Motion open sources its $100 augmented reality headset, North Star
Unity and Unreal comparison
Types of Augmented Reality targets
Create Your First Augmented Reality Experience: The Tools and Terms You Need to Understand

My friend, the robot: Artificial Intelligence needs Emotional Intelligence

Aaron Lazar
21 Feb 2018
8 min read
Tommy's a brilliant young man who loves programming. He's so occupied with computers that he hardly has any time for friends. Tommy programs a very intelligent robot called Polly, using Artificial Intelligence, so that he has someone to talk to. One day, Tommy gets hurt real bad about something and needs someone to talk to. He rushes home to talk to Polly and pours out his emotions to her. To his disappointment, Polly starts giving him advice like she does for any other thing. She doesn't understand that he needs someone to "feel" what he's feeling rather than rant away about what he should or shouldn't be doing. He naturally feels disconnected from Polly.

My Friend doesn't get me

Have you ever wondered what it would be like to have a robot as a friend? I'm thinking something along the lines of Siri. Siri's pretty good at holding conversations and is quick-witted too. But Siri can't understand your feelings or emotions, neither can "she" feel anything herself. Are we missing that "personality" from the artificial beings that we're creating? Even if you talk about chatbots, although we gain through convenience, we lose the emotional aspect, especially at a time when expressive communication is the most important.

Do we really need it?

I remember watching The Terminator, where Arnie asks John, "Why do you cry?" John finds it difficult to explain to him why humans cry. The fact is, though, that the machine actually understood there was something wrong with the human, thanks to the visual cues associated with crying. We've also seen some instances of robots or AI analysing sentiment through text processing as well. But how accurate is this? How would a machine know when a human is actually using sarcasm? What if John was faking it and could cry at the drop of a hat, or he just happened to be chopping onions? That's food for thought now.

On the contrary, you might wonder, do we really want our machines to start analysing our emotions? What if they take advantage of our emotional state? Well, that's a bit of a far-fetched thought, and what we need to understand is that it's necessary for robots to gauge a bit of our emotions to enhance the experience of interacting with them. There are several wonderful applications for such a technology. For instance, marketing organisations could use applications that detect users' facial expressions when they look at a new commercial to gauge their "interest". It could also be used by law enforcement as a replacement for the polygraph. Another interesting use case would be to help autism-affected individuals understand the emotions of others better. The combination of AI and EI could find a tonne of applications, right from cars that can sense if the driver is tired or sleepy and prevent an accident by pulling over, to a fridge that can detect if you're stressed and lock itself to prevent you from binge eating!

Recent Developments in Emotional Intelligence

There have been several developments over the past few years in building systems that understand emotions. Pepper, a Japanese robot, for instance, can tell feelings such as joy, sadness and anger, and respond by playing you a song. A couple of years ago, Microsoft released a tool, the Emotion API, that could break down a person's emotions based only on their picture. Physiologists, neurologists and psychologists have collaborated with engineers to find measurable indicators of human emotion that can be taught to computers to look out for.
There are projects that have attempted to decode facial expressions, the pitch of our voices, biometric data such as heart rate, and even our body language and muscle movements. Bronwyn van der Merwe, General Manager of Fjord in the Asia Pacific region, revealed that big companies like Amazon, Google and Microsoft are hiring comedians and script writers in order to harness the human-like aspect of AI by inducing personality into their technologies. Jerry, Ellen, Chris, Russell...are you all listening?

How it works

Almost 40% of our emotions are conveyed through tone of voice, and the rest is read through facial expressions and the gestures we make. An enormous amount of data is collected from media content and other sources and is used as training data for algorithms to learn human facial expressions and speech. One type of learning used is Active Learning, or human-assisted machine learning. This is a kind of supervised learning where the learning algorithm is able to interactively query the user to obtain new data points or an output. Situations might exist where unlabeled data is plentiful but manually labeling the data is expensive. In such a scenario, learning algorithms can query the user for labels. Since the algorithm chooses the examples, the number of examples needed to learn a concept turns out to be lower than what is required for usual supervised learning. Another approach is to use Transfer Learning, a method that focuses on storing the knowledge gained while solving one problem and then applying it to a different but related problem. For example, knowledge gained while learning to recognize fruits could apply when trying to recognize vegetables. This works by analysing a video for facial expressions and then transferring that learning to label the speech modality.

What's under the hood of these machines?

Powerful robots that are capable of understanding emotions would most certainly be running neural nets under the hood. Complementing the power of these neural nets are beefy CPUs and GPUs, the likes of the Nvidia Titan X GPU and the Intel Nervana chip. Last year at NIPS, amongst controversial body shots and loads of humour-filled interactions, Kory Mathewson and Piotr Mirowski entertained audiences with A.L.Ex and Pyggy, two AI robots that have played alongside humans in over 30 shows. These robots introduce audiences to the "comedy of speech recognition errors" by blabbering away to each other as well as to humans. Built around a Recurrent Neural Network that's trained on dialogue from thousands of films, A.L.Ex communicates with human performers, audience participants, and spectators through speech recognition, voice synthesis, and video projection. A.L.Ex is written in Torch and Lua code, has a vocabulary of 50,000 words extracted from 102,916 movies, and is built on an RNN with a Long Short-Term Memory architecture and 512-dimensional layers.

The unconquered challenges today

The way I see it, there are broadly three challenge areas that AI-powered robots face today:

Rationality and emotions: AI robots need to be fed with initial logic by humans, failing which, they cannot learn on their own. They may never have the level of rationality or the breadth of emotions to take decisions the way humans do.

Intuition, strategic thinking and emotions: Machines are incapable of thinking into the future and taking decisions the way humans can.
For example, not very far into the future, we might have an AI-powered dating application that measures a subscriber's interest level while chatting with someone. It might just rate the interest level lower if the person is in a bad mood due to some other reason. It wouldn't consider the reason behind the emotion and whether it was actually linked to the ongoing conversation.

Spontaneity, empathy and emotions: It may be years before robots are capable of coming up with a plan B the way humans do. Having a contingency plan and implementing it in an emotional crisis is something that AI fails at accomplishing. For example, if you're angry at something and just want to be left alone, your companion robot might just follow what you say without understanding your underlying emotion, while an actual human would instantly empathise with your situation and rather try to be there for you.

Bronwyn van der Merwe said, "As human beings, we have contextual understanding and we have empathy, and right now there isn't a lot of that built into AI. We do believe that in the future, the companies that are going to succeed will be those that can build into their technology that kind of an understanding."

What's in store for the future

If you ask me, right now we're on the highway to something really great. Yes, there are several aspects that are unclear about AI and robots making our lives easier vs disrupting them, but as time passes, science is fitting the pieces of the puzzle together to bring about positive changes in our lives. AI is improving on the emotional front as I write, although there are clearly miles to go. Companies like Affectiva are pioneering emotion recognition technology and are working hard to improve the way AI understands human emotions. Biggies like Microsoft have been working on bringing emotional intelligence into their AI since before 2015 and have come a long way since then. Perhaps, in the next Terminator movie, Arnie might just comfort a weeping Sarah Connor, saying, "Don't cry, Sarah dear, he's not worth it", or something of the sort. As a parting note, and just for funsies, here's a final question for you: "Can you imagine a point in the future when robots have such high levels of EQ that some of us might consider choosing them as a partner over humans?"

An Introduction to PhoneGap

Robi Sen
27 Feb 2015
9 min read
This is the first of a series of posts that will focus on using PhoneGap, the free and open source framework for creating mobile applications using web technologies such as HTML, CSS, and JavaScript, that will come in handy for game development. In this first article, we will introduce PhoneGap and build a very simple Android application using PhoneGap, the Android SDK, and Eclipse. In a follow-on article, we will look at how you can use PhoneGap and PhoneGap Build to create iOS apps, Android apps, BlackBerry apps, and others from the same web source code. In future articles, we will dive deeper into exploring the various tools and features of PhoneGap that will help you build great mobile applications that perform and function just like native applications.

Before we get into setting up and working with PhoneGap, let's talk a little bit about what PhoneGap is. PhoneGap was originally developed by a company called Nitobi but was later purchased by Adobe Inc. in 2011. When Adobe acquired PhoneGap, it donated the code of the project to the Apache Software Foundation, which renamed the project to Apache Cordova. While both tools are similar and open source, and PhoneGap is built upon Cordova, PhoneGap has additional capabilities to integrate tightly with Adobe's Enterprise products, and users can opt for full support and training. Furthermore, Adobe offers PhoneGap Build, which is a web-based service that greatly simplifies building Cordova/PhoneGap projects. We will look at PhoneGap Build in a future post.

Apache Cordova is the core code base that Adobe PhoneGap draws from. While both are open source and free, PhoneGap has a paid-for Enterprise version with greater Adobe product integration, management tools, and support. Finally, Adobe offers a free service called PhoneGap Build that eases the process of building applications, especially for those needing to build for many devices.

Getting Started

For this post, to save space, we are going to jump right into getting started with PhoneGap and Android and spend a minimal amount of time on other configurations. To follow along, you need to install Node.js, PhoneGap, Apache Ant, Eclipse, the Android Developer Tools for Eclipse, and the Android SDK. We'll be using Windows 8.1 for development in this post, but the instructions are similar regardless of the operating system. Installation guides for any major OS can be found at each of the links provided for the tools you need to install.

Eclipse and the Android SDK

The easiest way to install the Android SDK and the Android ADT for Eclipse is to download the Eclipse ADT bundle here. Just downloading the bundle and unpacking it to a directory of your choice will include everything you need to get moving. If you already have Eclipse installed on your development machine, then you should go to this link, which will let you download the SDK and the Android Development Tools along with instructions on how to integrate the ADT into Eclipse. Even if you have Eclipse, I would recommend just downloading the Eclipse ADT bundle and installing it into its own unique environment. The ADT plugin can sometimes have conflicts with other Eclipse plugins.

Making sure Android tooling is set up

One thing you will need to do, no matter whether you use the Eclipse ADT bundle or not, is to make sure that the Android tools are added to your class path. This is because PhoneGap uses the Android Development Tools and Android SDK to build and compile the Android application.
The easiest way to make sure everything is added to your path is to edit your environment variables. To do that, just search for "Edit Environment" and select Edit the system environment variables. This will open your System Properties window. From there, select Advanced and then Environment Variables, as shown in the next figure. Under System Variables, select Path and Edit. Now you need to add sdk\platform-tools and sdk\tools to your path, as shown in the next figure. If you have used the Eclipse ADT bundle, your SDK directory should be of the form C:\adt-bundle-windows-x86_64-20131030\sdk. If you cannot find your Android SDK, search for your ADT. In our case, the two directory paths we add to the Path variable are C:\adt-bundle-windows-x86_64-20131030\sdk\platform-tools and C:\adt-bundle-windows-x86_64-20131030\sdk\tools. Once you're done, select OK, but don't exit the Environment Variables screen just yet, since we will need to come back to it when installing Ant.

Installing Ant

PhoneGap makes use of Apache Ant to help build projects. Download Ant from here and make sure to add the bin directory to your path. It is also good to set the environment variable ANT_HOME as well. To do that, create a new variable in the Environment Variables screen under System Variables called ANT_HOME and point it to the directory where you installed Ant. For more detailed instructions, you can read the official install guide for Apache Ant here.

Installing Node.js

Node.js is a development platform built on Chrome's JavaScript runtime engine that can be used for building large-scale, real-time, server-based applications. Node.js is used to provide a lot of the command-line tools for PhoneGap, so to install PhoneGap, we first need Node.js. Unix, OS X, and Windows users can find installers as well as source code on the Node.js download site. For this post, we will be using the Windows 64-bit installer, which you should be able to double-click and install. Once you're done installing, you should be able to open a command prompt, type npm --version, and see something like this:

Installing PhoneGap

Once you have Node.js installed, open a command line and type npm install -g phonegap. Node will now download and install PhoneGap and its dependencies as shown here:

Creating an initial project in PhoneGap

Now that you have PhoneGap installed, let's use the command-line tools to create an initial PhoneGap project. First, create a folder where you want to store your project. Then, to create a basic project, all you need to do is type phonegap create mytestapp, as shown in the following figure. PhoneGap will now build a basic project with a deployable app. Now go to the directory you are using for your project's root directory. You should see a directory called mytestapp, and if you open that directory, you should see something like the following:

Now look under platforms > android and you should see something like what is shown in the next figure, which is the directory structure that PhoneGap made for your Android project. Make sure to note the assets directory, which contains the HTML and JavaScript of the application, and the cordova directories, which contain the necessary code to tie Android's APIs to PhoneGap/Cordova's API calls. Now let's import the project into Eclipse. Open Eclipse, select Create a New Project, and select Android Project from Existing Code.
Browse to your project directory and select the platforms/android folder and select Finish, like this: You should now see the mytestapp project, but you may see a lot of little red X’s and warnings about the project not building correctly. To fix this, all you need to do is clean and build the project again like so: Right-click on the project directory. In the resulting Properties dialog, select Android from the navigation pane. For the project build target, select the highest Android API level you have installed. Click on OK. Select Clean from the Project menu. This should correct all the errors in the project. If it does not, you may need to then select Build again if it does not automatically build. Now you can finally launch your project. To do this, select the HelloWorld project and right-click on it, and select Run as and then Android application. You may now be warned that you do not have an Android Virtual Device, and Eclipse will launch the AVD manager for you. Follow the wizard and set up an AVD image for your API. You can do this by selecting Create in the AVD manager and copying the values you see here: Once you have built the image, you should now be able to launch the emulator. You may have to again right-click on the HelloWorld directory and select Run as then Android application. Select your AVD image and Eclipse will launch the Android emulator and push the HelloWorld application to the virtual image. Note that this can take up to 5 minutes! In a later post, we will look at deploying to an actual Android phone, but for now, the emulator will be sufficient. Once the Android emulator has started, you should see the Android phone home screen. You will have to click-and-drag on the home screen to open it, and you should see the phone launch pad with your PhoneGap HelloWorld app. If you click on it, you should see something like the following: Summary Now that probably seemed like a lot of work, but now that you are set up to work with PhoneGap and Eclipse, you will find that the workflow will be much faster when we start to build a simple application. That being said, in this post, you learned how to set up PhoneGap, how to build a simple application structure, how to install and set up Android tooling, and how to integrate PhoneGap with the Eclipse ADT. In the next post, we will actually get into making a real application, look at how to update and deploy code, and how to push your applications to a real phone. About the author Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

The evolution of cybercrime

Packt Editorial Staff
29 Mar 2018
4 min read
A history of cybercrime As computer systems have now become integral to the daily functioning of businesses, organizations, governments, and individuals we have learned to put a tremendous amount of trust in these systems. As a result, we have placed incredibly important and valuable information on them. History has shown, that things of value will always be a target for a criminal. Cybercrime is no different. As people flood their personal computers, phones, and so on with valuable data, they put a target on that information for the criminal to aim for, in order to gain some form of profit from the activity. In the past, in order for a criminal to gain access to an individual's valuables, they would have to conduct a robbery in some shape or form. In the case of data theft, the criminal would need to break into a building, sifting through files looking for the information of greatest value and profit. In our modern world, the criminal can attack their victims from a distance, and due to the nature of the internet, these acts would most likely never meet retribution. Cybercrime in the 70s and 80s In the 70s, we saw criminals taking advantage of the tone system used on phone networks. The attack was called phreaking, where the attacker reverse-engineered the tones used by the telephone companies to make long distance calls. In 1988, the first computer worm made its debut on the internet and caused a great deal of destruction to organizations. This first worm was called the Morris worm, after its creator Robert Morris. While this worm was not originally intended to be malicious it still caused a great deal of damage. The U.S. Government Accountability Office in 1980 estimated that the damage could have been as high as $10,000,000.00. 1989 brought us the first known ransomware attack, which targeted the healthcare industry. Ransomware is a type of malicious software that locks a user's data, until a small ransom is paid, which will result in the issuance of a cryptographic unlock key. In this attack, an evolutionary biologist named Joseph Popp distributed 20,000 floppy disks across 90 countries, and claimed the disk contained software that could be used to analyze an individual's risk factors for contracting the AIDS virus. The disk however contained a malware program that when executed, displayed a message requiring the user to pay for a software license. Ransomware attacks have evolved greatly over the years with the healthcare field still being a very large target. The birth of the web and a new dawn for cybercrime The 90s brought the web browser and email to the masses, which meant new tools for cybercriminals to exploit. This allowed the cybercriminal to greatly expand their reach. Up till this time, the cybercriminal needed to initiate a physical transaction, such as providing a floppy disk. Now cybercriminals could transmit virus code over the internet in these new, highly vulnerable web browsers. Cybercriminals took what they had learned previously and modified it to operate over the internet, with devastating results. Cybercriminals were also able to reach out and con people from a distance with phishing attacks. No longer was it necessary to engage with individuals directly. You could attempt to trick millions of users simultaneously. Even if only a small percentage of people took the bait you stood to make a lot of money as a cybercriminal. The 2000s brought us social media and saw the rise of identity theft. 
A bullseye was painted for cybercriminals with the creation of databases containing millions of users' personal identifiable information (PII), making identity theft the new financial piggy bank for criminal organizations around the world. This information coupled with a lack of cybersecurity awareness from the general public allowed cybercriminals to commit all types of financial fraud such as opening bank accounts and credit cards in the name of others. Cybercrime in a fast-paced technology landscape Today we see that cybercriminal activity has only gotten worse. As computer systems have gotten faster and more complex we see that the cybercriminal has become more sophisticated and harder to catch. Today we have botnets, which are a network of private computers that are infected with malicious software and allow the criminal element to control millions of infected computer systems across the globe. These botnets allow the criminal element to overload organizational networks and hide the origin of the criminals: We see constant ransomware attacks across all sectors of the economy People are constantly on the lookout for identity theft and financial fraud Continuous news reports regarding the latest point of sale attack against major retailers and hospitality organizations This is an extract from Information Security Handbook by Darren Death. Follow Darren on Twitter: @DarrenDeath. 

Why mobile VR sucks

Amarabha Banerjee
09 Jul 2018
4 min read
If you're following the news, chances are you've heard about Virtual Reality (VR) headsets like the Oculus Rift, Samsung Gear VR, and HTC Vive. Trending terms and buzzwords are all good for a business or technology that's novel and yet to be adopted by the majority of consumers. But the proof of the pudding is when people have started using the tech, and the first reactions to mobile VR are not at all good. This has even prompted John Carmack, CTO of Oculus, to state, "We are coasting on novelty, and the initial wonder of being something people have never seen before". The jury is out on present-day mobile VR technologies and headsets: in their present form, they suck. If you want to know why, and what can make them better, read ahead.

Hardware is expensive

Mobile headsets are costly, mostly in the $399-$799 range. The most successful VR headset to date is Google Cardboard. The reason: it's dirt cheap and it doesn't need much setup or customization. Such a high price at the initial launch phase of any technology is going to make users wary. Not many people want to buy an expensive new toy without knowing exactly how good it's going to be.

VR games don't match up to video game quality

The initial VR games for mobile were very poor. There are more than two billion mobile gamers across the world, undeniably a huge market to tap into. But we have to keep in mind that these gamers already have access to high-quality games which they can play just by tapping their mobile screens. For them to strap on that headset and get immersed in VR games, the incentive needs to be too alluring to resist. The current crop of VR games lacks complexity, and their UI design is not intuitive enough to hold a user's attention for long, especially when playing a VR game means strapping on that headgear. These VR games also take too much time to load, which is another huge negative.

The hype vs reality gap is improving, but it's painfully slow

The current phase of VR is the initial breakthrough stage, where there are a lot of expectations from it. But the games and apps are not up to the mark, and hence those who have used them are giving them a thumbs down. The word-of-mouth publicity is mostly negative, and this is creating a negative impact on mobile VR as a whole. The chart below shows the gap between initial expectations and the reality of VR, and how it might shape up in the near future, according to Unity's CEO John Riccitiello.

AR vs VR vs MR: A concoction for confusion

The popularity of Augmented Reality (AR) and the emergence of Mixed Reality (MR), an amalgamation of both AR and VR, have left developers unsure about which platform to target and what methodology to adopt. The UX and UI design are quite different for AR, VR, and MR, and hence all three disciplines would need dedicated development resources. For this to happen, these disciplines would have to be formalized first, and until that time, the quality of the apps will not improve drastically.

No unified VR development platform

Mobile VR is dependent on SDKs, and primarily on the two game engines, Unity and Unreal Engine, that have come up with support for VR game development. While Unity is one of the biggest names in the game development industry, a dedicated and unified VR development platform is still missing in action. As for Unity and Unreal Engine, their priority will not be VR any time soon. Things can change if and when some tech giant like Google, Microsoft, or Facebook
decides to dedicate its resources to creating VR apps and games for mobile. Although Google has Cardboard, Facebook has unveiled React VR and support for AR development, and Microsoft has its own game going on with HoloLens AR and MR development, the trend that started it all still seems to be lost among its newer cousins. I think VR will be big, but it will have to wait until it is implemented at scale by some major business or company. Till then, we will have to wear our ghastly headsets and imagine that we are living in the future.

Game developers say Virtual Reality is here to stay
Microsoft introduces SharePoint Spaces, adds virtual reality support to SharePoint
Build a Virtual Reality Solar System in Unity for Google Cardboard

Understanding Sentiment Analysis and other key NLP concepts

Sunith Shetty
20 Dec 2017
12 min read
[box type="note" align="" class="" width=""]This article is an excerpt taken from a book Big Data Analytics with Java written by Rajat Mehta. This book will help you learn to perform big data analytics tasks using machine learning concepts such as clustering, recommending products, data segmentation and more. [/box] With this post, you will learn what is sentiment analysis and how it is used to analyze emotions associated within the text. You will also learn key NLP concepts such as Tokenization, stemming among others and how they are used for sentiment analysis. What is sentiment analysis? One of the forms of text analysis is sentimental analysis. As the name suggests this technique is used to figure out the sentiment or emotion associated with the underlying text. So if you have a piece of text and you want to understand what kind of emotion it conveys, for example, anger, love, hate, positive, negative, and so on you can use the technique sentimental analysis. Sentimental analysis is used in various places, for example: To analyze the reviews of a product whether they are positive or negative This can be especially useful to predict how successful your new product is by analyzing user feedback To analyze the reviews of a movie to check if it's a hit or a flop Detecting the use of bad language (such as heated language, negative remarks, and so on) in forums, emails, and social media To analyze the content of tweets or information on other social media to check if a political party campaign was successful or not  Thus, sentimental analysis is a useful technique, but before we see the code for our sample sentimental analysis example, let's understand some of the concepts needed to solve this problem. [box type="shadow" align="" class="" width=""]For working on a sentimental analysis problem we will be using some techniques from natural language processing and we will be explaining some of those concepts.[/box] Concepts for sentimental analysis Before we dive into the fully-fledged problem of analyzing the sentiment behind text, we must understand some concepts from the NLP (Natural Language Processing) perspective. We will explain these concepts now. Tokenization From the perspective of machine learning one of the most important tasks is feature extraction and feature selection. When the data is plain text then we need some way to extract the information out of it. We use a technique called tokenization where the text content is pulled and tokens or words are extracted from it. The token can be a single word or a group of words too. There are various ways to extract the tokens, as follows: By using regular expressions: Regular expressions can be applied to textual content to extract words or tokens from it. By using a pre-trained model: Apache Spark ships with a pre-trained model (machine learning model) that is trained to pull tokens from a text. You can apply this model to a piece of text and it will return the predicted results as a set of tokens. To understand a tokenizer using an example, let's see a simple sentence as follows: Sentence: "The movie was awesome with nice songs" Once you extract tokens from it you will get an array of strings as follows: Tokens: ['The', 'movie', 'was', 'awesome', 'with', 'nice', 'songs'] [box type="shadow" align="" class="" width=""]The type of tokens you extract depends on the type of tokens you are interested in. 
Here we extracted single tokens, but tokens can also be a group of words, for example, 'very nice', 'not good', 'too bad', and so on.[/box] Stop words removal Not all the words present in the text are important. Some words are common words used in the English language that are important for the purpose of maintaining the grammar correctly, but from conveying the information perspective or emotion perspective they might not be important at all, for example, common words such as is, was, were, the, and so. To remove these words there are again some common techniques that you can use from natural language processing, such as: Store stop words in a file or dictionary and compare your extracted tokens with the words in this dictionary or file. If they match simply ignore them. Use a pre-trained machine learning model that has been taught to remove stop words. Apache Spark ships with one such model in the Spark feature package. Let's try to understand stop words removal using an example: Sentence: "The movie was awesome" From the sentence we can see that common words with no special meaning to convey are the and was. So after applying the stop words removal program to this data you will get: After stop words removal: [ 'movie', 'awesome', 'nice', 'songs'] [box type="shadow" align="" class="" width=""]In the preceding sentence, the stop words the, was, and with are removed.[/box] Stemming Stemming is the process of reducing a word to its base or root form. For example, look at the set of words shown here: car, cars, car's, cars' From our perspective of sentimental analysis, we are only interested in the main words or the main word that it refers to. The reason for this is that the underlying meaning of the word in any case is the same. So whether we pick car's or cars we are referring to a car only. Hence the stem or root word for the previous set of words will be: car, cars, car's, cars' => car (stem or root word) For English words again you can again use a pre-trained model and apply it to a set of data for figuring out the stem word. Of course there are more complex and better ways (for example, you can retrain the model with more data), or you have to totally use a different model or technique if you are dealing with languages other than English. Diving into stemming in detail is beyond the scope of this book and we would encourage readers to check out some documentation on natural language processing from Wikipedia and the Stanford nlp website. [box type="shadow" align="" class="" width=""]To keep the sentimental analysis example in this book simple we will not be doing stemming of our tokens, but we will urge the readers to try the same to get better predictive results.[/box] N-grams Sometimes a single word conveys the meaning of context, other times a group of words can convey a better meaning. For example, 'happy' is a word in itself that conveys happiness, but 'not happy' changes the picture completely and 'not happy' is the exact opposite of 'happy'. If we are extracting only single words then in the example shown before, that is 'not happy', then 'not' and 'happy' would be two separate words and the entire sentence might be selected as positive by the classifier However, if the classifier picks the bi-grams (that is, two words in one token) in this case then it would be trained with 'not happy' and it would classify similar sentences with 'not happy' in it as 'negative'. 
Therefore, for training our models we can either use a uni-gram or a bi-gram where we have two words per token or as the name suggest an n-gram where we have 'n' words per token, it all depends upon which token set trains our model well and it improves its predictive results accuracy. To see examples of n-grams refer to the following table:   Sentence The movie was awesome with nice songs Uni-gram ['The', 'movie', 'was', 'awesome', 'with', 'nice', 'songs'] Bi-grams ['The movie', 'was awesome', 'with nice', 'songs'] Tri-grams ['The movie was', 'awesome with nice', 'songs']   For the purpose of this case study we will be only looking at unigrams to keep our example simple. By now we know how to extract words from text and remove the unwanted words, but how do we measure the importance of words or the sentiment that originates from them? For this there are a few popular approaches and we will now discuss two such approaches. Term presence and term frequency Term presence just means that if the term is present we mark the value as 1 or else 0. Later we build a matrix out of it where the rows represent the words and columns represent each sentence. This matrix is later used to do text analysis by feeding its content to a classifier. Term Frequency, as the name suggests, just depicts the count or occurrences of the word or tokens within the document. Let's refer to the example in the following table where we find term frequency:   Sentence The movie was awesome with nice songs and nice dialogues. Tokens (Unigrams only for now) ['The', 'movie', 'was', 'awesome', 'with', 'nice', 'songs', 'and', 'dialogues'] Term Frequency ['The = 1', 'movie = 1', 'was = 1', 'awesome = 1', 'with = 1', 'nice = 2', 'songs = 1', 'dialogues = 1']   As seen in the preceding table, the word 'nice' is repeated twice in the preceding sentence and hence it will get more weight in determining the opinion shown by the sentence. Bland term frequency is not a precise approach for the following reasons: There could be some redundant irrelevant words, for example, the, it, and they that might have a big frequency or count and they might impact the training of the model There could be some important rare words that could convey the sentiment regarding the document yet their frequency might be low and hence they might not be inclusive for greater impact on the training of the model Due to this reason, a better approach of TF-IDF is chosen as shown in the next sections. TF-IDF TF-IDF stands for Term Frequency and Inverse Document Frequency and in simple terms it means the importance of a term to a document. It works using two simple steps as follows: It counts the number of terms in the document, so the higher the number of terms the greater the importance of this term to the document. Counting just the frequency of words in a document is not a very precise way to find the importance of the words. The simple reason for this is there could be too many stop words and their count is high so their importance might get elevated above the importance of real good words. To fix this, TF-IDF checks for the availability of these stop words in other documents as well. If the words appear in other documents as well in large numbers that means these words could be grammatical words such as they, for, is, and so on, and TF-IDF decreases the importance or weight of such stop words. 
Let's try to understand TF-IDF using the following figure: As seen in the preceding figure, doc-1, doc-2, and so on are the documents from which we extract the tokens or words and then from those words we calculate the TF-IDFs. Words that are stop words or regular words such as for , is, and so on, have low TF-IDFs, while words that are rare such as 'awesome movie' have higher TF-IDFs. TF-IDF is the product of Term Frequency and Inverse document frequency. Both of them are explained here: Term Frequency: This is nothing but the count of the occurrences of the words in the document. There are other ways of measuring this, but the simplistic approach is to just count the occurrences of the tokens. The simple formula for its calculation is:      Term Frequency = Frequency count of the tokens Inverse Document Frequency: This is the measure of how much information the word provides. It scales up the weight of the words that are rare and scales down the weight of highly occurring words. The formula for inverse document frequency is: TF-IDF: TF-IDF is a simple multiplication of the Term Frequency and the Inverse Document Frequency. Hence: This simple technique is very popular and it is used in a lot of places for text analysis. Next let's look into another simple approach called bag of words that is used in text analytics too. Bag of words As the name suggests, bag of words uses a simple approach whereby we first extract the words or tokens from the text and then push them in a bag (imaginary set) and the main point about this is that the words are stored in the bag without any particular order. Thus the mere presence of a word in the bag is of main importance and the order of the occurrence of the word in the sentence as well as its grammatical context carries no value. Since the bag of words gives no importance to the order of words you can use the TF-IDFs of all the words in the bag and put them in a vector and later train a classifier (naïve bayes or any other model) with it. Once trained, the model can now be fed with vectors of new data to predict on its sentiment. Summing it up, we have got you well versed with sentiment analysis techniques and NLP concepts in order to apply sentimental analysis. If you want to implement machine learning algorithms to carry out predictive analytics and real-time streaming analytics you can refer to the book Big Data Analytics with Java.    

What are lightweight Architecture Decision Records?

Richard Gall
16 May 2018
4 min read
Architecture Decision Records (ADRs) document all the decisions made about software. Every change is recorded in a plain text file sitting inside a version control system (like GitHub). The record should be a complement to the information you can find in a version control system. The ADR provides context and information around every decision made about a piece of software. Why are lightweight Architecture Decision Records needed? We are always making decisions when we build software. Even the simplest piece of software will have required the engineer to take a number of different decisions. Often these decisions aren't obvious. If you've ever had to work with code written by someone else you're probably familiar with this sort of situation. You might have even found that when you come across someone else's code, you need to make a further decision. Either you can simply accept what has been written, and merely surmise and assume why it has been done in the way that it, or you can decide to change it, based on your own judgement. Neither option is ideal. This was what Michael Nygard identified in this blog post in 2011. This was when the concept of Architecture Decision Records first emerged. An ADR should prevent situations like this arising. That makes life easier for you. More importantly, it should mean that every decision is transparent to everyone involved. So, instead of blindly accepting something or immediately changing it, you can simply check the Architecture Decision Record. This will then inform how you proceed. Perhaps you need to make a change. But perhaps you also now understand the context of why something was built in the way it was. Any questions you might have should be explicitly answered in the architecture decision record. So, when you start asking yourself why has she done it like that? instead of floundering helplessly, you can find the answer in the ADR. Why lightweight Architecture Decision Records now? Architecture Decision Records aren't a new thing. Nygard wrote his post all the way back in 2011, after all. But the fact remains that the context from which Nygard was writing in 2011 was very specific. Today it is mainstream. As we've moved away from monolithic architecture towards microservices or serverless, decision making has become more and more important in software engineering. This is a point that is well explained in a blog post here: "The rise of lean development and microservices... complicates the ability to communicate architecture decisions. While these concepts are not inherently opposed to documentation, their processes often fail to effectively capture decision-making processes and reasoning. Another possible inefficiency when recording decisions is bad or out-of-date documentation. It's often a herculean effort to keep large, complex architecture documents current, making maintenance one of the most common barriers to entry." ADRs are, then, a way of managing the complexity in modern software engineering. They are a response to a fundamental need to better communicate decisions. Most importantly, they codify decision-making within the development process. It is when they are lightweight and sit within the project itself that they are most effective. Architecture Decision Record template Architecture Decision Records must follow a template. Not only does that mean everyone is working off the same page, it also means people are actually more likely to document their decisions. 
Think about it: if you're asked to note how you decide to do something without any guidelines, you're probably not going to do it at all. Below, you'll find an Architecture Decision Record example template. There are a number of different templates you can use, but it's probably best to sit down with your team and agree on what needs to be captured.

An Architecture Decision Record example

Date
Decision makers [who was involved in the decision taken]
Category [which part of the architecture does this decision pertain to]
Contextual outline [Explain why this decision was made. Outline the key considerations and assumptions at play]
Impact consequences [What does this decision mean for the project? What should someone reading this be aware of in terms of future decisions?]

As I've already noted, there are a huge number of ways you may want to approach this. Use this as a starting point.

Read next
Enterprise Architecture Concepts
Reactive Programming and the Flux Architecture

The 5 hurdles to overcome in JavaScript

Antonio Cucciniello
26 Jul 2017
5 min read
If you are new to JavaScript, you may find it a little confusing depending on what computer language you were using before. Although JavaScript is my favorite language to use today, I cannot say that it was always this way. There were some things I truly disliked and was genuinely confused by in JavaScript. At this point I have come to accept these things. Today we will discuss the five hurdles you may come across in the JavaScript programming language.

Global variables

No matter what programming language you are using, it is never a good idea to have variables, functions, or objects as part of your global scope. It is good practice to limit the number of global variables as much as possible. As programs get larger, there is a greater chance of naming collisions, and of giving access to code that does not necessarily need it by making it global. When implementing things, you want a variable's scope to be only as large as it needs to be. In JavaScript, you can access some global variables and objects through window. You can add things to this if you would like, but you should not do this.

Use of Bitwise Operators

As you probably know, JavaScript is a high-level language that does not communicate with the hardware much. There are these things called bitwise operators that allow you to compare the bits of two variables. For instance, x & y does an AND operation on x and y. The problem with this is that in JavaScript there is no such thing as integers, only double-precision floating point numbers. So in order to do the bitwise operation, it must convert x and y to integers, compare the bits, and then convert them back to floating point numbers. This is much slower to perform and really should not be done, but then again it is somehow allowed.

Coding style variations

From seeing many different open source repositories, there does not seem to be one coding style standard that everyone adheres to. Some people love semicolons, others hate them. Some people adore ES6, other people despise it. Personally, I am a fan of using standard for coding style, and I use ES5. That is solely my opinion though. When comparing code with other people who have completely different styles, it can be difficult to use their code or write something similar. It would be nice to have a more generally accepted style that is used by all JavaScript developers. It would make us overall more productive.

Objects

Coming from a class-based language, I found the topic of prototypical inheritance difficult to understand and use. In prototypical inheritance all objects inherit from Object.prototype. That means that if you try to refer to a property of an object that you have not defined for yourself, but it exists as part of Object.prototype, then it will execute using that property or function. There is a chain of objects where each object inherits all of the properties from its parent and all of that parent's parents. Meaning, your object might have access to plenty of functions it does not need. Luckily, you can override any of the parent functions by establishing a function for this object.

A large number of falsy values

Here is a table of falsy values that are used in JavaScript:

False value - Type
0 - Numbers
NaN - Numbers
'' - String
false - Boolean
null - Object
undefined - undefined

All of these values represent a different falsy value, but they are not interchangeable. They only work for their type in JavaScript. As a beginner, trying to figure out how to check for errors at certain points in your code can be tricky.
Not to harp on the problem with global variables again, but undefined and NaN are both variables that are part of global scope. This means that you can actually edit the values of them. This should perhaps be illegal, because this one change can affect your entire product or system. Conclusion As mentioned in the beginning, this post is simply an opinion. I am coming from a background in C/C++ and then to JavaScript. These were the top 5 problems I had with JavaScript that made me really scratch my head. You might have a completely different opinion reading this from your different technical background. I hope you share your opinion! If you enjoyed this post, tweet and tell me your least favorite part of using JavaScript, or if you have no such problems, then please share your favorite JavaScript feature! About the author Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.Js) from New Jersey.   His most recent project called Edit Docs is an Amazon Echo skill that allows users to edit Google Drive files using your voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello

What you should know about Unity 2018 Interface

Amarabha Banerjee
23 Jul 2018
8 min read
In this article we will show Unity 2018 primary views and windows; we will also cover layouts and the toolbar. The interface components covered in the post are the used most ones. This article is taken from the book Getting Started with Unity 2018 written by Dr. Edward Lavieri. Unity 2018 User Interface Components at glance When we first launch Unity, we might be intimidated by all the areas, tabs, menus, and buttons on the interface. Unity is a complex game engine with a lot of functionality, so we should expect more components for us to interact with. If we break the interface down into separate components, we can examine each one independently to gain a thorough understanding of the entire interface. As you can see here, we have identified six primary areas of the interface. We will examine each of these in subsequent sections. As you will quickly learn, this interface is customizable. The following screenshot shows the default configuration of the Unity user interface. Menu The Unity editor's main menu bar, as depicted here, consists of eight pull-down options. We will briefly review each menu option in this section. Additional details will be provided in subsequent chapters, as we start developing our Cucumber Beetle game: Unity's menus are contextual. This means that only menu items pertinent to the currently selected object will be enabled. Other non-applicable menu items will appear as gray instead of black and not be selectable. Unity The Unity menu item, shown here, gives us access to information about Unity, our software license, display options, module information, and access to preferences: Accessing the Unity | About Unity... menu option gives you access to the version of the engine you are running. There is additional information as well, but you would probably only use this menu option to check your Unity version. The Unity | Preferences... option brings up the Unity Preferences dialog window. That interface has seven side tabs: General, External Tools, Colors, Keys, GI Cache, 2D, and Cache Server. You are encouraged to become familiar with them as you gain experience in Unity. The Unity | Modules option provides you with a list of playback engines that are running as well as any Unity extensions. You can quit the Unity game engine by selecting the Unity | Quit menu option. File Unity's File menu includes access to your game's scenes and projects. We will use these features throughout our game development process. As you can see in the following screenshot, we also have access to the Build Settings. Edit The Edit menu has similar functionality to standard editors, not just game engines. For example, the standard Cut, Copy, Paste, Delete, Undo, and Redo options are there. Moreover, the short keys are aligned with the software industry standard. As you can see from the following screenshot, there is additional functionality accessible here. There are play, pause, and step commands. We can also sign in and out of our Unity account: The Edit | Project Settings option gives us access to Input, Tags and Layers, Audio, Time, Player, Physics, Physics 2D, Quality, Graphics, Network, Editor, and Script Execution Order. In most cases, selecting one of these options opens or focuses keyboard control to the specific functionality. Assets Assets are representations of things that we can use in our game. Examples include audio files, art files, and 3D models. There are several types of assets that can be used in Unity. 
As you can see from the following screenshot, we are able to create, import, and export assets: You will become increasingly familiar with this collection of functionality as you progress through the book and start developing your game.

GameObject

The GameObject menu provides us with the ability to create and manipulate GameObjects. In Unity, GameObjects are things we use in our game such as lights, cameras, 3D objects, trees, characters, cars, and so much more. As you can see here, we can create an empty GameObject as well as an empty child GameObject: We will have extensive hands-on dealings with the GameObject menu items throughout this book. At this point, it is important that you know this is where you go to create GameObjects as well as perform some manipulations on them.

Component

We know that GameObjects are just things. They actually only become meaningful when we add components to them. Components are an important concept in Unity, and we will be working with them a lot as we progress with our game's development. It is the components that implement functionality for our GameObjects. The following screenshot shows the various categories of components. This is one method for creating components in Unity:

Window

The Window menu option provides access to a lot of extra features. As you can see here, there is a Minimize option that will minimize the main Unity editor window. The Zoom option toggles full screen and zoomed view. The Layouts option provides access to various editor layouts, and lets you save or delete a layout. The following list provides a brief description of the remaining options available via the Window menu item. You will gain hands-on experience with these windows as you progress through this book:

Services: Access to integrated services: Ads, Analytics, Cloud Build, Collaborate, Performance Reporting, In-App Purchasing, and Multiplayer.
Scene: Brings focus to the Scene view. Opens the window if not already open. Additional details are provided later in this chapter.
Game: Brings focus to the Game view. Opens the window if not already open. Additional details are provided later in this chapter.
Inspector: Brings focus to the Inspector window. Opens the window if not already open. Additional details are provided later in this chapter.
Hierarchy: Brings focus to the Hierarchy window. Opens the window if not already open. Additional details are provided later in this chapter.
Project: Brings focus to the Project window. Opens the window if not already open. Additional details are provided later in this chapter.
Animation: Brings focus to the Animation window. Opens the window if not already open.
Profiler: Brings focus to the Profiler window. Opens the window if not already open.
Audio Mixer: Brings focus to the Audio Mixer window. Opens the window if not already open.
Asset Store: Brings focus to the Asset Store window. Opens the window if not already open.
Version Control: Unity provides functionality for most popular version control systems.
Collab History: If you are using an integrated collaboration tool, you can access the history of changes to your project here.
Animator: Brings focus to the Animator window. Opens the window if not already open.
Animator Parameter: Brings focus to the Animator Parameter window. Opens the window if not already open.
Sprite Packer: Brings focus to the Sprite Packer window. Opens the window if not already open. In order to use this feature, you will need to enable Legacy Sprite Packing in Project Settings.
Experimental: Brings focus to the Experimental window. Opens the window if not already open. By default, the Look Dev experimental feature is available. Additional experimental features can be found in the Unity Asset Store.
Test Runner: Brings focus to the Test Runner window. Opens the window if not already open. This is a tool that runs tests on your code both in edit and play modes. Builds can also be tested.
Timeline Editor: Brings focus to the Timeline Editor window. Opens the window if not already open. This is a contextual menu item.
Lighting: Access to the Lighting window and the Light Explorer window.
Occlusion Culling: This feature allows you to select and edit how objects are drawn. With occlusion culling, only the objects within the current camera's visual range, and not obscured by other objects, are rendered.
Frame Debugger: This feature allows you to step through a game, one frame at a time, so you can see the draw calls on a given frame.
Navigation: Unity's navigation system allows us to implement artificial intelligence with regards to non-player character movement.
Physics Debugger: Brings focus to the Physics Debugger window. Opens the window if not already open. Here we can toggle several physics-related components to help debug physics in our games.
Console: Brings focus to the Console window. Opens the window if not already open. The Console window shows warnings and errors. You can also output data here during gameplay, which is a common internal testing approach.

To summarize, we have discussed the Unity 2018 interface. If you are interested to know more about using Unity 2018 and want to leverage its powerful features, you may refer to the book Getting Started with Unity 2018.

What's got game developers excited about Unity 2018.2?
Put your game face on! Unity 2018.1 is now available
Implementing lighting & camera effects in Unity 2018

What is interactive machine learning?

Amey Varangaonkar
23 Jul 2018
4 min read
Machine learning is a useful and effective tool to have when it comes to building prediction models or building a useful data structure from an avalanche of data. Many ML algorithms are in use today for a variety of real-world use cases. Given a sample dataset, a machine learning model can give predictions with only a certain accuracy, which largely depends on the quality of the training data fed to it. Is there a way to increase the prediction accuracy by somehow involving humans in the process? The answer is yes, and the solution is called 'Interactive Machine Learning'.

Why we need interactive machine learning

As we already discussed above, a model's predictions can only be as good as the quality of the training data fed to it. If the quality of the training data is not good enough, the model might:

Take more time to learn before it gives accurate predictions
Give predictions of very poor quality

This challenge can be overcome by involving humans in the machine learning process. By incorporating human feedback in the model training process, the model can be trained faster and more efficiently to give more accurate predictions. In the widely adopted machine learning approaches, including supervised and unsupervised learning, or even active learning for that matter, there is no way to include human feedback in the training process to improve the accuracy of predictions. In the case of supervised learning, for example, the data is already pre-labelled and is used without any actual inputs from the human during the training process. For this reason alone, the concept of interactive machine learning is seen by many machine learning and AI experts as a breakthrough.

How interactive machine learning works

Machine learning researchers Teng Lee, James Johnson, and Steve Cheng have suggested a novel way to include human inputs to improve the performance and predictions of the machine learning model. It is called the 'Transparent Boosting Tree' algorithm, which is a very interesting approach to combining the advantages of machine learning and human inputs in the final decision-making process. The Transparent Boosting Tree, or TBT for short, is an algorithm that visualizes the model and the prediction details of each step in the machine learning process to the user, takes their feedback, and incorporates it into the learning process. The ML model is in charge of updating the weights assigned to the inputs, and of filtering the information shown to the user for their feedback. Once the feedback is received, it can be incorporated by the ML model as a part of the learning process, thus improving it. A basic flowchart of the interactive machine learning process is as shown: Interactive Machine Learning. More in-depth information on how interactive machine learning works can be found in their paper.

What can interactive machine learning do for businesses

With the rising popularity and applications of AI across all industry verticals, humans may have a key role to play in the learning process of an algorithm, apart from just coding it. While observing the algorithm's own outputs or evaluations in the form of visualizations or plain predictions, humans can suggest ways to improve those predictions by giving feedback in the form of inputs such as labels, corrections, or rankings.
This helps the models in two ways:

It increases the prediction accuracy
The time taken for the algorithm to learn is shortened considerably

Both of these advantages can be invaluable to businesses as they look to incorporate AI and machine learning into their processes and look for faster and more accurate predictions. Interactive machine learning is still in its nascent stage and we can expect more developments in the domain to surface in the coming days. Once production-ready, it will undoubtedly be a game-changer.

Read more
Active Learning: An approach to training machine learning models efficiently
Anatomy of an automated machine learning algorithm (AutoML)
How machine learning as a service is transforming cloud
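The TBT algorithm itself is specified in the researchers' paper, but the general feedback loop it relies on is easy to sketch. Below is a minimal, illustrative Python example, using scikit-learn and synthetic data, both of which are my own assumptions rather than anything from the article or the paper: the model flags the predictions it is least certain about, a human reviews and corrects the labels for those examples, and the model is retrained on the enlarged, corrected dataset.

```python
# A rough human-in-the-loop sketch: this illustrates the general idea of
# folding human feedback into training, not the TBT algorithm itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real labelled dataset.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on a small labelled seed; treat the rest as an unlabelled pool.
seed = 20
model = LogisticRegression(max_iter=1000)
model.fit(X_train[:seed], y_train[:seed])
print("accuracy before feedback:", model.score(X_test, y_test))

# Surface the pool examples the model is least confident about.
pool_X, pool_y = X_train[seed:], y_train[seed:]
confidence = model.predict_proba(pool_X).max(axis=1)
ask_human = np.argsort(confidence)[:30]   # 30 least confident examples

# A person would inspect these and supply or correct labels; here the
# held-back ground-truth labels stand in for that human feedback.
feedback_labels = pool_y[ask_human]

# Fold the feedback back into the training set and refit the model.
X_updated = np.vstack([X_train[:seed], pool_X[ask_human]])
y_updated = np.concatenate([y_train[:seed], feedback_labels])
model.fit(X_updated, y_updated)
print("accuracy after feedback:", model.score(X_test, y_test))
```

In a real interactive system, the review step would be a UI that also exposes why the model made each prediction, which is the transparency aspect the TBT approach emphasizes.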

What is a micro frontend?

Amit Kothari
08 Oct 2017
6 min read
The microservice architecture enables us to write scalable and agile backend systems. Writing independent, self-contained services give us the flexibility to quickly add a new feature or easily change an existing one without affecting the whole system. Independently deployable services also allow us to scale our services as per the demand. We will show you how you can use a similar approach for frontend applications. You will learn about micro frontend architecture, its benefits, and strategy to break down a monolith web app into micro frontends. What is micro frontend architecture? Micro frontend architecture is an approach to developing web application as a composition of small frontend apps. Instead of writing a large monolith frontend application, the application is broken down into domain specific micro frontends, which are self-contained and can be developed and deployed independently. Advantages of using micro frontends Micro frontends bring the concept and benefits of micro services to frontend applications. Each micro frontend is self-contained, which allows faster delivery as multiple teams can work on different parts of the application without affecting each other. This also gives each team the freedom to choose different technology as required. Since the micro frontends are highly decoupled, they have a lower impact on other parts of the application and can be enhanced and deployed independently. Design considerations Let's say we want to build an online shopping website using micro frontend architecture. Instead of developing the site as one large application, we can split the website into micro frontends. For example, the pages to display lists of products and product details can be one micro frontend and the pages to show order history of a user can be another micro frontend. The user interface is made up of multiple micro frontends, but we do not want our users to feel that different pages are part of different apps. Here are some of the practices we can use to decompose a frontend application into smaller micro frontends, without compromising user experience. Single responsibility The first thing to consider is how to split an application into smaller apps so that each app can be developed and deployed independently. When teams are working on the different micro frontends, we want the apps to be highly decoupled so that a change in one app would not affect the other apps. This can be achieved by building domain specific micro frontends with single responsibility and well-defined bounded context. Just like our code, we want our micro frontends to have high cohesion and low coupling i.e. all the related code should be close to each other and less dependent on other modules. If we take the example of our online shopping site again, we want all the product related UI components in the product micro frontend and all the order related functionality in the order micro frontend. Let's say we have a user dashboard screen where users can see information from different domains, they can see their pending orders and also products which are on specials. Instead of creating a dashboard micro frontend, it is recommended to have the pending order UI component as part of order micro frontend and product related components as part of product micro frontend. This will allow us to split our system vertically and have domain specific frontend and backend services. 
Common interface for communication and data exchange For micro frontends to work harmoniously as a single web application, they need a common and consistent way to communicate with each other. Even if they are highly independent, they still need to talk to each other. One of the common approaches is to have an application that works as an integration layer. The app can work as a container to render different micro frontends and also facilitate communication between them. For example, in our online shopping website, once a user submits an order through the shopping cart micro frontend, we want to take the user to their order lists screen. Since both the order and shopping cart micro frontends are highly decoupled and do not know about each other, we can use the container app as the orchestration layer. On receiving order submission events from the shopping cart micro frontend, the container app will navigate the user to the order micro frontend. The container app can also be used to handle cross cutting concerns like user session management, analytics, etc. This approach works well with existing monolith frontends where the existing monolith application can work as the container and any new feature can be independently developed as a micro frontend and can be integrated into the existing app. The existing functionality can be also extracted and rewritten as micro frontends as required. Consistent look and feel Although our user interface is divided into multiple micro frontends, we still want our users to feel as if they are interacting with a single application. We want our apps to have a consistent look and feel, and also the ability to make UI changes easily across multiple apps. For example, we should be able to change the font or the primary colors across multiple micro frontends. This can be done by sharing CSS and assets like images, fonts, icons, etc. We also want the apps to use same UI components, for example, if we have date picker on multiple screens, we want all the date pickers to look the same. This can be achieved by creating a common library of UI components, which can be shared by micro frontends. Using shared assets and a UI component library will allow us to make changes easily instead of having to update multiple micro frontends. In this post, we discussed micro frontends, their benefits, and things to consider before migrating to micro frontend architecture. To deliver faster, we want the ability to build, test, and deploy features independently and this can be achieved by using micro frontends and microservices. Implementing micro frontends may present its own challenges and there will be technical hurdles to overcome but the benefits outweigh the complexity. If you are using micro frontend architecture, please share your experience with us. About the author Amit Kothari is a full stack software developer based in Melbourne, Australia. He has 10+ years experience in designing and implementing software mainly in Java/JEE. His recent experience is in building web application using JavaScript frameworks like React and AngularJS and backend micro services/ REST API in Java. He is passionate about lean software development and continuous delivery.