
How-To Tutorials

7008 Articles

GitHub acquires Semmle to secure open-source supply chain; attains CVE Numbering Authority status

Savia Lobo
19 Sep 2019
5 min read
Yesterday, GitHub announced that it has acquired Semmle, a code analysis platform provider, and that it is now a Common Vulnerabilities and Exposures (CVE) Numbering Authority.

https://twitter.com/github/status/1174371016497405953

The Semmle acquisition is part of GitHub's plan to secure the open-source supply chain, as Nat Friedman explains in his blog post. Semmle provides a code analysis engine, named QL, which allows developers to write queries that identify code patterns in large codebases and search for vulnerabilities and their variants. Security researchers use Semmle to quickly find vulnerabilities in code with simple declarative queries. "Semmle is trusted by security teams at Uber, NASA, Microsoft, Google, and has helped find thousands of vulnerabilities in some of the largest codebases in the world, as well as over 100 CVEs in open source projects to date," Friedman writes.

Also Read: GitHub now supports two-factor authentication with security keys using the WebAuthn API

Semmle, which originally spun out of research at Oxford in 2006, announced a $21 million Series B investment led by Accel Partners last year. "In total, the company raised $31 million before this acquisition," TechCrunch reports. Shanku Niyogi, Senior Vice President of Product at GitHub, writes in his blog post, "An important measure of the success of Semmle's approach is the number of vulnerabilities that have been identified and disclosed through their technology. Today, over 100 CVEs in open source projects have been found using Semmle, including high-profile projects like Apache Struts, Apple's XNU, the Linux Kernel, Memcached, U-Boot, and VLC. No other code analysis tool has a similar success rate."

GitHub also announced that it has been approved as a CVE Numbering Authority for open source projects. GitHub will now be able to issue CVEs for security advisories opened on GitHub, allowing for even broader awareness across the industry. With the Semmle integration, every CVE ID can be associated with a Semmle QL query, which can then be shared and tracked by the broader developer community. The CVE approval will make it easier for project maintainers to report security flaws directly from their repositories. GitHub can also assign CVE identifiers directly and post them to the CVE List and the National Vulnerability Database (NVD).

Earlier this year, GitHub acquired Dependabot to provide automatic security fixes natively within GitHub. With automatic security fixes, developers no longer need to manually patch their dependencies: when a vulnerability is found in a dependency, GitHub automatically issues a pull request on downstream repositories with the information needed to accept the patch.

In August, GitHub was in the limelight for being a part of the Capital One data breach that affected 106 million users in the US and Canada. The law firm Tycko & Zavareei LLP filed a lawsuit in California's federal district court on behalf of their plaintiffs Seth Zielicke and Aimee Aballo.

Also Read: GitHub acquires Spectrum, a community-centric conversational platform

Both plaintiffs claimed Capital One and GitHub were unable to protect users' personal data. The complaint highlighted that Paige A. Thompson, the alleged hacker, stole the data in March and posted about the theft on GitHub in April.

According to the lawsuit, "As a result of GitHub's failure to monitor, remove, or otherwise recognize and act upon obviously-hacked data that was displayed, disclosed, and used on or by GitHub and its website, the Personal Information sat on GitHub.com for nearly three months."

The Semmle acquisition may be GitHub's move to improve security for users in the future. It will be interesting to see how GitHub shapes security for users with the additional CVE approval.

A user on Reddit writes, "I took part in a tutorial session Semmle held at a university CS society event, where we were shown how to use their system to write semantic analysis passes to look for things like use-after-free and null pointer dereferences. It was only an hour and a bit long, but I found the query language powerful & intuitive and the platform pretty effective. At the time, you could set up your codebase to run Semmle passes on pre-commit hooks or CI deployments etc. and get back some pretty smart reporting if you had introduced a bug."

The user further writes, "The session focused on Java, but a few other languages were supported as first-class, iirc. It felt kinda like writing an SQL query, but over AST rather than tuples in a table, and using modal logic to choose the selections. It took a little while to first get over the 'wut' phase (like 'how do I even express this'), but I imagine that a skilled team, once familiar with the system, could get a lot of value out of Semmle's QL/semantic analysis, especially for large/enterprise-scale codebases."

https://twitter.com/kurtseifried/status/1174395660960796672
https://twitter.com/timneutkens/status/1174598659310313472

To know more about this announcement in detail, read GitHub's official blog post.

Other news in Data

- Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support, is now out
- Introducing Microsoft's AirSim, an open-source simulator for autonomous vehicles built on Unreal Engine
- GQL (Graph Query Language) joins SQL as a Global Standards Project and is now the international standard declarative query language for graphs


Understanding the Dependencies of a C++ Application

Packt
05 Apr 2017
9 min read
This article by Richard Grimes, author of the book Beginning C++ Programming, explains the dependencies of a C++ application. A C++ project will produce an executable or library, and this will be built by the linker from object files. The executable or library is dependent upon these object files. An object file will be compiled from a C++ source file (and potentially one or more header files). The object file is dependent upon these C++ source and header files. Understanding dependencies is important because it helps you understand the order in which to compile the files in your project, and it allows you to make your project build quicker by only compiling those files that have changed.

Libraries

When you include a file within your source file, the code within that header file will be accessible to your code. Your include file may contain whole function or class definitions (these will be covered in later chapters), but this will result in a problem: multiple definitions of a function or class. Instead, you can declare a class or function prototype, which indicates how calling code will call the function without actually defining it. Clearly the code will have to be defined elsewhere, and this could be a source file or a library, but the compiler will be happy because it only sees one definition.

A library is code that has already been defined; it has been fully debugged and tested, and therefore users should not need to have access to the source code. The C++ Standard Library is mostly shared through header files, which helps you when you debug your code, but you must resist any temptation to edit these files. Other libraries will be provided as compiled libraries. There are essentially two types of compiled libraries: static libraries and dynamic link libraries. If you use a static library, the compiler will copy the compiled code that you use from the static library and place it in your executable. If you use a dynamic link (or shared) library, the linker will add information used during runtime (it may be when the executable is loaded, or it may even be delayed until the function is called) to load the shared library into memory and access the function. Windows uses the extension .lib for static libraries and .dll for dynamic link libraries. GNU gcc uses the extension .a for static libraries and .so for shared libraries.

If you use library code in a static or dynamic link library, the compiler will need to know that you are calling a function correctly: it must make sure your code calls the function with the correct number of parameters and the correct types. This is the purpose of a function prototype: it gives the compiler the information it needs about calling the function without providing the actual body of the function, the function definition. In general, the C++ Standard Library will be included in your code through the standard header files. The C Runtime Library (which provides some code for the C++ Standard Library) will be statically linked, but if the compiler provides a dynamically linked version, you will have a compiler option to use that instead.

Pre-compiled Headers

When you include a file into your source file, the preprocessor will include the contents of that file (after taking into account any conditional compilation directives) and, recursively, any files included by that file. As illustrated earlier, this could result in thousands of lines of code.
As you develop your code, you will often compile the project so that you can test the code. Every time you compile, the code defined in the header files will also be compiled, even though the code in library header files will not have changed. With a large project this can make the compilation take a long time. To get around this problem, compilers often offer an option to pre-compile headers that will not change. Creating and using precompiled headers is compiler specific. For example, with gcc you compile a header as if it were a C++ source file (using the -x switch), and the compiler creates a file with an extension of .gch. When gcc compiles source files that use the header, it will search for the .gch file; if it finds the precompiled header it will use that, otherwise it will use the header file.

In Visual C++ the process is a bit more complicated, because you have to specifically tell the compiler to look for a precompiled header when it compiles a source file. The convention in Visual C++ projects is to have a source file called stdafx.cpp which has a single line that includes the file stdafx.h. You put all your stable header file includes in stdafx.h. Next, you create a precompiled header by compiling stdafx.cpp using the /Yc compiler option to specify that stdafx.h contains the stable headers to compile. This will create a .pch file (typically, Visual C++ will name it after your project) containing the code compiled up to the point of the inclusion of the stdafx.h header file. Your other source files must include the stdafx.h header file as the first header file, but they may also include other files. When you compile your source files, you use the /Yu switch to specify the stable header file (stdafx.h), and the compiler will use the precompiled .pch file instead of the header. When you examine large projects you will often find that precompiled headers are used, and as you can see, this alters the file structure of the project. The example later in this chapter will show how to create and use precompiled headers.

Project Structure

It is important to organize your code into modules to enable you to maintain it effectively. Even if you are writing C-like procedural code (that is, your code involves calls to functions in a linear way), you will benefit from organizing it into modules. For example, you may have functions that manipulate strings and other functions that access files, so you may decide to put the definition of the string functions in one source file, string.cpp, and the definition of the file functions in another file, file.cpp. So that other modules in the project can use these functions, you must declare the prototypes of the functions in a header file and include that header in the module that uses the functions. There is no absolute rule in the language about the relationship between the header files and the source files that contain the definitions of the functions. You may have a header file called string.h for the functions in string.cpp and a header file called file.h for the functions in file.cpp. Or you may have just one file called utilities.h that contains the declarations for all the functions in both files. The only rule that you have to abide by is that at compile time the compiler must have access to a declaration of the function in the current source file, either through a header file or through the function definition itself.
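As a minimal sketch of this header/source split (the file and function names here are illustrative, not from the book):

```cpp
// string_utils.h: declarations only, so it is safe to include from many files
#ifndef STRING_UTILS_H
#define STRING_UTILS_H

int count_words(const char* text);  // prototype: tells the compiler how the function is called

#endif

// string_utils.cpp: the single definition the linker will resolve
#include "string_utils.h"

int count_words(const char* text)
{
    int words = 0;
    bool in_word = false;
    for (; *text != '\0'; ++text)
    {
        if (*text == ' ' || *text == '\t' || *text == '\n')
            in_word = false;
        else if (!in_word)
        {
            in_word = true;
            ++words;
        }
    }
    return words;
}

// main.cpp: includes the header so the compiler can check the call site
#include "string_utils.h"

int main()
{
    return count_words("two words");  // returns 2; the linker resolves the call to the definition
}
```

Here main.cpp compiles without ever seeing the function body; only the linker needs the object file that contains the definition.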
The compiler will not look forward in a source file, so if a function calls another function in the same source file, that called function must have been defined before the calling function, or there must be a prototype declaration. This leads to a typical convention of having a header file associated with each source file, containing the prototypes of the functions in that source file, which the source file then includes. This convention becomes more important when you write classes.

Managing Dependencies

When a project is built with a build tool, checks are performed to see if the output of the build exists and, if not, the appropriate actions are performed to build it. Common terminology is that the output of a build step is called a target, and the inputs of the build step (for example, source files) are the dependencies of that target. The dependencies may themselves be targets of other build actions and have their own dependencies. For example, consider a project with three source files (main.cpp, file1.cpp, and file2.cpp), each of which includes the same header, utils.h, which is precompiled (hence there is a fourth source file, utils.cpp, that only includes utils.h). All of the source files depend on utils.pch, which in turn depends upon utils.h. The source file main.cpp has the main function and calls functions in the other two source files (file1.cpp and file2.cpp), and it accesses those functions through the associated header files file1.h and file2.h.

On the first compilation, the build tool will see that the executable depends on the four object files, and so it will look for the rule to build each one. In the case of the three C++ source files this means compiling the .cpp files, but since utils.obj is used to support the precompiled header, its build rule will be different to the other files. When the build tool has made these object files, it will link them together along with any library code (not shown here).

Subsequently, if you change file2.cpp and build the project, the build tool will see that only file2.cpp has changed, and since only file2.obj depends on file2.cpp, all the build tool needs to do is compile file2.cpp and then link the new file2.obj with the existing object files to create the executable. If you change the header file file2.h, the build tool will see that two files depend on this header, file2.cpp and main.cpp, so it will compile these two source files and link the new object files file2.obj and main.obj with the existing object files to form the executable. If, however, the precompiled header file, utils.h, changes, all of the source files will have to be compiled.

Summary

For a small project, dependencies are easy to manage, and as you have seen, for a single source file project you do not even have to worry about calling the linker because the compiler will do that automatically. As a C++ project gets bigger, managing dependencies gets more complex, and this is where development environments like Visual C++ become vital.

Resources for Article:

Further resources on this subject:
- Introduction to C# and .NET [article]
- Preparing to Build Your Own GIS Application [article]
- Writing a Fully Native Application [article]


How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]

Richard Gall
16 Apr 2019
8 min read
The popularity of Visual Studio Code is an indication of how the developer landscape is changing. It's also a testament to the way the development team behind it has been proactive in responding to these changes and using them to inform iterations of the product. But what are these changes, exactly?

One way of understanding them is to focus on the growth of full-stack development as a job role. Thanks to cloud native and increased levels of abstraction across the stack, developers are being asked to do more. They're not so much writing specialized code for the part of the stack that they 'own' in the way they might have been five years ago. Instead, a more holistic approach has become the norm.

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free from Microsoft here.

This holistic approach is one where developers assemble applications, with an even greater focus, and indeed deliberation, on matters of infrastructure and architecture. Shipping code is important, but there is more to development roles than simply producing the building blocks of software. A good full-stack developer is one who can shift between contexts. This could be context shifting in a literal sense, but it might also more broadly mean shifting between different types of problems. Crucial to this is an in-built DevOps mindset. Siloed compartmentalization is the enemy of the modern developer in just about every respect.

This is why Visual Studio Code is so valuable today. It tackles compartmentalization, helping developers move between different problems and contexts seamlessly. In turn, this bridges the gap between the role and responsibilities of a full-stack engineer and those of a DevOps engineer.

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

How does Visual Studio Code support a DevOps mindset?

From writing and optimizing code to deploying to the cloud and managing those cloud resources, Visual Studio Code allows developers to do multiple things easily. Let's take a look at some of the features of the editor (or IDE, depending on your perspective) to see what this means in practical terms.

VSCode helps developers assume responsibility for how their code performs

Code optimization becomes important when you really start to take the issue of accountability seriously. Yes, that can be frustrating (especially if you're a bad developer...), but it also makes writing code much more interesting. And Visual Studio Code has tools that help you to do just that. Like a good and friendly DevOps engineer, the editor has a set of in-built features and extensions that can help you write cleaner and more performant code.

There are, of course, plenty of extensions to help improve code, but before we even get there, Visual Studio Code has out-of-the-box features that can aid productivity. IntelliSense, for example, Microsoft's useful autocomplete feature, takes some of the legwork out of writing code. But it's once you get into the extensions that you can begin to see how much support Visual Studio Code can bring. IntelliSense has several useful extensions, one for npm and one for Node.js modules, for example, but there are plenty of other useful tools, from code snippets to the Chrome Debugger and Prettier, an extension for the popular code formatter that has seen 5 million downloads in a matter of months.

Individually, these tools might not seem like a big deal. But it's the fact that you have access to these various features, if you want them, of course, that marks VSC out as an editor that wants you to be productive, that wants you to focus on code health. That's not to say that Atom, emacs, or any other editor doesn't do this; it's just that VSC steps in and performs this work in lieu of your brain, which means you can focus your mind elsewhere.

VSCode has features that encourage really effective developer collaboration

DevOps is, fundamentally, a methodology about the way people build software together. Conversely, code editors often feel like a personal decision: something as unique to you as any other piece of technology that allows you to interface with information and with other people. And although VSC can feel as personal as any code editor (check out the themes; Dracula is particularly lovely...), because it is more than a typical code editor, it offers plenty of ways to enhance project management and collaboration.

Perhaps the most notable example of how Visual Studio Code supports collaboration is the Live Share extension. This nifty extension allows you to work on code with a colleague on a different machine, as if you were collaborating on a Word document. As well as being pretty novel, the implications of this could be greater than we realise. By making it easier to open up the way we work at a very individual, almost atomized level, the way we work together to build and deploy code immediately changes. Software engineering has always been a team sport, but coding itself was all too often a solitary activity. With Live Share you can quite deliberately begin to break down silos in a way you've never been able to do before.

Okay, but what does Live Share really have to do with DevOps? Well, if we can improve collaboration between people, we can immediately cut through any residual silos. But more than that, it also opens up new work practices for the future. Pair programming, for example, might feel like a bit of a flash-in-the-pan programming practice, but its principles become the norm if it is better facilitated. And while an individual agile technique certainly does not make DevOps magically happen, making it normal can go a long way towards improving code quality and accountability.

Visual Studio Code brings the cloud to full-stack developers (and vice versa)

Collaboration, productivity: great. Code optimization: also great. But if you really want to think about how Visual Studio Code bridges the gap between full-stack development and DevOps, you have to look at the ways in which it allows developers to leverage Azure. An article published by TechCrunch in 2016 argued that cloud, in the form of managed services, had 'killed DevOps'. That might have been a bit of an overstatement, but if you follow the logic you can begin to see that the industry is moving towards a world where cloud actually supports and facilitates DevOps processes. It almost removes the need for a separate operations engineer to deploy your applications appropriately. But even if the lack of friction here is appealing, you will nevertheless need to be open to a new way of working. And that is never straightforward.

That's where Visual Studio Code can help. With the Azure App Service or Azure Functions extensions, you can deploy and manage Azure resources directly from your editor. That's a productivity bonus, but it also immediately makes the concept of cloud or serverless more accessible, especially if you're new to it. In VSC you're literally working with it in a familiar space.

The conclusion of that TechCrunch article is instructive: "DevOps as a team may be gone sooner than later, but its practices will be carried on to whole development teams so that we can continue to build upon what it has brought us in the past years." Arguably, Visual Studio Code is an editor that takes us to that point, one where DevOps practices are embedded in development teams.

Visual Studio Code: built for the needs of individual developers and the future of the industry

Choosing a text editor is a subjective thing. Part of it is about what you're trying to achieve and what projects you're working on, but often it's not even about that: it's simply about what feels right. This means there are always going to be some developers for whom Visual Studio Code doesn't work, and that's fine. But unlike many other text editors, Visual Studio Code is so adaptable and flexible to every developer's individual needs that it's actually relatively straightforward to find a way to make it work for you. In turn, this means its advantages in terms of cloud-native development and improved collaboration aren't at the expense of that subjective feeling of 'yes... this works for me'. And if you're going to build great software, both things are important.

Try Visual Studio Code for yourself. Download it now. Learn how to use Node.js on Azure by downloading Learning Node.js with Azure for free from Microsoft.


Getting to know PyMC3, a probabilistic programming framework for Bayesian Analysis in Python

Vincy Davis
11 Dec 2019
5 min read
Bayes' theorem, named after the 18th-century British mathematician Thomas Bayes, is a mathematical formula for determining conditional probability. The theorem is used to revise or update existing predictions or theories using new or additional evidence. Bayes' theorem is also used in the field of data science, as it provides a rule for moving from a prior probability to a posterior probability. In Bayesian statistics, a prior probability is the probability of an event before new data is collected, and a posterior probability is the conditional probability assigned after the relevant evidence is acquired. Bayesian methods are therefore among the most popular machine learning techniques in data science.

In this post, we are going to discuss a specific Bayesian implementation called probabilistic programming (PP) in Python, considering that modern Bayesian statistics is mainly done by writing code. Probabilistic programming enables flexible specification of complex Bayesian statistical models, giving users the ability to focus more on model design, evaluation, and interpretation, and less on mathematical or computational details.

Further Reading: To know more about Bayesian data analysis techniques using PyMC3 and ArviZ, read our book 'Bayesian Analysis with Python', written by Osvaldo Martin. This book will help you acquire skills for a practical and computational approach towards Bayesian statistical modeling. The book also lists the best practices in Bayesian Analysis with the help of sample problems and practice exercises.

A group of researchers have published a paper, "Probabilistic Programming in Python using PyMC", presenting a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. PyMC3 is a popular open-source PP framework in Python with an intuitive and powerful syntax close to the natural notation statisticians use to describe models. The PyMC3 installation depends on several third-party Python packages which are automatically installed when installing via pip. It requires four dependencies: Theano, NumPy, SciPy, and Matplotlib. To take full advantage of PyMC3, the researchers suggest, the optional dependencies Pandas and Patsy should also be installed using: pip install patsy pandas.

How to use PyMC3 in probabilistic programming?

In the paper, the researchers utilize a simple Bayesian linear regression model with normal priors for the parameters. The unknown variables in the model are assigned prior distributions. Artificial data for the model is simulated using NumPy's random module, and the PyMC3 model is then used to recover the corresponding parameters. The PyMC3 model structure is straightforward to read because it stays close to the statistical notation; the paper presents the model in full first and then explains it in parts.

Firstly, the necessary components are imported from PyMC3 to build the required model. The paper states, "Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement: with basic_model: This creates a context manager, with our basic model as the context, that includes all statements until the indented block ends." This means that all the PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes.
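As a minimal sketch of this pattern, loosely following the paper's linear regression example (the simulated data and parameter values here are illustrative):

```python
import numpy as np
import pymc3 as pm

# Simulate artificial data with NumPy's random module
np.random.seed(123)
X = np.random.randn(100)
Y = 1.0 + 2.5 * X + np.random.randn(100) * 0.5

with pm.Model() as basic_model:
    # Priors for the unknown parameters; the context manager adds each
    # variable to basic_model behind the scenes
    alpha = pm.Normal('alpha', mu=0, sigma=10)   # 'sd=' in older PyMC3 releases
    beta = pm.Normal('beta', mu=0, sigma=10)
    sigma = pm.HalfNormal('sigma', sigma=1)

    # Expected value of the outcome
    mu = alpha + beta * X

    # Likelihood of the observed data
    Y_obs = pm.Normal('Y_obs', mu=mu, sigma=sigma, observed=Y)

    # The two estimation approaches discussed below
    map_estimate = pm.find_MAP()   # optimization-based point estimate
    trace = pm.sample(1000)        # MCMC sampling from the posterior
```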
In the absence of this context manager idiom, users would be forced to manually associate each of the variables with the basic model immediately after creating them. Also, if a user tries to create a new random variable without a with model: statement, it will cause an error due to the absence of an obvious model for the variable to be added to.

Next, posterior estimates are obtained for the unknown variables in the model. The researchers explain two approaches to obtaining posterior estimates; users can choose either of them depending on the structure of the model and the goals of the analysis. The first approach is finding the maximum a posteriori (MAP) point using optimization methods, and the second is computing summaries based on samples drawn from the posterior distribution using Markov chain Monte Carlo (MCMC) sampling methods.

For producing a posterior analysis of the required model, PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot. In the traceplot, the left column shows a smoothed histogram of the posterior, while the right column contains the samples of the Markov chain plotted in sequential order. In addition, the summary function of PyMC3 provides a text-based output of common posterior statistics.

You can also learn more about the practical implementation of PyMC3 and its loss functions in the book 'Bayesian Analysis with Python' by Packt Publishing.

- How Facebook data scientists use Bayesian optimization for tuning their online systems
- How to perform exception handling in Python with 'try, catch and finally'
- Fake Python libraries removed from PyPi when caught stealing SSH and GPG keys, reports ZDNet
- Netflix open-sources Metaflow, its Python framework for building and managing data science projects
- ActiveState adds thousands of curated Python packages to its platform


Inspecting APIs in ASP.NET Core [Tutorial]

Prasad Ramesh
24 Feb 2019
7 min read
REST is an architectural style for implementing communication between the application client and the server over HTTP. RESTful APIs use HTTP verbs (POST, GET, PUT, DELETE, and so on) to dictate the operation (Create, Read, Update, Delete) to be performed by the server on the domain entity. The REST style has become the de facto standard for creating services in modern application development, making it easy to consume services in any technology and on any platform, such as web frontends, desktop applications, or other web services.

This article is an excerpt from a book written by Tamir Dresher, Amir Zuker, and Shay Friedman titled Hands-On Full-Stack Web Development with ASP.NET Core. In this book, you will learn how to build RESTful APIs in C# with ASP.NET Core, web APIs, and Entity Framework.

Overview: REST APIs with ASP.NET Core

A basic ASP.NET Core MVC application can be broken down into three layers: models, controllers, and views. RESTful APIs in ASP.NET Core work very similarly; the only difference is that, instead of returning responses as visual views, the API response is a payload of data (usually in JSON format). The data returned from the API is later consumed by clients, such as Angular-based applications that can render the data as views, or by headless clients that have no UI and simply process data, for example, a background process that periodically sends notifications to a user about their account status.

Before ASP.NET Core, Microsoft made an explicit distinction between ASP.NET MVC and the ASP.NET Web API. The former was used to create web applications with views that are generated by the server, while the latter was used to create services that contain only logic and can be consumed by any client. Over time, the distinction between the two frameworks caused duplication of code and added a burden on developers who needed to learn and master two technologies. ASP.NET Core unified the two frameworks into the ASP.NET Core MVC suite and made it simpler to create web applications, with or without visual responses.

Let's start with a simple API that will be the foundation for our application, called, say, the GiveNTake application.

Creating a simple API

The GiveNTake application allows the user to see a catalog of available products. For this to be possible, our server needs to include a specific API method that the client application can call to get back the collection of available products. We will treat the product as a simple string in the format of [Product ID] - [Product Name]:

1. Open any basic project you may have created, and add a new controller class to the Controllers folder. Right-click on the newly created Controllers folder and choose Add | Controller. In the list of available templates, choose API Controller - Empty and click Add.
2. Set the name to ProductsController and click Add.
3. Add the following method to the generated controller:

```csharp
public string[] GetProducts()
{
    return new[]
    {
        "1 - Microwave",
        "2 - Washing Machine",
        "3 - Mirror"
    };
}
```

Your controller should look like this:

```csharp
[Route("api/Products")]
[ApiController]
public class ProductsController : Controller
{
    public string[] GetProducts()
    {
        return new[]
        {
            "1 - Microwave",
            "2 - Washing Machine",
            "3 - Mirror"
        };
    }
}
```

Congratulations! You have completed coding your first API method. The GetProducts method returns the collection of available products. To see it in action, build and run your project. This will open a browser with the base address of your ASP.NET application.
Add the /api/Products string to the base address in the browser's address bar and execute it. You should see the collection of strings appear on the screen.

Inspecting your APIs using Fiddler and Postman

Using your browser to execute your APIs is fine for simple APIs that retrieve data, but as you extend your APIs, you'll soon find that you need more powerful tools to test and debug what you develop. There are many tools that let you inspect and debug your APIs, but I have chosen to teach you about Fiddler and Postman because they are both simple and powerful.

Fiddler

Fiddler is a free web debugging tool that works as a proxy, logging all HTTP(S) traffic executed by processes on your computer. Fiddler allows you to inspect the traffic to see the exact HTTP request that was sent and the exact HTTP response that was returned. You can also use other advanced features, such as setting breakpoints and overriding the data that is sent or received. To install Fiddler, navigate to https://www.telerik.com/fiddler and click the Free download button. Save and run the installer.

The Fiddler main screen is built from these main parts:

- Sessions list: shows the HTTP(S) requests that were sent from processes on your machine
- Fiddler tabs: contains different tools for inspecting and controlling sessions
- Request inspector: when the Inspectors tab and inner Raw tab are selected, this section shows the request as it was sent over the wire
- Response inspector: when the Inspectors tab and inner Raw tab are selected, this section shows the response as it was sent over the wire

Immediately after you run Fiddler, it starts collecting the HTTP sessions performed on your machine. If you refresh the browser that you used to navigate to the /api/Products API you created, you should see this session in Fiddler's Sessions list. If you run a .NET application that sends HTTP requests to an address on your localhost, you won't see the session appear in Fiddler; changing the address to localhost.fiddler will force the request to be captured by Fiddler.

Fiddler is a great tool for debugging the requests and responses made by your application, but it means that you need a client that sends those requests. Often, when debugging and experimenting with APIs, you want to construct HTTP requests manually and inspect them. You can accomplish this with Fiddler's Composer tab, but I want to teach you about another tool that is much more suitable for these scenarios: Postman.

Postman

Postman is an HTTP client that simplifies the testing of web services and RESTful APIs. Postman allows you to easily construct HTTP requests, send them, and inspect them. Download Postman from https://www.getpostman.com/, and then install and run it:

1. On the introduction screen, click on the Request option.
2. Enter GetProducts in the Request name field, then type GiveNTake into the collection section and create a new collection. Press Save to create the new request.
3. Enter the full URL of the GetProducts API (for example, http://localhost:5267/api/products) in the URL field.
4. Make sure that the HTTP verb is set to GET and click on the Send button.

After a few moments, you should see the response that was received from your service, and you can now inspect the status code, response body, and headers.
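If you prefer the command line, the same request can also be exercised with curl (a sketch; the port number is illustrative and depends on your project's configuration):

```
curl -i http://localhost:5267/api/products

# Expected JSON response body:
# ["1 - Microwave","2 - Washing Machine","3 - Mirror"]
```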
You will find Postman to be an indispensable development tool while you develop your APIs, and we will use it many times as we go deeper into ASP.NET Core in this book. At this point, you might be wondering: how come the GetProducts method was invoked when we navigated to /api/Products? To answer this question, we need to talk about how ASP.NET Core routes requests to controllers and actions.

ASP.NET Core provides the infrastructure you need to create powerful RESTful APIs. In this article, you learned how to create controllers and actions that respond to HTTP requests and return HTTP responses that you control. We also introduced two popular tools, Fiddler and Postman, which you'll find very useful when you create and debug your API applications. To know more about APIs in ASP.NET Core, check out the book Hands-On Full-Stack Web Development with ASP.NET Core.

- Google announces the general availability of a new API for Google Docs
- What to expect in ASP.NET Core 3.0
- How to call an Azure function from an ASP.NET Core MVC application


NativeScript: What is it, and how to set it up

Amey Varangaonkar
09 May 2018
9 min read
In this tutorial, we introduce you to NativeScript, a framework that allows you to create a web application, deploy it on a mobile device, and use it like a native mobile app rather than as a web or hybrid application.

The following excerpt is taken from the book TypeScript 2.x By Example written by Sachin Ohri. This book presents essential techniques to leverage the power of TypeScript 2.x to build efficient web applications.

What is NativeScript?

NativeScript is an open source framework for building native Android and iOS applications with web technologies. This means we can develop native mobile applications with JavaScript, TypeScript, and/or Angular. It is based on the philosophy of write once and run everywhere. Applications developed with NativeScript are pure native mobile apps when compared to applications developed with technologies such as PhoneGap. As they are native mobile applications, we can use all the richness of the mobile platform and get the performance associated with it. We use native APIs and native controls to render, which allows us to create more sophisticated applications than a hybrid approach would. Hybrid applications do not provide the same level of flexibility or performance, because they are hosted in a separate framework and do not get to interact with low-level mobile APIs directly. The best part is that NativeScript does not require us to learn a new programming language, unlike developing an iOS-based application, for which you need to know Objective-C or Swift. So, we can use our existing skills to develop mobile applications.

NativeScript design

NativeScript is a runtime that sits on top of the native mobile operating system and uses the JavaScript engines V8 (on Android) and JavaScriptCore (on iOS). Having access to these platforms allows NativeScript to expose a unified API system for developers, which is then converted into the native API calls at runtime. This translation between the JavaScript APIs and the native platform APIs is possible through reflection, which NativeScript uses to create its own set of interfaces. Another advantage of NativeScript's use of JavaScript is its independence from specific editors. You can use any of your favorite editors to develop a NativeScript application, and you will have access to all the native APIs, rather than needing Xcode for iOS-based apps and Android Studio for Android-based apps.

Architecture

The runtime is responsible for converting JavaScript application code to the native platform code. It has various components that work together to convert and call the native APIs. Because NativeScript uses V8 and JavaScriptCore, it has access to all the latest ECMAScript language specifications, which allows us to use the latest ES6 feature set. One of the main components that we need to understand in NativeScript's design is modules.

Modules

The NativeScript team made sure that the platform was developed in a modular fashion, much like plugins, which allows us to include only the modules that we need in our development. These modules provide us with an abstraction of the native APIs and allow us to write code that works on both platforms. There are separate APIs for each logical piece of functionality. For example, if you want to use SQLite for your storage needs, there is a package for that; if you want to use a filesystem, there is a package for that.
Let's take one example to see how these modules help us write consistent code for a multiplatform environment. If you want to access the filesystem on the native platform using NativeScript, you will write code similar to the following:

```javascript
var filesystem = require("file-system");
new filesystem.file(path);
```

This code is written in pure JavaScript, which first gets a reference to the file-system module and then, using the module's API, calls a file method. When executed by the NativeScript runtime, this code first checks the platform it is running on and then converts the code accordingly.

The Android version of the code will be as follows:

```
new java.io.file(path)
```

The iOS version of the code will be as follows:

```
nsFileManager.defaultManager();
fileManager.createFileAtPathContentsAttributes(path);
```

If you have worked on either of the mobile platforms before, you will recognize this code as using the native filesystem API to access the file path.

NativeScript versus web applications

Until now, we have said that we can use our web technologies to write mobile applications with the help of NativeScript. So, can we write a pure web application and use the runtime to create a mobile application? Yes and no. Yes, because we can use the same code base to write with NativeScript, as we will see with our application. No, because not all components of web applications can be used directly. NativeScript allows us to use our existing JavaScript/TypeScript and CSS skills for developing the business logic and the design of our application. But because the native platforms are not web-based and do not have a DOM, we cannot use HTML as the template for our applications. Although you will see that the extension of our template files is .html, the element tags are somewhat different. To give a brief example, there are no UI elements such as <div> or <span>, but there are elements such as <StackLayout> and <DockLayout> that allow us to arrange our UI components. Another thing to note is that these UI elements are converted into native elements based on the platform. So, if we use the <Button> control in NativeScript, it will be converted into android.widget.Button on the Android platform and UIButton on iOS.

Setting up your NativeScript environment

NativeScript provides very good documentation about installing and setting up your development environment. You can find the documentation at https://docs.nativescript.org/angular/start/quick-setup. We will briefly go through the setup process here, but we recommend that you go through the documentation to understand the process.

NativeScript CLI

The best way to use NativeScript is through the NativeScript CLI. You can install it from npm using the following command:

```
npm install -g nativescript
```

This command installs the NativeScript library in your global scope. To confirm that the installation was successful, try running the following command from the command-line window:

```
tns
```

The tns command is short for Telerik NativeScript and will show the array of commands associated with it. The NativeScript CLI comes with a host of commands to assist in our development, such as create, which helps us create a basic starter project, and deploy, which tells the NativeScript CLI to deploy the application to a device (either a connected device or an emulator).
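For example, a typical first session with the CLI might look like this (a sketch; the app name is illustrative):

```
tns create MyFirstApp      # scaffold a basic starter project
cd MyFirstApp
tns run android            # build, deploy, and launch on a connected device or emulator
```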
You can check all the commands available with the NativeScript CLI by using the help command as follows:

```
tns --help
```

Installing mobile platform dependencies

To build native applications, we need to install the dependencies for the mobile platforms. It is important to remember that if we want to build a NativeScript application for iOS and run it on an iOS-compatible device, we need to use macOS; for building Android applications, we can use both Windows and macOS. NativeScript provides a single setup script for each of Windows and macOS that takes care of installing all the required tools and frameworks.

The script for Windows is as follows:

```
@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://www.nativescript.org/setup/win'))"
```

The script for macOS is as follows:

```
ruby -e "$(curl -fsSL https://www.nativescript.org/setup/mac)"
```

It's important to note that these scripts require administrator-level privileges, so you may need to run them using the sudo command. The documentation also provides a step-by-step guide to installing all these dependencies manually; details can be found at https://docs.nativescript.org/start/ns-setup-win. Once you have installed all the packages, you can check whether the installation was successful by running the following command:

```
tns doctor
```

This command checks all the prerequisites required for building a NativeScript application; if no issues are identified, it returns the success message, No issues were detected.

Installing an Android Virtual Device

Once you have installed all the dependencies, the next step is to install an Android emulator, which can be used for testing instead of connecting real devices. To be able to create an emulator, you need to have Android Studio on your machine, which you can install from https://developer.android.com/studio/index.html. Once you have installed Android Studio, you can check whether you have the correct Android SDK version. The NativeScript CLI needs Android SDK version 25 or higher; if you do not have the required version, you can install it either using the Android Studio IDE or using the following command:

```
"%ANDROID_HOME%\tools\bin\sdkmanager" "tools" "platform-tools" "platforms;android-25" "build-tools;25.0.2" "extras;android;m2repository" "extras;google;m2repository"
```

To install the Android emulator, we use Android Studio, the details of which can be found at https://docs.nativescript.org/tooling/android-virtual-devices. On macOS, we need to make sure we have Xcode installed, or else we will not be able to run iOS-based applications. Again, you can use the tns doctor command to check whether your installation was successful.

And that's it! You have successfully installed and set up the NativeScript environment. Want to learn how to develop native web apps? We've got it covered. All you have to do is check out the book TypeScript 2.x By Example to create and deploy a web app as a native app in a step-by-step manner.

- Tools in TypeScript
- Introducing Object Oriented Programming with TypeScript
- Writing SOLID JavaScript code with TypeScript

ChatGPT for Coding

Jakov Semenski
25 Apr 2024
6 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out, sign up today!

Introduction

ChatGPT's coding style is terrible: verbose, complex, and outdated. Let's change that.

ChatGPT promised to be our coding savior, but sometimes it feels more like a blast from the past. Remember those early 2000s coding books? Yep, it's giving those vibes. It's like having a sports car with a tractor engine: great potential, but the performance? Not quite there. Imagine harnessing the power of ChatGPT, but with the finesse of a master coder. Ready for the upgrade? Here are 12 pro prompts that will get you the right results.

Tip #1: Specificity is the king

As soon as you ask for a coding snippet from ChatGPT, by default you will get the most basic HelloWorld example. The more vague your prompt is, the more mediocre your results will be. Instead, specify exactly the language, version, and framework:

"Write backend code for a Library app that uses REST to communicate. Cover endpoints for adding, removing, and filtering books by category and date published. Use the latest Java version. Use lambda streams instead of for loops. Use the Spring framework."

Tip #2: Avoid code vomit

ChatGPT loves to write a lot of code, or, as I like to call it, "code vomit". We are no longer rewarded for the amount of code we produce, but for the clarity and principles we follow. Give ChatGPT instructions to write clean code, use the latest principles, and cover logging and exception handling:

"Write clean code. Code needs to be covered with logging and proper exception handling. Use principles: KISS & DRY, SOLID. Keep in mind to use design patterns where applicable. Using the coding instructions I gave you, give me the code for each class."

Tip #3: Make it easy to use with your IDE

Every time ChatGPT writes code, you get explanations, import statements, and comments. This can be good for a beginner, but it is not something we need, since our IDE is already good at importing all the right packages. Let ChatGPT know:

"When writing code, avoid detailed explanations, just simple bullet points. Don't add import statements, as the IDE will do this instead."

Tip #4: Write tests

Your code is not complete if you are not done with tests. But not just any tests. We want unit and integration tests in a readable format (given-when-then), covering the happy and unhappy paths, using the latest testing libraries such as AssertJ and BDDMockito:

"For each class, write unit and integration tests. Use the given-when-then format. For libraries, use BDDMockito and AssertJ. Cover happy and unhappy paths."

Tip #5: Give REST call request examples

What good is the app if we cannot test it with some examples? Instead of creating them manually, ask ChatGPT to create curl examples we can easily copy to Postman:

"For each request, generate curl examples."

Now go ahead and use your terminal, or copy/paste them to Postman.

Tip #6: Create documentation

We don't want just plain text; instead, we need a quick-start guide for developers:

"Write a quick start guide for developers using markdown. Imagine this app has been published to a GitHub repository. Cover: introduction, how to install the app, how to run it, how to use it."

Tip #7: Prepare a deployment script for the cloud

This app cannot live just in your local environment.
Instead, we need a deployment script. Depending on where you want to deploy your changes, it might be a Kubernetes cluster script, Google-specific Terraform scripts, an AWS CloudFormation script, or an Azure-specific deployment script. Or ask ChatGPT to suggest one:

"Provide me a deployment script for one of the most popular cloud providers."

Tip #8: Version control

Our code, for now, lives only locally. Let's ask ChatGPT for instructions on how to set up version control:

"Provide GitHub version control setup instructions."

Tip #9: Define a CI/CD pipeline

CI/CD, or continuous integration and continuous deployment, is a must-have step for any serious development. There are plenty of options to choose from, such as Jenkins, GitHub Actions, and Bamboo. With CI we guarantee we can safely merge our changes by running the build and the tests, and we check whether our code changes comply with sonar policies. With CD we guarantee that we can safely deploy our changes:

"Provide GitHub Actions so that for each open pull request we run the build and run all the tests. Also automatically include SonarQube scans. Also create a GitHub Action to run deployment on every code merge."

Tip #10: Performance optimization

Our backend REST service is now running, but the questions we need to ask ourselves are: how fast is it, how many requests can it handle, and what is the maximum limit of requests? For that, we need to execute performance tests, e.g. using JMeter or Gatling:

"We need to test what the limit of our app is. Write a load test script for Gatling that tests how many book searches we can execute."

Tip #11: Run a security audit

How can we ensure our app is secure and not open to any threats? The best way is to run security scans:

"Our application might be open to security threats. Which security scan tools can we use for free, and how can we use them? Give me step-by-step instructions on how to use them."

Tip #12: Optimize for observability

You have your app running somewhere in the cloud, but did you optimize it for observability? How can you easily troubleshoot issues? How can you trace requests between different services? Did you set up monitoring?

"We want to make sure our application is optimized for observability. Create guidelines and configuration for the cloud environment for: traceability (tracing requests from start to finish), monitoring (monitoring key performance metrics), and logging (a centralized logging system)."

Conclusion

You can find the full prompt here: https://chat.openai.com/share/f0bef1ca-062d-4a22-96aa-9711615329a5

ChatGPT is a tool, and like any tool, it shines when used the right way. With these prompts, you get a coding assistant that keeps up with the latest trends, ensuring your code is not just functional but also follows modern standards.

Author Bio

Jakov Semenski is an IT Architect working at IBMiX with almost 20 years of experience. He is also a ChatGPT speaker at the WeAreDevelopers conference and shares valuable tech stories on LinkedIn.


How Netflix migrated from a monolithic to a microservice architecture [Video]

Savia Lobo
17 Oct 2018
3 min read
A microservice architecture enables organizations to carry out continuous delivery/deployment of large, complex applications and to further evolve their technology stack. Netflix, a popular name in video streaming, started off by selling and renting DVDs, and gained popularity after its migration to a microservice architecture on AWS. The adoption of the public cloud and a microservice architecture were the main drivers of this growth; the elasticity of the cloud allowed Netflix to scale easily without any additional work required.

Why did Netflix decide to use microservices?

Back in August 2008, Netflix had a major outage caused by a database corruption, which prevented them from shipping DVDs to customers for three days. Following this, they decided to move away from a single point of failure that could only scale vertically, and towards components that could scale horizontally and were highly available. Netflix decided to abandon their private data centers and migrate to a public cloud, AWS to be specific, which provides horizontal scalability. In order to eliminate all the existing single points of failure, they decided to re-architect their systems instead of just moving them as-is to the cloud.

The Simian Army

A basic technique Netflix uses to make their systems more reliable and highly available is the Simian Army: a set of tools used to increase the resiliency of your services. The most widely used is Chaos Monkey, which allows one to introduce random failures into a system to see how it reacts.

Read more: Chaos Conf 2018 Recap: Chaos engineering hits maturity as community moves towards controlled experimentation

At Netflix, they randomly kill servers from their production fleets every once in a while and verify there is no difference in customer experience, confirming that the system handles these failures gracefully. Chaos Monkey can also be used to introduce network latency. Watch the video above by Dimos Raptis to dive deeper into Netflix's actual transition, including details about the specific techniques and methodologies they used and the lessons they learned along the way.

About Dimos Raptis

Dimos Raptis is a professional Software Engineer at Alexa, Amazon with several years of experience designing and developing software systems for various companies, ranging from small software shops to big tech companies. His technical expertise lies in the Java and Linux ecosystems, and he has hands-on experience with emergent open-source technologies. You can follow Dimos on Twitter @dimosr7.

To learn more about where to use microservices and to understand the aspects you should take into account when building your architecture, head over to our course titled 'Microservices Architecture'.

- Building microservices from a monolith Java EE app [Tutorial]
- How Netflix uses AVA, an Image Discovery tool to find the perfect title image for each of its shows
- NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!

HTML5 and the rise of modern JavaScript browser APIs [Tutorial]

Pavan Ramchandani
20 Jul 2018
15 min read
HTML5 first arrived on the scene in 2008. It was considered so technologically advanced at the time that it was predicted it would not be ready till at least 2022! However, that turned out to be incorrect, and here we are, with fully supported HTML5 and ES6/ES7/ES8-supported browsers. A lot of APIs used by HTML5 go hand in hand with JavaScript. Before looking at those APIs, let us understand a little about how JavaScript sees the web. This'll eventually put us in a strong position to understand various interesting, JavaScript-related things such as the Web Workers API, etc.

In this article, we will introduce you to the most popular web languages, HTML and JavaScript, and how they came together to become the default platform for building modern front-end web applications. This is an excerpt from the book, Learn ECMAScript - Second Edition, written by Mehul Mohan and Narayan Prusty.

The HTML DOM
The HTML DOM is a tree version of how the document looks. Here is a very simple example of an HTML document:

<!doctype HTML>
<html>
<head>
  <title>Cool Stuff!</title>
</head>
<body>
  <p>Awesome!</p>
</body>
</html>

Here's how its tree version will look: the previous diagram is just a rough representation of the DOM tree. HTML tags consist of head and body; furthermore, the <body> tag consists of a <p> tag, whereas the <head> tag consists of the <title> tag. Simple! JavaScript has access to the DOM directly, and can modify the connections between these nodes, add nodes, remove nodes, change contents, attach event listeners, and so on.

What is the Document Object Model (DOM)?
Simply put, the DOM is a way to represent HTML or XML documents as nodes. This makes it easier for other programming languages to connect to a DOM-following page and modify it accordingly. To be clear, the DOM is not a programming language; it provides JavaScript with a way to interact with web pages. You can think of it as a standard. Every element is part of the DOM tree, which can be accessed and modified with APIs exposed to JavaScript. The DOM is not restricted to being accessed only by JavaScript. It is language-independent, and there are several modules available in various languages to parse the DOM (just like JavaScript), including PHP, Python, Java, and so on.

As said previously, the DOM provides JavaScript with a way to interact with it. How? Well, accessing the DOM is as easy as accessing predefined objects in JavaScript: document. The DOM API specifies what you'll find inside the document object. The document object essentially gives JavaScript access to the DOM tree formed by your HTML document. If you notice, you cannot access any element at all without actually accessing the document object first.

DOM methods/properties
All HTML elements are objects in JavaScript. The most commonly used object is the document object. It has the whole DOM tree attached to it, and you can query it for elements. Let's look at some very common examples of these methods:
- getElementById method
- getElementsByTagName method
- getElementsByClassName method
- querySelector method
- querySelectorAll method
By no means is this an exhaustive list of all available methods. However, this list should at least get you started with DOM manipulation. Use MDN as your reference for various other methods. Here's the link: https://developer.mozilla.org/en-US/docs/Web/API/Document#Methods.
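Before we move on to the HTML5 APIs, here is a quick, hedged sketch (not from the book excerpt) of the lookup methods listed above in action; the element IDs and class names are made up for illustration:

// Hypothetical markup: <p id="intro" class="note">Hi</p> <p class="note">Bye</p>
const intro = document.getElementById('intro');         // one element, by id
const notes = document.getElementsByClassName('note');  // live HTMLCollection
const firstNote = document.querySelector('.note');      // first match for a CSS selector
const allNotes = document.querySelectorAll('p.note');   // static NodeList of all matches
console.log(intro.textContent, notes.length, firstNote === intro, allNotes.length);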
Modern JavaScript browser APIs
HTML5 brought a lot of support for some awesome APIs in JavaScript, right from the start. Although some APIs were released with HTML5 itself (such as the Canvas API), some were added later (such as the Fetch API). Let's see some of these APIs and how to use them with some code examples.

Page Visibility API - is the user still on the page?
The Page Visibility API allows developers to run specific code whenever the page the user is on gains or loses focus. Imagine you run a game-hosting site and want to pause the game whenever the user loses focus on your tab. This is the way to go!

function pageChanged() {
  if (document.hidden) {
    console.log('User is on some other tab/out of focus'); // line #1
  } else {
    console.log('Hurray! User returned'); // line #2
  }
}
document.addEventListener("visibilitychange", pageChanged);

We're adding an event listener to the document; it fires whenever the page is changed. Sure, the pageChanged function gets an event object as well in the argument, but we can simply use the document.hidden property, which returns a Boolean value depending on the page's visibility at the time the code was called. You'll add your pause game code at line #1 and your resume game code at line #2.

navigator.onLine API - the user's network status
The navigator.onLine API tells you if the user is online or not. Imagine building a multiplayer game and you want the game to automatically pause if the user loses their internet connection. This is the way to go here!

function state(e) {
  if (navigator.onLine) {
    console.log('Cool we\'re up');
  } else {
    console.log('Uh! we\'re down!');
  }
}
window.addEventListener('offline', state);
window.addEventListener('online', state);

Here, we're attaching two event listeners to the window global. We want the browser to call the state function every time the user goes offline or online. We can check whether the user is offline or online with navigator.onLine, which returns a Boolean value of true if there's an internet connection, and false if there's not.

Clipboard API - programmatically manipulating the clipboard
The Clipboard API finally allows developers to copy to a user's clipboard without those nasty Adobe Flash plugin hacks that were not cross-browser/cross-device-friendly. Here's how you'll copy a selection to a user's clipboard:

<script>
function copy2Clipboard(text) {
  const textarea = document.createElement('textarea');
  textarea.value = text;
  document.body.appendChild(textarea);
  textarea.focus();
  textarea.setSelectionRange(0, text.length);
  document.execCommand('copy');
  document.body.removeChild(textarea);
}
</script>
<button onclick="copy2Clipboard('Something good!')">Click me!</button>

First of all, we need the user to actually click the button. Once the user clicks the button, we call a function that creates a textarea in the background using the document.createElement method. The script then sets the value of the textarea to the passed text (this is pretty good!). We then focus on that textarea and select all the contents inside it. Once the contents are selected, we execute a copy with document.execCommand('copy'); this copies the current selection in the document to the clipboard. Since, right now, the value inside the textarea is selected, it gets copied to the clipboard. Finally, we remove the textarea from the document so that it doesn't disrupt the document layout. You cannot trigger copy2Clipboard without user interaction.
I mean, obviously you can, but document.execCommand('copy') will not work if the event does not come from the user (click, double-click, and so on). This is a security implementation so that a user's clipboard is not messed around with by every website that they visit.

The Canvas API - the web's drawing board
HTML5 finally brought in support for <canvas>, a standard way to draw graphics on the web! Canvas can be used for pretty much everything related to graphics you can think of, from digitally signing with a pen, to creating 3D games on the web (3D games require WebGL knowledge; interested? - visit http://bit.ly/webgl-101). Let's look at the basics of the Canvas API with a simple example:

<canvas id="canvas" width="100" height="100"></canvas>
<script>
  const canvas = document.getElementById("canvas");
  const ctx = canvas.getContext("2d");
  ctx.moveTo(0, 0);
  ctx.lineTo(100, 100);
  ctx.stroke();
</script>

This renders the following: how does it do this? Firstly, document.getElementById('canvas') gives us the reference to the canvas on the document. Then we get the context of the canvas. This is a way to say what we want to do with the canvas. You could put a 3D value there, of course! That is indeed the case when you're doing 3D rendering with WebGL and canvas. Once we have a reference to our context, we can do a bunch of things using the methods provided by the API out of the box. Here we moved the cursor to the (0, 0) coordinates. Then we drew a line till (100, 100) (which is basically a diagonal on the square canvas). Then we called stroke to actually draw that on our canvas. Easy! Canvas is a wide topic and deserves a book of its own! If you're interested in developing awesome games and apps with Canvas, I recommend you start off with the MDN docs: http://bit.ly/canvas-html5.

The Fetch API - promise-based HTTP requests
One of the coolest async APIs introduced in browsers is the Fetch API, which is the modern replacement for the XMLHttpRequest API. Have you ever found yourself using jQuery just for simplifying AJAX requests with $.ajax? If you have, then this is surely a golden API for you, as it is natively easier to code and read! However, fetch comes natively, hence there are performance benefits. Let's see how it works:

fetch(link)
  .then(data => {
    // do something with data
  })
  .catch(err => {
    // do something with error
  });

Awesome! So fetch uses promises! If that's the case, we can combine it with async/await to make it look completely synchronous and easy to read!

<img id="img1" alt="Mozilla logo" />
<img id="img2" alt="Google logo" />

const get2Images = async () => {
  const image1 = await fetch('https://cdn.mdn.mozilla.net/static/img/web-docs-sprite.22a6a085cf14.svg');
  const image2 = await fetch('https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png');
  console.log(image1); // gives us the response as an object
  const blob1 = await image1.blob();
  const blob2 = await image2.blob();
  const url1 = URL.createObjectURL(blob1);
  const url2 = URL.createObjectURL(blob2);
  document.getElementById('img1').src = url1;
  document.getElementById('img2').src = url2;
  return 'complete';
}
get2Images().then(status => console.log(status));

The line console.log(image1) will print the following: you can see the image1 response provides tons of information about the request. It has an interesting field, body, which is actually a ReadableStream: a byte stream of data that can be cast to a Binary Large Object (BLOB) in our case.
A blob object represents a file-like object of immutable, raw data. After getting the response, we convert it into a blob object so that we can actually use it as an image. Here, fetch is actually fetching us the image directly, so we can serve it to the user as a blob (without hot-linking it to the main website). Thus, this could be done on the server side, and the blob data could be passed down a WebSocket or something similar.

Fetch API customization
The Fetch API is highly customizable. You can even include your own headers in the request. Suppose you've got a site where only authenticated users with a valid token can access an image. Here's how you'll add a custom header to your request:

const headers = new Headers();
headers.append("Allow-Secret-Access", "yeah-because-my-token-is-1337");
const config = { method: 'POST', headers };
const req = new Request('http://myawesomewebsite.awesometld/secretimage.jpg', config);
fetch(req)
  .then(img => img.blob())
  .then(blob => myImageTag.src = URL.createObjectURL(blob));

Here, we added a custom header to our Request and then created something called a Request object (an object that has information about our request). The first parameter, that is, http://myawesomewebsite.awesometld/secretimage.jpg, is the URL and the second is the configuration. Here are some other configuration options:
- Credentials: Used to pass cookies in a Cross-Origin Resource Sharing (CORS)-enabled server on cross-domain requests.
- Method: Specifies the request method (GET, POST, HEAD, and so on).
- Headers: Headers associated with the request.
- Integrity: A security feature that consists of a (possibly) SHA-256 representation of the file you're requesting, in order to verify whether the request has been tampered with (data is modified) or not. Probably not a lot to worry about unless you're building something on a very large scale and not on HTTPS.
- Redirect: Redirect can have three values: follow (will follow the URL redirects), error (will throw an error if the URL redirects), and manual (doesn't follow the redirect but returns a filtered response that wraps the redirect response).
- Referrer: The URL that appears as a referrer header in the HTTP request.

Accessing and modifying history with the history API
You can access a user's history to some level and modify it according to your needs using the history API. It consists of the length and state properties:

console.log(history, history.length, history.state);

The output is as follows:

{length: 4, scrollRestoration: "auto", state: null} 4 null

In your case, the length could obviously be different depending on how many pages you've visited from that particular tab. history.state can contain anything you like (we'll come to its use case soon). Before looking at some handy history methods, let us take a look at the window.onpopstate event.

Handling window.onpopstate events
The window.onpopstate event is fired automatically by the browser when a user navigates between history states that a developer has set. This event is important to handle when you push to the history object and then later retrieve information whenever the user presses the back/forward button of the browser. Here's how we'll program a simple popstate event:

window.addEventListener('popstate', e => {
  console.log(e.state); // state data of history (remember history.state?)
});

Now we'll discuss some methods associated with the history object.

Modifying history - the history.go(distance) method
history.go(x) is equivalent to the user clicking his forward button x times in the browser.
You can specify the distance to move, that is, history.go(5); this is equivalent to the user hitting the forward button in the browser five times. Similarly, you can specify negative values to make it move backward. Specifying 0 or no value will simply refresh the page:

history.go(5);  // forwards the browser 5 times
history.go(-1); // similar effect to clicking the back button
history.go(0);  // refreshes page
history.go();   // refreshes page

Jumping ahead - the history.forward() method
This method is simply the equivalent of history.go(1). This is handy when you want to just push the user to the page he/she is coming from. One use case of this is when you create a full-screen immersive web application where the screen has some minimal controls that play with the history behind the scenes:

if (awesomeButtonClicked && userWantsToMoveForward()) {
  history.forward();
}

Going back - the history.back() method
This method is simply the equivalent of history.go(-1); a negative number makes the history go backwards. Again, this is just a simple (and numberless) way to go back to a page the user came from. Its application could be similar to the forward button, that is, creating a full-screen web app and providing the user with an interface to navigate by.

Pushing on the history - history.pushState()
This is really fun. You can change the browser URL without hitting the server with an HTTP request. If you run the following JS in your browser, your browser will change the path from whatever it is (domain.com/abc/egh) to /i_am_awesome (domain.com/i_am_awesome) without actually navigating to any page:

history.pushState({myName: "Mehul"}, "This is title of page", "/i_am_awesome");
history.pushState({page2: "Packt"}, "This is page2", "/page2_packt"); // <-- state is currently here

The History API doesn't care whether the page actually exists on the server or not. It'll just replace the URL as it is instructed. The popstate event, when triggered by the browser's back/forward button, will fire the function below, and we can program it like this:

window.onpopstate = e => {
  // when this is called, state is already updated.
  // e.state is the new state. It is null if it is the root state.
  if (e.state !== null) {
    console.log(e.state);
  } else {
    console.log("Root state");
  }
};

To run this code, run the onpopstate event first, then the two lines of history.pushState previously. Then press your browser's back button. You should see something like {myName: "Mehul"}, which is the information related to the parent state. Press the back button one more time and you'll see the message Root state. pushState does not fire the onpopstate event; only the browser's back/forward buttons do.

Pushing on the history stack - history.replaceState()
The history.replaceState() method is exactly like history.pushState(); the only difference is that it replaces the current page with another. That is, if you use history.pushState() and press the back button, you'll be directed to the page you came from. However, when you use history.replaceState() and you press the back button, you are not directed to the page you came from, because it is replaced with the new one on the stack. Here's an example of working with the replaceState method:

history.replaceState({myName: "Mehul"}, "This is title of page", "/i_am_awesome");

This replaces (instead of pushing) the current state with the new state.
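As a small, hedged sketch (not from the book excerpt) of how these pieces combine into the pattern client-side routers are built on; the data-link attribute and the render function are made-up names for illustration:

// Assumes links like <a href="/about" data-link>About</a> exist on the page.
document.addEventListener('click', e => {
  const link = e.target.closest('a[data-link]');
  if (!link) return;
  e.preventDefault(); // stop the full page reload
  history.pushState({ path: link.pathname }, '', link.pathname);
  render(link.pathname); // hypothetical view-rendering function
});

window.addEventListener('popstate', e => {
  // re-render the correct view when the user presses back/forward
  render(e.state ? e.state.path : '/');
});

function render(path) {
  document.body.dataset.route = path; // placeholder for real view logic
}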
Although using the History API directly in your code may not be beneficial to you right now, many frameworks and libraries such as React, under the hood, use the History API to create a seamless, reload-less, smooth experience for the end user. If you found this article useful, do check out the book Learn ECMAScript, Second Edition, to learn the ECMAScript standards for designing quality web applications.
What's new in ECMAScript 2018 (ES9)?
8 recipes to master Promises in ECMAScript 2018
Build a foodie bot with JavaScript

AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

Kunal Chaudhari
06 Jun 2018
19 min read
An AI character system needs to be aware of its environment, such as where the obstacles are, where the enemy is, whether the enemy is visible in the player's sight, and so on. The quality of our non-player character's (NPC's) AI completely depends on the information it can get from the environment. Nothing breaks the level of immersion in a game like an NPC getting stuck behind a wall. Based on the information the NPC can collect, the AI system can decide which logic to execute in response to that data. If the sensory systems do not provide enough data, or the AI system is unable to properly take action on that data, the agent can begin to glitch, or behave in a way contrary to what the developer, or more importantly the player, would expect. Some games have become infamous for their comically bad AI glitches, and it's worth a quick internet search to find some videos of AI glitches for a good laugh.

In this article, we'll learn to implement AI behavior using the concept of a sensory system similar to what living entities have. We will learn the basics of sensory systems, along with some of the different sensory systems that exist. You are reading an extract from Unity 2017 Game AI Programming - Third Edition, written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe.

Basic sensory systems
Our agent's sensory systems should believably emulate real-world senses such as vision, sound, and so on, to build a model of its environment, much like we do as humans. Have you ever tried to navigate a room in the dark after shutting off the lights? It gets more and more difficult as you move from your initial position, because your perspective shifts and you have to rely more and more on your fuzzy memory of the room's layout. While our senses rely on and take in a constant stream of data to navigate their environment, our agent's AI is a lot more forgiving, giving us the freedom to examine the environment at predetermined intervals. This allows us to build a more efficient system in which we can focus only on the parts of the environment that are relevant to the agent.

The concept of a basic sensory system is that there will be two components, Aspect and Sense. Our AI characters will have senses, such as perception, smell, and touch. These senses will look out for specific aspects such as enemies and bandits. For example, you could have a patrol guard AI with a perception sense that's looking for other game objects with an enemy aspect, or it could be a zombie entity with a smell sense looking for other entities with an aspect defined as a brain.

For our demo, this is basically what we are going to implement: a base interface called Sense that will be implemented by other custom senses. In this article, we'll implement perspective and touch senses. Perspective is what animals use to see the world around them. If our AI character sees an enemy, we want to be notified so that we can take some action. Likewise with touch: when an enemy gets too close, we want to be able to sense that, almost as if our AI character can hear that the enemy is nearby. Then we'll write a minimal Aspect class that our senses will be looking for.

Cone of sight
A raycast is a feature in Unity that allows you to determine which objects are intersected by a line cast from a point in a given direction. While this is a fairly efficient way to handle visual detection in a simple way, it doesn't accurately model the way vision works for most entities.
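As a quick, hedged sketch (not from the book) of what such a raycast check can look like in Unity, consider the following; the target field and the viewDistance value are illustrative assumptions:

using UnityEngine;

// Minimal line-of-sight check: can this agent "see" its target?
public class LineOfSight : MonoBehaviour
{
    public Transform target;          // assumed to be assigned in the inspector
    public float viewDistance = 100f;

    bool CanSeeTarget()
    {
        Vector3 direction = target.position - transform.position;
        RaycastHit hit;
        // The ray stops at the first collider it hits, so a wall between
        // the agent and the target correctly blocks "vision".
        if (Physics.Raycast(transform.position, direction, out hit, viewDistance))
        {
            return hit.transform == target;
        }
        return false;
    }
}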
An alternative to using the line of sight is using a cone-shaped field of vision. As the following figure illustrates, the field of vision is literally modeled using a cone shape. This can be in 2D or 3D, as appropriate for your type of game. The preceding figure illustrates the concept of a cone of sight: beginning with the source, that is, the agent's eyes, the cone grows, but becomes less accurate with distance, as represented by the fading color of the cone. The actual implementation of the cone can vary from a basic overlap test to a more complex, realistic model mimicking eyesight. In a simple implementation, it is only necessary to test whether an object overlaps with the cone of sight, ignoring distance or periphery. A complex implementation mimics eyesight more closely; as the cone widens away from the source, the field of vision grows, but the chance of seeing things toward the edges of the cone diminishes compared to those near the center of the source.

Hearing, feeling, and smelling using spheres
One very simple yet effective way of modeling sounds, touch, and smell is via the use of spheres. For sounds, for example, we can imagine the center as being the source, with the loudness dissipating the farther from the center the listener is. Inversely, the listener can be modeled instead of, or in addition to, the source of the sound. The listener's hearing is represented by a sphere, and the sounds closest to the listener are more likely to be "heard." We can modify the size and position of the sphere relative to our agent to accommodate feeling and smelling. The following figure represents our sphere and how our agent fits into the setup. As with sight, the probability of an agent registering the sensory event can be modified based on the distance from the sensor, or it can be a simple overlap event, where the sensory event is always detected as long as the source overlaps the sphere.
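As a hedged sketch of the simplest sphere-based overlap check described above (not from the book; the radius is an arbitrary assumption, and Aspect is the component class this article defines later):

using UnityEngine;

// Sphere-based "hearing": gather everything overlapping the agent's hearing radius.
public class HearingSphere : MonoBehaviour
{
    public float hearingRadius = 10f;

    void Update()
    {
        Collider[] heard = Physics.OverlapSphere(transform.position, hearingRadius);
        foreach (Collider c in heard)
        {
            Aspect aspect = c.GetComponent<Aspect>();
            if (aspect != null)
            {
                Debug.Log("Heard something with aspect: " + aspect.aspectType);
            }
        }
    }
}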
Expanding AI through omniscience
In a nutshell, omniscience is really just a way to make your AI cheat. While your agent doesn't necessarily know everything, it simply means that it can know anything. In some ways, this can seem like the antithesis of realism, but often the simplest solution is the best solution. Allowing our agent access to seemingly hidden information about its surroundings or other entities in the game world can be a powerful tool to provide an extra layer of complexity. In games, we tend to model abstract concepts using concrete values. For example, we may represent a player's health with a numeric value ranging from 0 to 100. Giving our agent access to this type of information allows it to make realistic decisions, even though having access to that information is not realistic. You can also think of omniscience as your agent being able to use the force, or sense events in your game world without having to physically experience them. While omniscience is not necessarily a specific pattern or technique, it's another tool in your toolbox as a game developer to cheat a bit and make your game more interesting by, in essence, bending the rules of AI and giving your agent data that it may not otherwise have had access to through physical means.

Getting creative with sensing
While cones, spheres, and lines are among the most basic ways an agent can see, hear, and perceive its environment, they are by no means the only ways to implement these senses. If your game calls for other types of sensing, feel free to combine these patterns. Want to use a cylinder or a sphere to represent a field of vision? Go for it. Want to use boxes to represent the sense of smell? Sniff away! Using the tools at your disposal, come up with creative ways to model sensing in terms relative to your player. Combine different approaches to create unique gameplay mechanics for your games by mixing and matching these concepts. For example, a magic-sensitive but blind creature could completely ignore a character right in front of them until they cast or receive the effect of a magic spell. Maybe certain NPCs can track the player using smell, and walking through a collider marked water can clear the scent from the player so that the NPC can no longer track him. As you progress through the book, you'll be given all the tools to pull these and many other mechanics off (sensing, decision-making, pathfinding, and so on). As we cover some of these techniques, start thinking about creative twists for your game.

Setting up the scene
In order to get started with implementing the sensing system, you can jump right into the example provided for this article, or set up the scene yourself by following these steps:
1. Create a few barriers to block the line of sight from our AI character to the tank. These will be short but wide cubes grouped under an empty game object called Obstacles.
2. Add a plane to be used as a floor.
3. Add a directional light so that we can see what is going on in our scene.
As you can see in the example, there is a target 3D model, which we use for our player, and we represent our AI agent using a simple cube. We will also have a Target object to show us where the tank will move to in our scene. For simplicity, our example provides a point light as a child of the Target so that we can easily see our target destination in the game view. Our scene hierarchy will look similar to the following screenshot after you've set everything up correctly:

Now we will position the tank, the AI character, and walls randomly in our scene. Increase the size of the plane to something that looks good. Fortunately, in this demo, our objects float, so nothing will fall off the plane. Also, be sure to adjust the camera so that we can have a clear view of the following scene:

With the essential setup out of the way, we can begin tackling the code for driving the various systems.

Setting up the player tank and aspect
Our Target object is a simple sphere game object with the mesh renderer removed so that we end up with only the Sphere Collider. Look at the following code in the Target.cs file:

using UnityEngine;

public class Target : MonoBehaviour
{
    public Transform targetMarker;

    void Start() { }

    void Update()
    {
        int button = 0;
        // Get the point of the hit position when the mouse is being clicked
        if (Input.GetMouseButtonDown(button))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hitInfo;
            if (Physics.Raycast(ray.origin, ray.direction, out hitInfo))
            {
                Vector3 targetPosition = hitInfo.point;
                targetMarker.position = targetPosition;
            }
        }
    }
}

You'll notice we left in an empty Start method in the code. While there is a cost to having empty Start, Update, and other MonoBehaviour events that don't do anything, we can sometimes choose to leave the Start method in during development, so that the component shows an enable/disable toggle in the inspector. Attach this script to our Target object, which is what we assigned in the inspector to the targetMarker variable.
The script detects the mouse click event and then, using a raycast, it detects the mouse click point on the plane in 3D space. After that, it updates the Target object to that position in the world space in the scene. A raycast is a feature of the Unity Physics API that shoots a virtual ray from a given origin towards a given direction, and returns data on any colliders hit along the way.

Implementing the player tank
Our player tank is the simple tank model with a kinematic rigid body component attached. The rigid body component is needed in order to generate trigger events whenever we do collision detection with any AI characters. The first thing we need to do is to assign the tag Player to our tank. The isKinematic flag in Unity's Rigidbody component makes it so that external forces are ignored, so that you can control the Rigidbody entirely from code or from an animation, while still having access to the Rigidbody API. The tank is controlled by the PlayerTank script, which we will create in a moment. This script retrieves the target position on the map and updates its destination point and direction accordingly. The code in the PlayerTank.cs file is as follows:

using UnityEngine;

public class PlayerTank : MonoBehaviour
{
    public Transform targetTransform;
    public float targetDistanceTolerance = 3.0f;

    private float movementSpeed;
    private float rotationSpeed;

    // Use this for initialization
    void Start()
    {
        movementSpeed = 10.0f;
        rotationSpeed = 2.0f;
    }

    // Update is called once per frame
    void Update()
    {
        if (Vector3.Distance(transform.position, targetTransform.position) < targetDistanceTolerance)
        {
            return;
        }
        Vector3 targetPosition = targetTransform.position;
        targetPosition.y = transform.position.y;
        Vector3 direction = targetPosition - transform.position;
        Quaternion tarRot = Quaternion.LookRotation(direction);
        transform.rotation = Quaternion.Slerp(transform.rotation, tarRot, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }
}

The preceding screenshot shows us a snapshot of our script in the inspector once applied to our tank. This script queries the position of the Target object on the map and updates its destination point and direction accordingly. After we assign this script to our tank, be sure to assign our Target object to the targetTransform variable.

Implementing the Aspect class
Next, let's take a look at the Aspect.cs class. Aspect is a very simple class with just one public enum of type AspectTypes called aspectType. That's all of the variables we need in this component. Whenever our AI character senses something, we'll check the aspectType to see whether it's the aspect that the AI has been looking for. The code in the Aspect.cs file looks like this:

using UnityEngine;

public class Aspect : MonoBehaviour
{
    public enum AspectTypes
    {
        PLAYER,
        ENEMY,
    }
    public AspectTypes aspectType;
}

Attach this aspect script to our player tank and set the aspectType to PLAYER, as shown in the following screenshot:

Creating an AI character
Our NPC will be roaming around the scene in a random direction. It'll have the following two senses:
- The perspective sense will check whether the tank aspect is within a set visible range and distance
- The touch sense will detect if the enemy aspect has collided with its box collider, which we'll be adding to the tank in a later step
Because our player tank will have the PLAYER aspect type, the NPC will be looking for any aspectType not equal to its own.
The code in the Wander.cs file is as follows:

using UnityEngine;

public class Wander : MonoBehaviour
{
    private Vector3 targetPosition;
    private float movementSpeed = 5.0f;
    private float rotationSpeed = 2.0f;
    private float targetPositionTolerance = 3.0f;
    private float minX;
    private float maxX;
    private float minZ;
    private float maxZ;

    void Start()
    {
        minX = -45.0f;
        maxX = 45.0f;
        minZ = -45.0f;
        maxZ = 45.0f;
        //Get Wander Position
        GetNextPosition();
    }

    void Update()
    {
        if (Vector3.Distance(targetPosition, transform.position) <= targetPositionTolerance)
        {
            GetNextPosition();
        }
        Quaternion targetRotation = Quaternion.LookRotation(targetPosition - transform.position);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }

    void GetNextPosition()
    {
        targetPosition = new Vector3(Random.Range(minX, maxX), 0.5f, Random.Range(minZ, maxZ));
    }
}

The Wander script generates a new random position in a specified range whenever the AI character reaches its current destination point. The Update method will then rotate our enemy and move it toward this new destination. Attach this script to our AI character so that it can move around in the scene. The Wander script is rather simplistic.

Using the Sense class
The Sense class is the interface of our sensory system that the other custom senses can implement. It defines two virtual methods, Initialize and UpdateSense, which will be implemented in custom senses, and are executed from the Start and Update methods, respectively. Virtual methods are methods that can be overridden using the override modifier in derived classes; unlike abstract methods, virtual methods do not require that you override them. The code in the Sense.cs file looks like this:

using UnityEngine;

public class Sense : MonoBehaviour
{
    public bool enableDebug = true;
    public Aspect.AspectTypes aspectName = Aspect.AspectTypes.ENEMY;
    public float detectionRate = 1.0f;

    protected float elapsedTime = 0.0f;

    protected virtual void Initialize() { }
    protected virtual void UpdateSense() { }

    // Use this for initialization
    void Start()
    {
        elapsedTime = 0.0f;
        Initialize();
    }

    // Update is called once per frame
    void Update()
    {
        UpdateSense();
    }
}

The basic properties include its detection rate to execute the sensing operation, as well as the name of the aspect it should look for. This script will not be attached to any of our objects, since we'll be deriving from it for our actual senses.

Giving a little perspective
The perspective sense will detect whether a specific aspect is within its field of view and visible distance. If it sees anything, it will take the specified action, which in this case is to print a message to the console.
The code in the Perspective.cs file looks like this:

using UnityEngine;

public class Perspective : Sense
{
    public int fieldOfView = 45;
    public int viewDistance = 100;

    private Transform playerTransform;
    private Vector3 rayDirection;

    protected override void Initialize()
    {
        playerTransform = GameObject.FindGameObjectWithTag("Player").transform;
    }

    protected override void UpdateSense()
    {
        elapsedTime += Time.deltaTime;
        if (elapsedTime >= detectionRate)
        {
            DetectAspect();
        }
    }

    //Detect perspective field of view for the AI Character
    void DetectAspect()
    {
        RaycastHit hit;
        rayDirection = playerTransform.position - transform.position;
        if ((Vector3.Angle(rayDirection, transform.forward)) < fieldOfView)
        {
            // Detect if player is within the field of view
            if (Physics.Raycast(transform.position, rayDirection, out hit, viewDistance))
            {
                Aspect aspect = hit.collider.GetComponent<Aspect>();
                if (aspect != null)
                {
                    //Check the aspect
                    if (aspect.aspectType != aspectName)
                    {
                        print("Enemy Detected");
                    }
                }
            }
        }
    }

We need to implement the Initialize and UpdateSense methods that will be called from the Start and Update methods of the parent Sense class, respectively. In the DetectAspect method, we first check the angle between the player and the AI's current direction. If it's in the field of view range, we shoot a ray in the direction that the player tank is located. The ray length is the value of the visible distance property. The Raycast method will return when it first hits another object. This way, even if the player is in the visible range, the AI character will not be able to see the player if they're hidden behind a wall. We then check for an Aspect component, and the check succeeds only if the object that was hit has an Aspect component and its aspectType is different from the sense's own. The OnDrawGizmos method draws lines based on the perspective field of view angle and viewing distance so that we can see the AI character's line of sight in the editor window during play testing. Attach this script to our AI character and be sure that the aspect type is set to ENEMY. This method can be illustrated as follows (the closing brace ends the Perspective class):

    void OnDrawGizmos()
    {
        if (playerTransform == null)
        {
            return;
        }
        Debug.DrawLine(transform.position, playerTransform.position, Color.red);
        Vector3 frontRayPoint = transform.position + (transform.forward * viewDistance);
        //Approximate perspective visualization
        Vector3 leftRayPoint = frontRayPoint;
        leftRayPoint.x += fieldOfView * 0.5f;
        Vector3 rightRayPoint = frontRayPoint;
        rightRayPoint.x -= fieldOfView * 0.5f;
        Debug.DrawLine(transform.position, frontRayPoint, Color.green);
        Debug.DrawLine(transform.position, leftRayPoint, Color.green);
        Debug.DrawLine(transform.position, rightRayPoint, Color.green);
    }
}

Touching is believing
The next sense we'll be implementing is Touch.cs, which triggers when the player tank entity is within a certain area near the AI entity. Our AI character has a box collider component and its IsTrigger flag is on. We need to implement the OnTriggerEnter event, which will be called whenever another collider enters the collision area of this game object's collider. Since our tank entity also has collider and rigid body components, collision events will be raised as soon as the colliders of the AI character and the player tank collide. Unity provides two other trigger events besides OnTriggerEnter: OnTriggerExit and OnTriggerStay. Use these to detect when a collider leaves a trigger, and to fire off every frame that a collider is inside the trigger, respectively.
The code in the Touch.cs file is as follows:

using UnityEngine;

public class Touch : Sense
{
    void OnTriggerEnter(Collider other)
    {
        Aspect aspect = other.GetComponent<Aspect>();
        if (aspect != null)
        {
            //Check the aspect
            if (aspect.aspectType != aspectName)
            {
                print("Enemy Touch Detected");
            }
        }
    }
}

Our sample NPC and tank have BoxCollider components on them already. The NPC has its sensor collider set to IsTrigger = true. If you're setting up the scene on your own, make sure you add the BoxCollider component yourself, and that it covers a wide enough area to trigger easily for testing purposes. Our trigger can be seen in the following screenshot: the previous screenshot shows the box collider on our enemy AI that we'll use to trigger the touch sense event. In the following screenshot, we can see how our AI character is set up. For demo purposes, we just print out that the enemy aspect has been detected by the touch sense, but in your own games, you can implement any events and logic that you want.

Testing the results
Hit play in the Unity editor and move the player tank near the wandering AI NPC by clicking on the ground to direct the tank to move to the clicked location. You should see the Enemy Touch Detected message in the console log window whenever our AI character gets close to our player tank. The previous screenshot shows an AI agent with touch and perspective senses looking for another aspect. Move the player tank in front of the NPC, and you'll get the Enemy Detected message. If you go to the editor view while running the game, you should see the debug lines being rendered. This is because of the OnDrawGizmos method implemented in the perspective Sense class.

To summarize, we introduced the concept of using sensors and implemented two distinct senses, perspective and touch, for our AI character. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to explore the brand-new features in Unity 2017.
How to use arrays, lists, and dictionaries in Unity for 3D game development
How to create non-player Characters (NPC) with Unity 2018
How to use Bootstrap grid system for responsive website design?

Savia Lobo
18 May 2018
12 min read
Bootstrap Origins
In 2011, Bootstrap was created by two Twitter employees (Mark Otto and Jacob Thornton) to address the issue of fragmentation of internal tools/platforms. Bootstrap aimed to provide consistency among different web applications that were internally developed at Twitter, to reduce redundancy and increase adaptability and reusability. As digital creators, we should always aim to make our applications adaptable and reusable. This will help keep coherency between applications and speed up processes, as we won't need to create basic foundations over and over again. In today's tutorial, you will learn what Bootstrap is, how it relates to responsive web design, and its importance to the web industry.

When Twitter Blueprint was born, it provided a way to document and share common design patterns/assets within Twitter. This alone is an amazing feature that would make Bootstrap an extremely useful framework. With this, more internal developers began contributing towards the Bootstrap project as part of Hackathon week, and the project just exploded. Not long after, it was renamed "Bootstrap" as we know and love it today, and was released as an open source project to the community. A core team led by Mark and Jacob, along with a passionate and growing community of developers, helped to accelerate the growth of Bootstrap. In early 2012, after a lot of contributions from the core team and the community, Bootstrap 2 was born. It had come a long way from being a framework for providing internal consistency among Twitter tools. It was now a responsive framework using a 12-column grid system. It also provided inbuilt support for Glyphicons and a plethora of other new components. In 2013, Bootstrap 3 was released with a mobile-first approach to design and a fully redesigned set of components using the immensely popular flat design. This is the version many websites use today, and it is very suitable for most developers. Bootstrap 4 is the latest stable release. This article is an excerpt taken from the book, 'Responsive Web Design by Example', written by Frahaan Hussain.

Why use Bootstrap?
You probably have a reasonable idea of why you would use Bootstrap for developing websites after reading its history, but there is more to it. Simply put, it provides the following:
- A responsive grid, using the design philosophies.
- Cross-browser compatibility, using Normalize.css to ensure elements render consistently across all browsers (which isn't a very easy task). You might be wondering why it's difficult. Simply put, there are several different browsers, each with a plethora of versions, which all render content differently. I've seen some browsers put a border around an image by default, whereas some browsers don't. This type of inconsistency is very bad for user experience.
- A plethora of UI components. By providing polished UI components, we as developers are able to bring our creativity to life in a much easier way. These components usually allow a team to increase their development velocity, since they start from a solid, tried and tested foundation. They not only provide good design, but they are usually implemented using best practices in terms of performance and accessibility.
- A very compact size with only a small footprint.
- Really fast to develop with; it doesn't get in the way like many other frameworks, but allows your creativity to shine through.
- Extremely easy to start using in your website.
- Bundles common JavaScript plugins such as jQuery.
- Excellent documentation.
- Customizable, allowing you to remove any unnecessary features.
- An amazing community that is always ready, 24/7, to help.

It's pretty clear now that Bootstrap is an amazing framework and that it will help provide consistency among our projects and aid cross-browser responsive design. But why use Bootstrap over other frameworks? There are endless responsive frameworks like Bootstrap out there, such as Foundation, W3.CSS, and Skeleton, to mention a few. Bootstrap, however, was one of the first responsive frameworks and is by far the most developed, with an ever-growing community. It has documentation online, both official and unofficial, and other frameworks aren't able to boast about their resources as much as Bootstrap can. Constantly updated, it is the right choice for any website developer. Also, most JavaScript frameworks, such as Angular and React, have bindings to Bootstrap components that will reduce the amount of code and time spent binding it with another framework. It can also be used with tools such as SASS to customize the provided components further.

Bootstrap's grid system
First, let's cover what a grid system is in general, regardless of the framework you choose to develop your amazing website on top of. Without using a framework, CSS would be used to implement the grid. However, a framework like Bootstrap handles all of the CSS side and provides us with easy-to-use classes. A responsive grid system is composed of two main elements:
- Columns: These are the horizontal containers for storing content on a single row
- Rows: These are top-level containers for storing columns
Your website will have at least one row, but it can have more. Each row can contain containers that span a set number of columns. For example, if the grid system had 100 columns, then a container that spans 50 would be half the width of the browser and/or parent element.

Basics of Bootstrap
Bootstrap's grid system consists of 12 columns that can be used to display content. Bootstrap also uses containers (methods for storing the website's content), rows, and columns to aid in the layout and alignment of the web page's content. All of these employ HTML classes for usage and will be explained very shortly. The purpose of each is as follows:
- Containers are used to group snippets of the website's content, and they, in turn, allow manipulation without disrupting the internal content's flow. There are two different types of containers:
  - .container: Used for a fixed width, which is set by Bootstrap
  - .container-fluid: Used for full width, to span the entire browser
- Rows are used to horizontally group columns, which aids in lining up the site's content properly:
  - .row: There is only one type of row
- Columns, mentioned previously, are a way of setting how wide content should be.
The following are the classes used for columns:
- .col-xs: Designed to display the content only on extra-small screens
  - Max container width: none
  - Triggered when the browser width is below 576px
- .col-sm: Designed to display the content only on small screens
  - Max container width: 540px
  - Triggered when the browser width is above or equal to 576px and below 768px
- .col-md: Designed to display the content only on medium screens
  - Max container width: 720px
  - Triggered when the browser width is above or equal to 768px and below 992px
- .col-lg: Designed to display the content only on large screens
  - Max container width: 960px
  - Triggered when the browser width is above or equal to 992px and below 1200px
- .col-xl: Designed to display the content only on extra-large screens
  - Max container width: 1140px
  - Triggered when the browser width is above or equal to 1200px
- .col: Designed to be triggered on all screen sizes

To set a column's width, we simply append an integer ranging from 1 to 12 at the end of the class, like so:
- .col-6: Spans six columns on all screen sizes
- .col-md-6: Spans six columns only on medium screens and up
Later in this chapter, we will run through some examples of how to use these features and how they work together.

Usage and examples
To use the aforementioned features, the structure is as follows:
div with container class
  div with row class
    div with column class: Content
    div with column class: Content
    div with column class: Content
    div with column class: Content
  div with row class
    div with column class: Content
    div with column class: Content
    div with column class: Content
    div with column class: Content
    div with column class: Content
    div with column class: Content
The following examples may have some CSS styling applied; this does not affect their usage.

Equal width columns example
We will start off with a simple example that consists of one row and three equal columns on all screen sizes. You may be scratching your head in regards to the column classes in the code, as they have no numbers appended. This is an amazing feature that will come in useful very often. It allows us, as web developers, to add columns easily, without having to update the numbers, if the width of the columns is equal. In this example, there are three columns, which means the three divs equally span their thirds of the row.

Multi-row, equal-width columns example
Now let's extend the previous example to multiple rows. As you can see, by adding a new row, the columns automatically go to the next row. This is extremely useful for grouping content together.

Multi-row, equal-width columns without multiple rows example
The title of this example may seem confusing, but read it carefully: we will now cover creating multiple rows using only a single row class. This can be achieved with the help of a display utility class called w-100. The example shows that multiple row divs are not required for multiple rows. The result isn't exactly identical, though, as there is no gap between the rows. This is useful for separating content that is still similar. For example, on a social network, it is common to have posts, and each post will contain information such as its date, title, description, and so on. Each post could be its own row, but within the post, the individual pieces of information could be separated using this class.
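The code for these examples appeared as screenshots in the original article. As a hedged reconstruction (the class names are standard Bootstrap 4, but the exact markup is an assumption), a minimal multi-row, equal-width layout using the w-100 break might look like this:

<div class="container">
  <div class="row">
    <div class="col">1 of 3</div>
    <div class="col">2 of 3</div>
    <div class="col">3 of 3</div>
    <div class="w-100"></div> <!-- forces the following columns onto a new line -->
    <div class="col">1 of 2</div>
    <div class="col">2 of 2</div>
  </div>
</div>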
Differently sized columns
Up until now, we have only created rows with equal-width columns. These are useful, but not as useful as being able to set individual sizes. As mentioned in the Bootstrap grid system section, we can easily change the column width by appending a number ranging from 1-12 at the end of the col class. As the example code shows, setting the explicit width of a column is very easy, but this applies the width to all screen sizes. You may want it only to be applied on certain screen sizes. The next section will cover this.

Differently sized columns with screen size restrictions
Let's use the previous example and expand it to change size responsively on differently sized screens. On extra-large screens, the grid renders with the explicit widths; on all other screen sizes, it will appear with equal-width columns. Now we are beginning to use breakpoints that provide a way of creating multiple layouts with minimal extra code to make full use of the available real estate.

Mixing and matching
We aren't restricted to choosing only one breakpoint; we are able to set breakpoints for all the available screen sizes, from extra-small to extra-large, as the original article's figures illustrate. It isn't necessary for all divs to have the same breakpoints, or to have breakpoints at all.

Vertical alignment
The previous examples provide functionality for most use cases, but sometimes the need may arise to align objects vertically. This could technically be done with empty divs, but that wouldn't be a very elegant solution. Instead, there are alignment classes to help with this: you can align rows vertically in one of three positions, and we aren't restricted to only aligning rows; we can easily align columns relative to each other as well.

Horizontal alignment
Just as we vertically aligned content in the previous section, we can align content horizontally with equal ease.

Column offsetting
The need may arise to position content with a slight offset. If the content isn't centered or at the start or end, this can become problematic, but using column offsetting, we can overcome this issue. Simply add an offset class, with the screen size to target, and how many columns (1-12) the content should be offset by.

Grid wrap up
The examples covered so far will suffice for most websites. There are more techniques for manipulating the grid, which can be found on Bootstrap's website. If you tried any of the examples, you may have noticed cascading from smaller screen-size classes to larger screen-size classes. This occurs when there are no explicit classes set for a certain screen size.

Bootstrap components
There is a plethora of amazing components provided with Bootstrap, thus saving the time of creating them from scratch. There are components for dropdowns, buttons, images, and so much more.
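As a hedged taste of that component syntax (a reconstruction, not the original article's screenshots), a standard Bootstrap 4 button and dropdown look roughly like this:

<button type="button" class="btn btn-primary">Primary button</button>

<div class="dropdown">
  <button class="btn btn-secondary dropdown-toggle" type="button" data-toggle="dropdown">
    Dropdown
  </button>
  <div class="dropdown-menu">
    <a class="dropdown-item" href="#">Action</a>
    <a class="dropdown-item" href="#">Another action</a>
  </div>
</div>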
The usage is very similar to that of the grid system, and the same HTML elements we know and love are used with CSS classes to modify and display Bootstrap constructs. I won't go over every component that Bootstrap offers, as that would require an encyclopedia in itself, and many of the commonly used ones will be covered in future chapters through example projects. I would, however, recommend taking a look at some of the components on Bootstrap's website. If you have found this post useful, do check out this book, 'Responsive Web Design by Example', to build engaging responsive websites using frameworks like Bootstrap and upgrade your skills as a web designer.
Get ready for Bootstrap v4.1; Web developers to strap up their boots
Web Development with React and Bootstrap
Bootstrap 4 Objects, Components, Flexbox, and Layout

Customizing the Player Character

Packt
13 Oct 2016
18 min read
One of the key features of an RPG is to be able to customize your player character. In this article by Vahe Karamian, author of the book Building an RPG with Unity 5.x, we will take a look at how we can provide a means to achieve this. (For more resources related to this topic, see here.)

Once again, the approach and concept are universal, but the actual implementation might be a little different based on your model structure. Create a new scene and name it Character Customization. Create a Cube prefab and set it to the origin. Change the Scale of the cube to <5, 0.1, 5>; you can also change the name of the GameObject to Base. This will be the platform that our character model stands on while the player customizes his/her character before game play.

Drag and drop the fbx file representing your character model into the Scene View. The next few steps will entirely depend on your model hierarchy and structure as designed by the modeller. To illustrate the point, I have placed the same model in the scene twice. The one on the left is the model that has been configured to display only the basics; the model on the right is the model in its original state, as shown in the figure below:

Notice that this particular model I am using has everything attached. These include the different types of weapons, shoes, helmets, and armour. The instantiated prefab on the left-hand side has turned off all of the extras from the GameObject's hierarchy. Here is how the hierarchy looks in the Hierarchy View:

The model has a very extensive hierarchy in its structure; the figure above is a small snippet to demonstrate that you will need to navigate the structure and manually identify and enable or disable the mesh representing a particular part of the model.

Customizable Parts
Based on my model, I can customize a few things: the shoulder pads, the body type, the weapons and armor, the helmet and shoes, and finally the skin texture to give it different looks. Let's get a listing of all the different customizable items we have for our character:
- Shoulder Shields: there are four types
- Body Type: there are three body types; skinny, buff, and chubby
- Armor: knee pad, leg plate
- Shields: there are two types of shields
- Boots: there are two types of boots
- Helmet: there are four types of helmets
- Weapons: there are 13 different types of weapons
- Skins: there are 13 different types of skins

User Interface
Now that we know what our options are for customizing our player character, we can start thinking about the User Interface (UI) that will be used to enable the customization of the character. To design our UI, we will need to create a Canvas GameObject; this is done by right-clicking in the Hierarchy View and selecting Create | UI | Canvas. This will place a Canvas GameObject and an EventSystem GameObject in the Hierarchy View. It is assumed that you already know how to create a UI in Unity. I am going to use a Panel to group the customizable items. For the moment I will be using checkboxes for some items and scroll bars for the weapons and skin texture. The following figure illustrates how my UI for customization looks:

These UI elements will need to be integrated with event handlers that will perform the necessary actions for enabling or disabling certain parts of the character model.
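As a small, hedged illustration of that wiring (the article's full script appears below; the class and field names here are made up):

using UnityEngine;
using UnityEngine.UI;

// Minimal handler: one UI Toggle shows/hides one mesh on the model.
public class PartToggle : MonoBehaviour
{
    public GameObject part; // e.g. a shoulder pad mesh, assigned in the inspector

    // Hook this up to the Toggle's OnValueChanged(bool) event in the inspector.
    public void OnPartToggled(bool isOn)
    {
        part.SetActive(isOn);
    }
}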
For instance, using the UI I can select Shoulder Pad 4 and the buff body type, move the scroll bar until the Hammer weapon shows up, select the second Helmet checkbox, and select Shield 1 and Boot 2; my character will then look like the figure below.

We need a way to refer to each one of the meshes representing the different types of customizable objects on the model. This will be done through a C# script, which will need to keep track of all the parts we are going to manage for customization.

Some models will not have the extra meshes attached. You can always create empty GameObjects at particular locations on the model and dynamically instantiate the prefab representing your custom object at the given point. This can also be done for our current model; for instance, if we have a special space weapon that somehow gets dropped by the aliens in the game world, we can attach the weapon to our model through C# code. The important thing is to understand the concept; the rest is up to you!
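To make the dynamic-attachment idea concrete, here is a minimal sketch of instantiating a weapon prefab under an empty anchor GameObject placed at the model's hand. This is not code from the book's project; the weaponPrefab and handAnchor names are illustrative assumptions, and the anchor would be an empty GameObject you create at the appropriate bone on the model.

using UnityEngine;

public class WeaponAttacher : MonoBehaviour
{
    // Hypothetical references: a weapon prefab picked up in the world and
    // an empty GameObject positioned at the model's hand bone.
    public GameObject weaponPrefab;
    public Transform handAnchor;

    public void AttachWeapon()
    {
        // Parent the new instance to the anchor so it follows the hand
        // during animation, then zero out its local transform.
        GameObject weapon = Instantiate(weaponPrefab, handAnchor);
        weapon.transform.localPosition = Vector3.zero;
        weapon.transform.localRotation = Quaternion.identity;
    }
}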
The Code for Character Customization

Things don't happen automatically, so we need to create some C# code that will handle the customization of our character model. The script we create here will handle the UI events that drive the enabling and disabling of different parts of the model mesh. Create a new C# script and call it CharacterCustomization.cs. This script will be attached to the Base GameObject in the scene. Here is a listing of the script; the original version enables and disables each mesh case by case in long switch statements, which has been condensed here into lookup arrays and loops without changing the behaviour:

using UnityEngine;
using UnityEngine.UI;

public class CharacterCustomization : MonoBehaviour
{
    public GameObject PLAYER_CHARACTER;
    public Material[] PLAYER_SKIN;

    // Clothing, body, and hair meshes
    public GameObject CLOTH_01LOD0;
    public GameObject CLOTH_01LOD0_SKIN;
    public GameObject CLOTH_02LOD0;
    public GameObject CLOTH_02LOD0_SKIN;
    public GameObject CLOTH_03LOD0;
    public GameObject CLOTH_03LOD0_SKIN;
    public GameObject CLOTH_03LOD0_FAT;
    public GameObject BELT_LOD0;
    public GameObject SKN_LOD0;
    public GameObject FAT_LOD0;   // chubby body mesh
    public GameObject RGL_LOD0;   // regular/buff body mesh
    public GameObject HAIR_LOD0;
    public GameObject BOW_LOD0;

    // Head equipment
    public GameObject GLADIATOR_01LOD0;
    public GameObject HELMET_01LOD0;
    public GameObject HELMET_02LOD0;
    public GameObject HELMET_03LOD0;
    public GameObject HELMET_04LOD0;

    // Shoulder pads - right arm / left arm
    public GameObject SHOULDER_PAD_R_01LOD0;
    public GameObject SHOULDER_PAD_R_02LOD0;
    public GameObject SHOULDER_PAD_R_03LOD0;
    public GameObject SHOULDER_PAD_R_04LOD0;
    public GameObject SHOULDER_PAD_L_01LOD0;
    public GameObject SHOULDER_PAD_L_02LOD0;
    public GameObject SHOULDER_PAD_L_03LOD0;
    public GameObject SHOULDER_PAD_L_04LOD0;

    // Forearm plates - right / left
    public GameObject ARM_PLATE_R_1LOD0;
    public GameObject ARM_PLATE_R_2LOD0;
    public GameObject ARM_PLATE_L_1LOD0;
    public GameObject ARM_PLATE_L_2LOD0;

    // Weapons
    public GameObject AXE_01LOD0;
    public GameObject AXE_02LOD0;
    public GameObject CLUB_01LOD0;
    public GameObject CLUB_02LOD0;
    public GameObject FALCHION_LOD0;
    public GameObject GLADIUS_LOD0;
    public GameObject MACE_LOD0;
    public GameObject MAUL_LOD0;
    public GameObject SCIMITAR_LOD0;
    public GameObject SPEAR_LOD0;
    public GameObject SWORD_BASTARD_LOD0;
    public GameObject SWORD_BOARD_01LOD0;
    public GameObject SWORD_SHORT_LOD0;

    // Defensive equipment
    public GameObject SHIELD_01LOD0;
    public GameObject SHIELD_02LOD0;
    public GameObject QUIVER_LOD0;
    public GameObject BOW_01_LOD0;

    // Calf equipment and boots - right / left
    public GameObject KNEE_PAD_R_LOD0;
    public GameObject LEG_PLATE_R_LOD0;
    public GameObject KNEE_PAD_L_LOD0;
    public GameObject LEG_PLATE_L_LOD0;
    public GameObject BOOT_01LOD0;
    public GameObject BOOT_02LOD0;

    public bool ROTATE_MODEL = false;

    // Lookup arrays built once, so the handlers below can loop instead of
    // repeating the same SetActive calls in every switch case.
    private GameObject[] shoulderPadsR;
    private GameObject[] shoulderPadsL;
    private GameObject[] helmets;
    private GameObject[] weapons;

    void Awake()
    {
        shoulderPadsR = new[] { SHOULDER_PAD_R_01LOD0, SHOULDER_PAD_R_02LOD0,
                                SHOULDER_PAD_R_03LOD0, SHOULDER_PAD_R_04LOD0 };
        shoulderPadsL = new[] { SHOULDER_PAD_L_01LOD0, SHOULDER_PAD_L_02LOD0,
                                SHOULDER_PAD_L_03LOD0, SHOULDER_PAD_L_04LOD0 };
        helmets = new[] { HELMET_01LOD0, HELMET_02LOD0, HELMET_03LOD0, HELMET_04LOD0 };
        weapons = new[] { AXE_01LOD0, AXE_02LOD0, CLUB_01LOD0, CLUB_02LOD0,
                          FALCHION_LOD0, GLADIUS_LOD0, MACE_LOD0, MAUL_LOD0,
                          SCIMITAR_LOD0, SPEAR_LOD0, SWORD_BASTARD_LOD0,
                          SWORD_BOARD_01LOD0, SWORD_SHORT_LOD0 };
    }

    void Update()
    {
        // Press R to toggle a slow turntable rotation of the model.
        if (Input.GetKeyUp(KeyCode.R))
        {
            ROTATE_MODEL = !ROTATE_MODEL;
        }
        if (ROTATE_MODEL)
        {
            PLAYER_CHARACTER.transform.Rotate(new Vector3(0, 1, 0), 33.0f * Time.deltaTime);
        }
        // Press L to log the player name stored in PlayerPrefs.
        if (Input.GetKeyUp(KeyCode.L))
        {
            Debug.Log(PlayerPrefs.GetString("NAME"));
        }
    }

    // The toggles are named "SP-01" to "SP-04"; only the selected pair of
    // pads is shown, and the selection is persisted in PlayerPrefs.
    public void SetShoulderPad(Toggle id)
    {
        int index = int.Parse(id.name.Substring(3)) - 1;
        for (int i = 0; i < shoulderPadsR.Length; i++)
        {
            bool active = (i == index) && id.isOn;
            shoulderPadsR[i].SetActive(active);
            shoulderPadsL[i].SetActive(active);
            PlayerPrefs.SetInt("SP-0" + (i + 1), i == index ? 1 : 0);
        }
    }

    // "BT-01" shows the regular/buff mesh, "BT-02" the chubby mesh.
    public void SetBodyType(Toggle id)
    {
        switch (id.name)
        {
            case "BT-01":
                RGL_LOD0.SetActive(id.isOn);
                FAT_LOD0.SetActive(false);
                break;
            case "BT-02":
                RGL_LOD0.SetActive(false);
                FAT_LOD0.SetActive(id.isOn);
                break;
        }
    }

    public void SetKneePad(Toggle id)
    {
        KNEE_PAD_R_LOD0.SetActive(id.isOn);
        KNEE_PAD_L_LOD0.SetActive(id.isOn);
    }

    public void SetLegPlate(Toggle id)
    {
        LEG_PLATE_R_LOD0.SetActive(id.isOn);
        LEG_PLATE_L_LOD0.SetActive(id.isOn);
    }

    // Slider value 0 means no weapon; values 1 to 13 select one weapon.
    public void SetWeaponType(Slider id)
    {
        int selected = System.Convert.ToInt32(id.value);
        for (int i = 0; i < weapons.Length; i++)
        {
            weapons[i].SetActive(i == selected - 1);
        }
    }

    // The toggles are named "HL-01" to "HL-04"; only one helmet is shown.
    public void SetHelmetType(Toggle id)
    {
        int index = int.Parse(id.name.Substring(3)) - 1;
        for (int i = 0; i < helmets.Length; i++)
        {
            helmets[i].SetActive((i == index) && id.isOn);
        }
    }

    public void SetShieldType(Toggle id)
    {
        switch (id.name)
        {
            case "SL-01":
                SHIELD_01LOD0.SetActive(id.isOn);
                SHIELD_02LOD0.SetActive(false);
                break;
            case "SL-02":
                SHIELD_01LOD0.SetActive(false);
                SHIELD_02LOD0.SetActive(id.isOn);
                break;
        }
    }

    // The slider index selects one of the skin materials for all body meshes.
    public void SetSkinType(Slider id)
    {
        Material skin = PLAYER_SKIN[System.Convert.ToInt32(id.value)];
        SKN_LOD0.GetComponent<Renderer>().material = skin;
        FAT_LOD0.GetComponent<Renderer>().material = skin;
        RGL_LOD0.GetComponent<Renderer>().material = skin;
    }

    public void SetBootType(Toggle id)
    {
        switch (id.name)
        {
            case "BT-01":
                BOOT_01LOD0.SetActive(id.isOn);
                BOOT_02LOD0.SetActive(false);
                break;
            case "BT-02":
                BOOT_01LOD0.SetActive(false);
                BOOT_02LOD0.SetActive(id.isOn);
                break;
        }
    }
}

This is a long script, but it is straightforward. At the top we define all of the variables that reference the different meshes of our character model. All variables are of type GameObject, with the exception of the PLAYER_SKIN variable, which is an array of the Material data type; the array stores the different textures created for the character model. A few functions are called by the UI event handlers: SetShoulderPad(Toggle id), SetBodyType(Toggle id), SetKneePad(Toggle id), SetLegPlate(Toggle id), SetWeaponType(Slider id), SetHelmetType(Toggle id), SetShieldType(Toggle id), SetSkinType(Slider id), and SetBootType(Toggle id). Each takes a parameter that identifies which specific type it should enable or disable.
A big note here! You can also use the system we just built to create all of the different variations of your non-player character models and store them as prefabs. This will save you a great deal of time and effort in creating the characters representing the different barbarians!

Preserving Our Character State

Now that we have spent the time to customize our character, we need to preserve it and use it in our game. Unity provides a function called DontDestroyOnLoad() that we can utilize here. What does it do? It keeps the specified GameObject in memory when going from one scene to the next. We can use this mechanism for now; eventually, though, you will want to create a system that can save and load your user data.

Go ahead and create a new C# script and call it DoNotDestroy.cs. This script is going to be very simple. Here is the listing:

using UnityEngine;

public class DoNotDestroy : MonoBehaviour
{
    // Keep this GameObject (and its children) alive across scene loads.
    void Start()
    {
        DontDestroyOnLoad(this.gameObject);
    }
}

After you create the script, attach it to your character model prefab in the scene.
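Since SetShoulderPad() already writes the current selection to PlayerPrefs, a natural extension is to read those keys back when the game scene loads. The following snippet is a minimal sketch of that idea, not code from the book; the CharacterStateLoader and RestoreShoulderPads names are illustrative assumptions, and the same pattern extends to any other PlayerPrefs keys you choose to store.

using UnityEngine;

public class CharacterStateLoader : MonoBehaviour
{
    // Assign these in the Inspector, the same way as in CharacterCustomization.
    public GameObject[] shoulderPadsR;
    public GameObject[] shoulderPadsL;

    void Start()
    {
        RestoreShoulderPads();
    }

    // Re-apply the shoulder pad choice saved by SetShoulderPad();
    // the second argument to GetInt is the default when no key exists.
    void RestoreShoulderPads()
    {
        for (int i = 0; i < shoulderPadsR.Length; i++)
        {
            bool on = PlayerPrefs.GetInt("SP-0" + (i + 1), 0) == 1;
            shoulderPadsR[i].SetActive(on);
            shoulderPadsL[i].SetActive(on);
        }
    }
}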
Not bad! Let's do a quick recap of what we have done so far.

Recap

By now you should have three functional scenes: the scene that represents the main menu, the scene that represents our initial level, and the scene we just created for character customization.
Here is the flow of our game thus far: we start the game and see the main menu, select the Start Game button to enter the character customization scene, do our customization, and when we click the Save button we load level 1. For this to work, we have created the following C# scripts:

GameMaster.cs: used as the main script to keep track of our game state
CharacterCustomization.cs: used exclusively for customizing our character
DoNotDestroy.cs: used to preserve the state of a given object
CharacterController.cs: used to control the motion of our character
IKHandle.cs: used to implement inverse kinematics for the feet

When you combine all of this, you have a good framework and flow that can be extended and improved as we go along.

Summary

We covered some very important topics and concepts in this article that can be used and enhanced for your games. We started by looking into how to customize your player character; the concepts you take away can be applied to a wide variety of scenarios. We looked at how to understand the structure of your character model so that you can better determine the customization methods for the different types of weapons, clothing, armour, shields, and so on. We then looked at how to create a user interface that enables customization of the player character during gameplay, and we learned that the tool we developed can also be used to quickly create several different customized character models and store them as prefabs for later use, which is a great time saver. Finally, we learned how to preserve the state of our player character after customization for gameplay. You should now have an idea of how to approach your project.

Resources for Article:

Further resources on this subject:
Animations Sprites [article]
Development Tricks with Unreal Engine 4 [article]
The Game World [article]


Top 5 free Business Intelligence tools

Amey Varangaonkar
02 Apr 2018
7 min read
There is no shortage of business intelligence tools available to modern businesses today, but they're not always easy on the pocket. Great functionality, a stylish UI, and ease of use usually come with a price tag. If you can afford it, great; if not, it's time to start thinking about open source and free business intelligence tools.

Free business intelligence tools can power your business

Take a look at 5 of the best free or open source business intelligence tools. They're all as effective and powerful as anything you'd pay a premium for. You simply need to know what you're doing with them.

BIRT

BIRT (Business Intelligence and Reporting Tools) is an open-source project that offers industry-standard reporting and BI capabilities. It's available as both a desktop and web application. As a top-level project within the umbrella of the Eclipse Foundation, it has a pedigree that means you can be confident in its potency. BIRT is especially useful for businesses with a working environment built around Java and Java EE, as its reporting and charting engines can integrate seamlessly with Java. From creating a range of reports to different types of charts and graphs, BIRT can also be used for advanced analytical tasks. You can learn about the impressive reporting capabilities that BIRT offers on its official features page.

Pros: The BIRT platform is one of the most widely used open source business intelligence tools in the world, with more than 12 million downloads and 2.5 million users across more than 150 countries. With such a large community, getting started with this tool, or finding solutions to problems you come across, should be easy.

Cons: Some programming experience, preferably in Java, is required to make the best use of this tool. The complex functions and features may not be easy to grasp for absolute beginners.

Jaspersoft Community

Jaspersoft, formerly known as Panscopic, is one of the leading open source suites of tools for a variety of reporting and business intelligence tasks. It was acquired by TIBCO in 2014 in a deal worth approximately $185 million, and has grown in popularity ever since. Jaspersoft began with the promise of "saving the world from the oppression of complex, heavyweight business intelligence", and the Community edition offers the following set of tools for easier reporting and analytics:

JasperReports Server: used for designing standalone or embeddable reports which can be used across third-party applications
JasperReports Library: lets you design pixel-perfect reports from different kinds of datasets
Jaspersoft ETL: a popular warehousing tool powered by Talend for extracting useful insights from a variety of data sources
Jaspersoft Studio: an Eclipse-based report designer for JasperReports and JasperReports Server
Visualize.js: a JavaScript-based framework to embed Jaspersoft applications

Pros: Jaspersoft, like BIRT, has a large community of developers actively solving any problem they come across. More often than not, your queries are bound to be answered satisfactorily.

Cons: Absolute beginners might struggle with the variety of offerings and their applications. The suite of Jaspersoft tools is better suited to someone with intermediate programming experience.

KNIME

KNIME is a free, open-source data analytics and business intelligence company that offers a robust platform for reporting and data integration.
Used commonly by data scientists and analysts, KNIME offers features for data mining, machine learning, and data visualization in order to build effective end-to-end data pipelines. There are two major product offerings from KNIME:

KNIME Analytics Platform
KNIME Cloud Analytics Platform

Considered one of the most established players in the analytics and business intelligence market, KNIME has customers in over 60 countries worldwide, and is often featured as a 'Leader' in the Gartner Magic Quadrant. It finds applications in a variety of enterprise use cases, including pharma, CRM, finance, and more.

Pros: If you want to leverage the power of predictive analytics and machine learning, KNIME offers just the right environment to build industry-standard, accurate models. You can create a wide variety of visualizations, including complex plots and charts, and perform complex ETL tasks with relative ease.

Cons: KNIME is not suited for beginners. It's built instead for established professionals such as data scientists and analysts who want to conduct analyses quickly and efficiently.

Tableau Public

Tableau Public's promise is simple: "Visualize and share your data in minutes - for free". Tableau is one of the most popular business intelligence tools out there, rivalling the likes of Qlik, Spotfire, and Power BI. Along with its enterprise edition, which offers premium analytics, reporting, and dashboarding features, Tableau also offers a freely available Public version for effective visual analytics. Last year, Tableau announced that the interactive stories and reports published on the Tableau Public platform had received more than 1 billion views worldwide. Leading news organizations around the world, including the BBC and CNBC, use Tableau Public for data visualization.

Pros: Tableau Public is a very popular tool with a very large community of users. If you find yourself struggling to understand or execute any feature on this platform, there are plenty of solutions available on the community forums and on sites such as Stack Overflow. The quality of visualizations is industry-standard, and you can publish them anywhere on the web without any hassle.

Cons: It's quite difficult to think of any drawback of using Tableau Public, to be honest. Having limited features as compared to the enterprise edition of Tableau is an obvious shortcoming, though.

Editor's tip: If you want to get started with Tableau Public and create interesting data stories using it, Creating Data Stories with Tableau Public is one book you do not want to miss out on!

Microsoft Power BI

Microsoft Power BI is a paid, enterprise-ready offering by Microsoft to empower businesses to find intuitive data insights across a variety of data formats. Microsoft also offers a stripped-down version of Power BI with limited business intelligence capabilities called Power BI Desktop. In this free version, users get up to 1 GB of data to work on and the ability to create different kinds of visualizations on CSV data as well as Excel spreadsheets. The reports and visualizations built using Power BI Desktop can be viewed on mobile devices as well as in browsers, and can be updated on the go.

Pros: Free and very easy to use. Power BI Desktop allows you to create intuitive visualizations and reports. For beginners looking to learn the basics of business intelligence and data visualization, this is a great tool to use.
You can also work with any kind of data and connect it to Power BI Desktop effortlessly.

Cons: You don't get the full suite of features that makes Power BI such an elegant business intelligence tool. Also, new reports and dashboards cannot be created via the mobile platform.

Editor's tip: If you want to get started with Microsoft Power BI, or want handy tips on using Power BI effectively, our Microsoft Power BI Cookbook will prove to be of great use!

There are a few other free and open source tools which are quite effective and deserve an honorary mention in this article. We were absolutely spoilt for choice, and picking the top 5 tools among all these options was a lot of hard work! Some other tools worth checking out are Dataiku Free Edition, Pentaho Community Edition, QlikView Personal Edition, and RapidMiner, among others.

What do you think about this list? Are there any other free or open source business intelligence tools which should have made it into the list?

Create your first OpenAI Gym environment [Tutorial]

Savia Lobo
19 Sep 2018
7 min read
OpenAI Gym is an open source toolkit that provides a diverse collection of tasks, called environments, with a common interface for developing and testing your intelligent agent algorithms. The toolkit introduces a standard Application Programming Interface (API) for interfacing with environments designed for reinforcement learning. Each environment has a version attached to it, which ensures meaningful comparisons and reproducible results with the evolving algorithms and the environments themselves. This article is an excerpt taken from the book Hands-On Intelligent Agents with OpenAI Gym, written by Praveen Palanisamy. In this article, you will get to know what OpenAI Gym is and its features, and later create your own OpenAI Gym environment.

The Gym toolkit, through its various environments, provides an episodic setting for reinforcement learning, where an agent's experience is broken down into a series of episodes. In each episode, the initial state of the agent is randomly sampled from a distribution, and the interaction between the agent and the environment proceeds until the environment reaches a terminal state. Do not worry if you are not familiar with reinforcement learning. Some of the basic environments available in the OpenAI Gym library are shown in the following screenshot:

Examples of basic environments available in the OpenAI Gym, with a short description of the task

The OpenAI Gym natively has about 797 environments spread over different categories of tasks. The famous Atari category has the largest share, with about 116 environments (half with screen inputs and half with RAM inputs)! The categories of tasks/environments supported by the toolkit are listed here:

Algorithmic
Atari
Board games
Box2D
Classic control
Doom (unofficial)
Minecraft (unofficial)
MuJoCo
Soccer
Toy text
Robotics (newly added)

The various types of environment (or task) available under the different categories, along with a brief description of each environment, are given next. Keep in mind that you may need some additional tools and packages installed on your system to run environments in each of these categories. For a detailed overview of each of these categories, head over to the book.

With that, you have a very good overview of all the different categories and types of environment that are available as part of the OpenAI Gym toolkit. It is worth noting that the release of the OpenAI Gym toolkit was accompanied by an OpenAI Gym website (gym.openai.com), which maintained a scoreboard for every algorithm that was submitted for evaluation. It showcased the performance of user-submitted algorithms, and some submissions were also accompanied by detailed explanations and source code. Unfortunately, OpenAI decided to withdraw support for the evaluation website, and the service went offline in September 2017.

Now you have a good picture of the various categories of environment available in OpenAI Gym and what each category provides you with. Next, we will look at the key features of OpenAI Gym that make it an indispensable component in many of today's advancements in intelligent agent development, especially those that use reinforcement learning or deep reinforcement learning.

Understanding the features of OpenAI Gym

Here, we will take a look at the key features that have made the OpenAI Gym toolkit very popular in the reinforcement learning community and led to it becoming widely adopted.

Simple environment interface

OpenAI Gym provides a simple and common Python interface to environments.
Specifically, it takes an action as input and provides observation, reward, done, and an optional info object based on that action as the output at each step. If this does not make perfect sense to you yet, do not worry; we will go over the interface again in a more detailed manner to help you understand. This paragraph is just to give you an overview of how simple the interface is. It provides great flexibility, as users can design and develop their agent algorithms based on any paradigm they like, and not be constrained to any particular paradigm, thanks to this simple and convenient interface.

Comparability and reproducibility

We intuitively feel that we should be able to compare the performance of an agent or an algorithm in a particular task to the performance of another agent or algorithm in the same task. For example, if an agent gets a score of 1,000 on average in the Atari game of Space Invaders, we should be able to tell that this agent is performing worse than an agent that scores 5,000 on average in the same game over the same amount of training time. But what happens if the scoring system for the game is slightly changed? Or if the environment interface is modified to include additional information about the game states that gives the second agent an advantage? That would make the score-to-score comparison unfair, right?

To handle such changes in the environment, OpenAI Gym uses strict versioning for environments. The toolkit guarantees that any change to an environment will be accompanied by a different version number. Therefore, if the original version of the Atari Space Invaders game environment was named SpaceInvaders-v0 and some changes were made to the environment to provide more information about the game states, the environment's name would be changed to SpaceInvaders-v1. This simple versioning system makes sure we are always comparing performance measured on the exact same environment setup. This way, the results obtained are comparable and reproducible.

Ability to monitor progress

All the environments available as part of the Gym toolkit are equipped with a monitor. This monitor logs every time step of the simulation and every reset of the environment. What this means is that the environment automatically keeps track of how our agent is learning and adapting with every step. You can even configure the monitor to automatically record videos of the game while your agent is learning to play.
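To illustrate the monitor, here is a minimal sketch that wraps an environment with Gym's Monitor wrapper so that episode statistics (and, optionally, videos) are written to disk. This assumes a classic pre-0.20 release of the gym package, in line with the era of this article; the output directory and episode count are arbitrary choices for the example.

import gym
from gym import wrappers

env = gym.make('CartPole-v0')
# Wrap the environment; episode statistics and recordings are written
# to the given directory. force=True clears results from earlier runs.
env = wrappers.Monitor(env, '/tmp/cartpole-monitor', force=True)

for episode in range(3):
    obs = env.reset()
    done = False
    while not done:
        # A random policy, used here only to generate logged episodes.
        obs, reward, done, info = env.step(env.action_space.sample())

env.close()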
Creating your first OpenAI Gym environment

This section provides a quick way to get started with the OpenAI Gym Python API on Linux and macOS using virtualenv, so that you can get a sneak peek into the Gym!

macOS and Ubuntu Linux systems come with Python installed by default. You can check which version of Python is installed by running python --version from a terminal window. If this returns python followed by a version number, then you are good to proceed to the next steps! If you get an error saying the Python command was not found, then you have to install Python.

1. Install virtualenv:

$ pip install virtualenv

If pip is not installed on your system, you can install it by typing sudo easy_install pip.

2. Create a virtual environment named openai-gym using the virtualenv tool:

$ virtualenv openai-gym

3. Activate the openai-gym virtual environment:

$ source openai-gym/bin/activate

4. Install all the packages for the Gym toolkit from upstream:

$ pip install -U gym

If you get permission denied or failed with error code 1 when you run the pip install command, it is most likely because the permissions on the directory you are trying to install the package to (the openai-gym directory inside virtualenv in this case) need special/root privileges. You can either run sudo -H pip install -U gym[all] to solve the issue or change permissions on the openai-gym directory by running sudo chmod -R o+rw ~/openai-gym.

5. Test to make sure the installation is successful:

$ python -c 'import gym; gym.make("CartPole-v0");'

Creating and visualizing a new Gym environment

In just a minute or two, you have created an instance of an OpenAI Gym environment to get started! Let's open a new Python prompt and import the gym module:

>> import gym

Once the gym module is imported, we can use the gym.make method to create our new environment like this:

>> env = gym.make('CartPole-v0')
>> env.reset()
>> env.render()

This will bring up a window rendering the CartPole environment. Hooray!
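Building on this, a natural next step is to drive the environment from a loop so you can see the full action/observation/reward/done cycle that the simple environment interface section described. The sketch below uses a random agent purely for illustration, and assumes the same classic pre-0.20 gym API as before:

import gym

env = gym.make('CartPole-v0')

for episode in range(5):
    obs = env.reset()                        # initial state, randomly sampled
    total_reward, done = 0.0, False
    while not done:
        env.render()
        action = env.action_space.sample()   # random action for now
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print('Episode {} finished with return {}'.format(episode, total_reward))

env.close()

Replacing env.action_space.sample() with the output of a learning algorithm is essentially all it takes to plug in a real agent.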
Summary

In this post, you learned what OpenAI Gym is and its features, and you created your first OpenAI Gym environment. You now have a very good idea about OpenAI Gym. If you've enjoyed this post, head over to the book Hands-On Intelligent Agents with OpenAI Gym to learn about other recent learning environments and learning algorithms.

Extending OpenAI Gym environments with Wrappers and Monitors [Tutorial]
How to build a cartpole game using OpenAI Gym
Top 5 tools for reinforcement learning


Google employees ‘Walkout for Real Change’ today. These are their demands.

Natasha Mathur
01 Nov 2018
5 min read
More than 1,500 Google employees around the world are planning to walk out of their respective Google offices today to protest against Google's handling of sexual misconduct within the workplace, according to the New York Times. This is part of the "women's walkout" organized earlier this week by more than 200 Google engineers in response to Google's handling of sexual misconduct in the recent past, which employees found inadequate.

The planning for the walkout began last Friday, when Claire Stapleton, a product marketing manager at Google's YouTube, created an internal mailing list to organize it, according to the New York Times. More than 200 employees joined in over the weekend, a number that has since grown to more than 1,500. The organizers took to Twitter yesterday to lay out five demands for change within the workplace. The protest has already started at Google's Tokyo and Singapore offices. Google employees and contractors across the globe will be leaving work at 11:10 AM in their respective time zones.

Here are some glimpses from the walkout:
https://twitter.com/GoogleWalkout/status/1058199862502612993
https://twitter.com/EmmaThomson2/status/1058180157804994562
https://twitter.com/GoogleWalkout/status/1058018104930897920
https://twitter.com/GoogleWalkout/status/1058010748444700672
https://twitter.com/GoogleWalkout/status/1058003099581853697

The demands laid out by the Google employees are as follows:

1. An end to forced arbitration in cases of harassment and discrimination for all current and future employees. This means that Google should no longer require people to waive their right to sue. In fact, every employee should have the right to bring a co-worker, representative, or supporter of their choice when meeting with HR to file a harassment claim.

2. A commitment to end pay and opportunity inequity. This includes making sure that there are women of color at all levels of the organization. There should also be transparent data on the gender, race, and ethnicity compensation gap, across both level and years of industry experience, and the methods used to aggregate such data should be transparent as well.

3. A publicly disclosed sexual harassment transparency report. This includes the number of harassment claims at Google over time, the types of claims submitted, how many victims and accused have left Google, and details about exit packages and their worth.

4. A clear, uniform, and globally inclusive process for reporting sexual misconduct safely and anonymously. The current process in place is not working: HR's performance is assessed by senior management and directors, which forces HR to put management's interests ahead of the employees who report harassment and discrimination. Accountability, safety, and the ability to report unsafe working conditions should not be dictated by employment status.

5. Elevate the Chief Diversity Officer to answer directly to the CEO and make recommendations directly to the Board of Directors, and appoint an employee representative to the Board.

The frustration among Google employees surfaced after a New York Times report brought to light shocking allegations of sexual misconduct at Google against Andy Rubin, the creator of Android. As per the report, Rubin was accused of misbehavior in 2014, and the allegations were confirmed by Google.
Due to this, he was asked to leave by former Google CEO Larry Page, but what is discreditable is the fact that Google paid him a $90 million exit package; he also received a high-profile, well-respected farewell from Google in October 2014. Moreover, senior executives such as David Drummond, Alphabet's Chief Legal Officer, who was mentioned in the NY Times report for engaging in "inappropriate relationships" within the organization, continues to hold a highly placed position at Google and has not faced any real punitive action for his past behavior.

"We don't want to feel that we're unequal or we're not respected anymore. Google's famous for its culture. But in reality, we're not even meeting the basics of respect, justice, and fairness for every single person here," Stapleton told the NY Times.

Google CEO Sundar Pichai had sent an email to all Google employees last Thursday, clarifying that the company had fired 48 people over the last two years for sexual harassment, 13 of whom were "senior managers and above". He also mentioned that none of them received exit packages. Pichai further apologized in an email obtained by Axios this Tuesday, saying that the "apology at TGIF didn't come through, and it wasn't enough". He also mentioned that he supports the engineers at Google who have organized the walkout. "I am taking in all your feedback so we can turn these ideas into action. We will have more to share soon. In the meantime, Eileen will make sure managers are aware of the activities planned for Thursday and that you have the support you need," wrote Pichai.

The same day, news came to light that Richard DeVaul, a director at X, a unit of Alphabet (Google's parent company), whose name was also mentioned in the New York Times report, had resigned from the company. DeVaul had been accused of sexually harassing Star Simpson, a hardware engineer, and did not receive any exit package on his resignation.

Public response to the walkout has been largely positive:
https://twitter.com/lizthegrey/status/1057859226100355072
https://twitter.com/amrtgaber/status/1057822987527761920
https://twitter.com/sparker2/status/1057846019122069508
https://twitter.com/LisaIronTongue/status/1057852658948595712

Ex-googler who quit Google on moral grounds writes to Senate about company's "unethical" China censorship plan
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices