
How-To Tutorials


Creating Sprites

Packt
15 Jun 2017
6 min read
In this article by Nikhil Malankar, author of Learning Android Game Development, we move on from the basics of creating components in Android to some more exciting stuff: building a proper 2D game. It will be a small 2D side-scroller in the style of Mario. Before we do that, let's first talk about games as a development concept. To understand games better, you will need to understand a bit of game theory, so before we proceed with creating images and backgrounds on screen, let's dive into it. Here is a list of topics we will cover in this article:

- Game theory
- Working with colors
- Creating images on screen
- Making a continuously scrolling background

(For more resources related to this topic, see here.)

Let's start with the first one.

Game Theory

If you observe a game carefully at the source code level, you will see that a game is just a set of illusions used to create certain effects and display them on screen. Perhaps the best example of this is the very game we are about to develop. In order to make your character appear to move ahead, you can do either of two things:

- Make the character move ahead
- Make the background move behind the character

Illusions

Either of the two approaches gives you the illusion that the character is moving in a certain direction. Also, if you remember Mario well, you will notice that the clouds and the grasses are one and the same sprite; only their colors were changed. This was because of the memory limitations of the console platform at the time:

Image source: http://hub.packtpub.com/wp-content/uploads/2017/06/mario-clouds-is-grass.jpg

Game developers use many such tricks to get their games running. Of course, in today's times we don't have to worry much about memory limitations, because a modern mobile device has far more computing power than the Apollo 11 rocket that landed on the Moon. Keeping the above two scenarios in mind, we are going to use one of them in our game to make our character move.

We also have to understand that every game is a loop of activities. Unlike an app, a game needs to redraw its resources every frame, many times per second, which gives the illusion of movement or of any other effect on screen. This rate of redrawing is measured in frames per second (FPS). It is similar to the concept of old films, where a long reel was projected on screen frame by frame. Take a look at the following image to understand this concept better:

Sprite sheet of a game character

As you can see above, a sprite is simply an image, and a sprite sheet is an image consisting of multiple images within itself, arranged so that they can be played back as an animation. If we want to make our character run, we simply read the frames from Run_000 all the way through Run_009 sequentially, which makes it appear as though the character is running. The majority of the things you will work with when making a game are based on manipulating movement, so you need to be clear about your coordinate system; it will come in handy, be it for firing a bullet out of a gun, moving a character, or simply turning around to look here and there. All of it is based on the simple component of movement.

Game Loop

At its core, every game is basically just a loop of events. It is set up to give calls to various functions and code blocks so that draw calls are executed on your screen, thereby making the game playable. Your game loop mostly comprises three parts:

- Initialize
- Update
- Draw

Initializing the game means setting an entry point to your game through which the other two parts can be called. Once your game is initialized, you need to start giving calls to your events, which can be managed through your update function. The draw function is responsible for drawing all your image data on screen; everything you see, including your backgrounds, images, and even your GUI, is the responsibility of the draw method. To say the least, your game loop is the heart of your game. This is just a basic overview of the game loop, and there is much more complexity you can add to it, but for now this much information is sufficient to get started. The following image perfectly illustrates what a game loop is:

Image source: https://gamedevelopment.tutsplus.com/articles/gamedev-glossary-what-is-the-game-loop--gamedev-2469
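The book builds this loop in Java on Android, but the shape is the same in any language. Here is a minimal, self-contained sketch in C++ of the Initialize/Update/Draw cycle described above; the function names, the 60 FPS target, and the fixed run length are illustrative assumptions, not code from the book:

```cpp
#include <chrono>
#include <thread>

// Illustrative stand-ins for the three phases of the game loop.
void initialize() { /* entry point: load sprite sheets, set up game state */ }
void update(double dt) { /* advance game logic by dt seconds: input, movement */ }
void draw(int spriteFrame) { /* issue draw calls: background, sprites, GUI */ }

int main() {
    initialize();
    const double frameTime = 1.0 / 60.0;       // target 60 frames per second
    for (int tick = 0; tick < 600; ++tick) {   // run ~10 seconds for this demo
        update(frameTime);
        // Cycling the tick through 10 frames plays Run_000 .. Run_009,
        // producing the running animation described above.
        draw(tick % 10);
        std::this_thread::sleep_for(std::chrono::duration<double>(frameTime));
    }
}
```

A real loop would run until the player quits and measure elapsed time rather than assume a fixed step, but the structure — initialize once, then update and draw every frame — is exactly what produces the illusion of motion.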
Game Design Document

Also, before starting a game, it is essential to create a Game Design Document (GDD). This document serves as the groundwork for the game you will be making. 99% of the time, when we start making a game we lose track of the features planned for it and deviate from the core game experience, so it is always recommended to have a GDD in place in order to keep focus. A GDD consists of the following things:

- Gameplay mechanics
- Story (if any)
- Level design
- Sound and music
- UI planning and game controls

You can read more about the Game Design Document at the following link: https://en.wikipedia.org/wiki/Game_design_document

Prototyping

When making a game, we need to test it as we build it. A game is one of the most complex pieces of software, and if we mess up one part, there is a chance it might break the entire game as a whole. This iterative build-and-test process is called prototyping. Making a prototype of your game is one of THE most important aspects of development, because this is where you test out the basic mechanics of your game. A prototype should be a simple working model of your game with basic functionality; it can also be described as a stripped-down version of your game.

Summary

Congratulations! You have successfully learned how to create images and work with colors in Android Studio.

Further resources on this subject:

- Android Game Development with Unity3D
- Optimizing Games for Android
- Drawing and Drawables in Android Canvas


6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers and development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them. Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you whether you should be using containers.

Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and no one really understands, containers could probably help you a lot.

Why do containers help simplify your codebase?

Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it evolves out of years of solving intractable problems, with knock-on effects and consequences that only need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says as much about a general inability to treat testing with the respect and time it deserves as anything else.

How do containers make testing easier?

There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don't need a fully fledged testing environment for every application you test. What this all boils down to is that testing becomes much quicker and easier. In theory, then, the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow.

Read next: How to build 12 factor microservices on Docker

Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks. If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier?

Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it is harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict. That all being said, it is worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price; it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then either use them or you don't. With containers running on, say, the cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers is much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for. This takes us back to the issue of the complex codebase. Think of it this way: if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically.

How do containers solve DevOps challenges?

Containers can help solve the problems that DevOps aims to tackle by breaking up software into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than DevOps proponents managed in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates easily. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps - containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you don't need to go into your application and make wholesale changes - which may have unintended consequences - instead, you can simply make the change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.


How to stop hackers from messing with your home network (IoT)

Guest Contributor
16 Oct 2018
8 min read
This week, NCCIC, in collaboration with the cybersecurity authorities of Australia, Canada, New Zealand, the United Kingdom, and the United States, released a joint Activity Alert Report. What is alarming in the findings is that a majority of sophisticated exploits on secure networks are being carried out by attackers using freely available tools that find loopholes in security systems.

The Internet of Things (IoT) is broader than most people realize. It involves diverse elements that make it possible to transfer data from a point of origin to a destination. Various Internet-ready mechanical devices, computers, and phones are part of your IoT, including servers, networks, cryptocurrency sites, down to the tracking chip in your pet's collar. Your IoT does not require person-to-person interaction. It also doesn't require person-to-device interaction, but it does require device-to-device connections and interactions. What does all this mean to you? It means hackers have more points of entry into your personal IoT than you ever dreamed of. Here are some of the ways they can infiltrate your personal IoT devices, along with some suggestions on how to keep them out.

Your home network

How many functions are controlled via a home network? From your security system to activating lighting at dusk to changing the setting on the thermostat, many people set up automatic tasks or use remote access to manually adjust so many things. It's convenient, but it comes with a degree of risk.

(Image courtesy of HotForSecurity.BitDefender.com)

Hackers who are able to detect and penetrate the wireless home network via your thermostat or the lighting system eventually burrow into other areas, like the hard drive where you keep financial documents. Before you know it, you're a ransomware victim. Too many people think their OS firewall will protect them, but by the time a hacker runs into that, they're already in your computer and can jump out to the wireless devices we've been talking about.

What can you do about it? Take a cue from your bank. Have you ever tried to access your account from a device that the bank doesn't recognize? If so, then you know the bank's system requires you to provide additional means of identification, like a fingerprint scan or answering a security question. That process is called multifactor authentication. Unless the hacker can provide more information, the system blocks the attempt. Make sure your home network is set up to require additional authentication when any device other than your phone, home computer, or tablet is used.

Spyware/Malware from websites and email attachments

Hacking via email attachments or picking up spyware and malware through visits to unsecured sites is still possible. Since these typically download to your hard drive and run in the background, you may not notice anything at all for a time. All the while, your data is being harvested. You can do something about it:

- Keep your security software up to date and always load the latest release of your firewall.
- Never open attachments with unusual extensions, even if they appear to be from someone you know.
- Always use your security software to scan attachments of any kind rather than relying solely on the security measures employed by your email client.
- Only visit secure sites. If the site address begins with "http" rather than "https", that's a sign you need to leave it alone.
- Remember to update your security software at least once a week. Automatic updates are a good thing.
- Don't forget to initiate a full system scan at least once a week, even if there are no apparent problems. Do so after making sure you've downloaded and installed the latest security updates.

Your pet's microchip

The point of a pet chip is to help you find your pet if it wanders away or is stolen. While not GPS-enabled, it's possible to scan the chip on an animal who ends up in an animal shelter or clinic and confirm a match. Unfortunately, that function is managed over a network. That means hackers can use it as a gateway.

(Image courtesy of HowStuffWorks.com)

Network security determines how vulnerable you are in terms of who can access the databases and come up with a match. Your best bet is to find out what security measures the chip manufacturer employs, including how often those measures are updated. If you don't get straight answers, go with a competitor's chip.

Your child's toy

Have you ever heard of a doll named Cayla? It's popular in Germany and also happens to be Wi-Fi enabled. That means hackers can gain access to the camera and microphone included in the doll's design. Wherever the doll is carried, it's possible to gather data that can be used for all sorts of activities. That includes capturing information about passwords, PIN codes, and anything else that's in range of the camera or the microphone. Internet-enabled toys need to be checked for spyware regularly. More manufacturers now provide detectors in their toy designs, but you may still need to initiate those scans and keep the software updated. This increases the odds that the toy remains a toy and doesn't become a spy for some hacker.

Infection from trading electronic files

It seems pretty harmless to accept a digital music file from a friend. In most cases, there is no harm. Unfortunately, your friend's digital music collection may already be corrupted. Once you load that corrupted file onto your hard drive, your computer may become part of a botnet running behind your own home network.

(Image courtesy of AlienVault.com)

Whenever you receive a digital file, either via email or from someone stopping by with a jump drive, always use your security software to scan it before loading it onto your system. The software should be able to catch the infection. If you find anything, let your friend know so he or she can take steps to debug the original file. These are only a few examples of how your IoT can be hacked and lead to data theft or corruption. As with any type of internet-based infection, there are new strategies developed daily.

How Blockchain might help

There's one major IoT design flaw that allows hackers to easily get into a system, and that is the centralized nature of the network's decision-making. There is a single point of control through which all requests are routed and decisions are made. A hacker has only to penetrate this singular authority to take control of everything, because individual devices can't decide on their own what constitutes a threat. Interestingly enough, the blockchain technology that underpins Bitcoin and many other cryptocurrencies might eventually provide a solution to the extremely hackable state of the IoT as presently configured. While not a perfect solution, the decentralized nature of blockchain has a lot of companies spending plenty on research and development for eventual deployment to a host of uses, including the IoT.

The advantage blockchain technology offers to IoT is that it removes the single point of control and allows each device on a network to work in conjunction with the others to detect and thwart hack attempts. Blockchain works through group consensus. This means that in order to take control of a system, a bad actor would have to take control of a majority of the devices all at once, which is an exponentially harder task than breaking through the single point of control. If a blockchain-powered IoT network detects an intrusion attempt and the group decides it is malicious, the intruder can be quarantined so that no damage occurs to the network. That's the theory, anyway. Since blockchain is an almost brand new technology, there are hurdles to be overcome before it can be deployed on a wide scale as a solution to IoT security problems. Here are a few:

- Computing power costs: It takes plenty of computing resources to run a blockchain - more than the average household owner is willing to pay right now. That's why the focus is on industrial IoT uses at present.
- Legal issues: If you have AI-powered devices making decisions on their own, who will bear ultimate responsibility when things go wrong?
- Volatility: The development around blockchain is young and unpredictable. Investing in a solution right now might mean having to buy all new equipment in a year.

Final Thoughts

One thing is certain. We have a huge problem (IoT security) and something that might eventually offer a solid solution (blockchain technology). Expect the path from here to there to be filled with potholes and dead ends, but stay tuned. The potential for a truly revolutionary technology to come into its own is definitely in the mix.

About Gary Stevens

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation, as well as an active GitHub contributor.

- Defending your business from the next wave of cyberwar: IoT Threats
- 5 DIY IoT projects you can build under $50
- IoT botnets Mirai and Gafgyt target vulnerabilities in Apache Struts and SonicWall


Network programming 101 with GAWK (GNU AWK)

Pavan Ramchandani
31 May 2018
12 min read
In today's tutorial, we will learn about the networking aspects of GAWK, for example working with TCP/IP on both the client side and the server side. We will also explore HTTP services to help you get going with networking in AWK. This tutorial is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming.

The AWK programming language was developed as a pattern-matching language for text manipulation; however, GAWK has advanced features, such as file-like handling of network connections. We can perform simple TCP/IP connection handling in GAWK with the help of special filenames. GAWK extends the two-way I/O mechanism used with the |& operator to simple networking using these special filenames, which hide the complex details of socket programming from the programmer. The special filename for network communication is made up of multiple fields, all of which are mandatory. The following is the syntax for creating a filename for network communication:

/net-type/protocol/local-port/remote-host/remote-port

Each field is separated from the next with a forward slash. Specifying all of the fields is mandatory. If any field is not valid for a protocol, or you want the system to pick a default value for that field, it is set to 0. The following list illustrates the meaning of the different fields used in creating the file for network communication:

- net-type: Its value is inet4 for IPv4, inet6 for IPv6, or inet to use the system default (which is generally IPv4).
- protocol: It is either tcp or udp for a TCP or UDP IP connection. It is advised you use the TCP protocol for networking; UDP is used when low overhead is a priority.
- local-port: Its value decides which port on the local machine is used for communication with the remote system. On the client side, its value is generally set to 0 to indicate that any free port may be picked by the system itself. On the server side, its value is other than 0, because the service is provided on a specific, publicly known port number or service name, such as http, smtp, and so on.
- remote-host: It is the remote hostname at the other end of the connection. On the server side, its value is set to 0 to indicate that the server is open to all other hosts for connection. On the client side, its value is fixed to one remote host and hence is always different from 0. This name can be given either symbolically, such as www.google.com, or numerically, such as 123.45.67.89.
- remote-port: It is the port on which the remote machine will communicate across the network. For clients, its value is other than 0, to indicate which port they are connecting to on the remote machine. For servers, its value is the port on which they want the connection from the client to be established. We can use a service name here, such as ftp or http, or a port number, such as 80, 21, and so on.

TCP client and server (/inet/tcp)

TCP guarantees that data is received at the other end in the same order as it was transmitted, so always prefer TCP. In the following example, we will create a TCP server (sender) to send the current date and time of the server to the client. The server uses the strftime() function with the coprocess operator to write the timestamp to the socket, listening on port 8080. The remote host and remote port could be any client, so both values are kept as 0.
The server connection is closed by passing the special filename to the close() function, as follows:

```
$ vi tcpserver.awk

#TCP-Server
BEGIN {
    print strftime() |& "/inet/tcp/8080/0/0"
    close("/inet/tcp/8080/0/0")
}
```

Now, open one Terminal and run this program before running the client program, as follows:

```
$ awk -f tcpserver.awk
```

Next, we create the TCP client (receiver) to receive the data sent by the server. Here, we first create the client connection and pass the received data to getline using the coprocess operator. The local-port value is set to 0 so it is automatically chosen by the system, the remote-host is set to localhost, and the remote-port is set to the TCP server's port, 8080. After that, the received message is printed using the print $0 command, and finally, the client connection is closed using the close() function, as follows:

```
$ vi tcpclient.awk

#TCP-client
BEGIN {
    "/inet/tcp/0/localhost/8080" |& getline
    print $0
    close("/inet/tcp/0/localhost/8080")
}
```

Now, execute the tcpclient program in another Terminal as follows:

```
$ awk -f tcpclient.awk
```

The output of the previous code is as follows:

```
Fri Feb 9 09:42:22 IST 2018
```

UDP client and server (/inet/udp)

The server and client programs that use the UDP protocol for communication are almost identical to their TCP counterparts, with the only difference being that the protocol is changed from tcp to udp. So, the UDP server and UDP client programs can be written as follows:

```
$ vi udpserver.awk

#UDP-Server
BEGIN {
    print strftime() |& "/inet/udp/8080/0/0"
    "/inet/udp/8080/0/0" |& getline
    print $0
    close("/inet/udp/8080/0/0")
}

$ awk -f udpserver.awk
```

Here, one addition has been made to the client program: the client sends the message "hello from client!" to the server. So when we execute these programs, the Terminal running udpclient.awk receives the remote system's date and time, and the Terminal running udpserver.awk receives the hello message from the client:

```
$ vi udpclient.awk

#UDP-client
BEGIN {
    print "hello from client!" |& "/inet/udp/0/localhost/8080"
    "/inet/udp/0/localhost/8080" |& getline
    print $0
    close("/inet/udp/0/localhost/8080")
}

$ awk -f udpclient.awk
```

GAWK can be used to open direct sockets only. Currently, there is no way to access services available over an SSL connection, such as https, smtps, pop3s, imaps, and so on.

Reading a web page using HttpService

To read a web page, we use the Hypertext Transfer Protocol (HTTP) service, which runs on port number 80. First, we redefine the record separators RS and ORS, because HTTP requires CR-LF to separate lines. The program makes a request to the IP address 35.164.82.168 (www.grymoire.com) of a static website, issuing a GET request for the web page http://35.164.82.168/Unix/donate.html. The GET request is a method which tells the web server to transmit the web page donate.html. The output is read through getline using the coprocess operator and printed on the screen, line by line, using a while loop. Finally, we close the HTTP service connection.
The following is the program to retrieve the web page:

```
$ vi view_webpage.awk

BEGIN {
    RS = ORS = "\r\n"
    http = "/inet/tcp/0/35.164.82.168/80"
    print "GET http://35.164.82.168/Unix/donate.html" |& http
    while ((http |& getline) > 0)
        print $0
    close(http)
}

$ awk -f view_webpage.awk
```

Upon executing the program, it fills the screen with the source code of the page, as follows:

```
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML lang="en-US">
<HEAD>
<TITLE> Welcome to The UNIX Grymoire!</TITLE>
<meta name="keywords" content="grymoire, donate, unix, tutorials, sed, awk">
<META NAME="Description" CONTENT="Please donate to the Unix Grymoire" >
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link href="myCSS.css" rel="stylesheet" type="text/css">
<!-- Place this tag in your head or just before your close body tag -->
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
<link rel="canonical" href="http://www.grymoire.com/Unix/donate.html">
<link href="myCSS.css" rel="stylesheet" type="text/css">
........
........
```

Profiling in GAWK

Profiling of code is done for code optimization. In GAWK, we can profile a program by supplying the --profile option when running it. On execution, GAWK creates a file with the name awkprof.out. Since GAWK is performing profiling of the code, program execution is up to 45% slower than the speed at which GAWK normally executes. Let's understand profiling by looking at some examples.

In the following example, we create a program that has four functions: two arithmetic functions, one function that prints an array, and one function that calls all of them. Our program also contains two BEGIN and two END statements: first a BEGIN and END pair, then a pattern-action rule, and then the second BEGIN and END pair, as follows:

```
$ vi codeprof.awk

func z_array(){
    arr[30] = "volvo"
    arr[10] = "bmw"
    arr[20] = "audi"
    arr[50] = "toyota"
    arr["car"] = "ferrari"
    n = asort(arr)
    print "Array begins...!"
    print "====================="
    for ( v in arr )
        print v, arr[v]
    print "Array Ends...!"
    print "====================="
}

function mul(num1, num2){
    result = num1 * num2
    printf ("Multiplication of %d * %d : %d\n", num1, num2, result)
}

function all(){
    add(30,10)
    mul(5,6)
    z_array()
}

BEGIN {
    print "First BEGIN statement"
    print "====================="
}

END {
    print "First END statement "
    print "====================="
}

/maruti/{ print $0 }

BEGIN {
    print "Second BEGIN statement"
    print "====================="
    all()
}

END {
    print "Second END statement"
    print "====================="
}

function add(num1, num2){
    result = num1 + num2
    printf ("Addition of %d + %d : %d\n", num1, num2, result)
}

$ awk --profile -f codeprof.awk cars.dat
```

The output of the previous code is as follows:

```
First BEGIN statement
=====================
Second BEGIN statement
=====================
Addition of 30 + 10 : 40
Multiplication of 5 * 6 : 30
Array begins...!
=====================
1 audi
2 bmw
3 ferrari
4 toyota
5 volvo
Array Ends...!
=====================
maruti swift 2007 50000 5
maruti dezire 2009 3100 6
maruti swift 2009 4100 5
maruti esteem 1997 98000 1
First END statement
=====================
Second END statement
=====================
```

Execution of the previous program also creates a file with the name awkprof.out.
If we want to create this profile file with a custom name, we can specify the filename as an argument to the --profile option, as follows:

```
$ awk --profile=codeprof.prof -f codeprof.awk cars.dat
```

Now, upon execution of the preceding code, we get a new file with the name codeprof.prof. Let's try to understand the contents of the file codeprof.prof created by the profiler:

```
# gawk profile, created Fri Feb 9 11:01:41 2018

# BEGIN rule(s)

BEGIN {
  1  print "First BEGIN statement"
  1  print "====================="
}

BEGIN {
  1  print "Second BEGIN statement"
  1  print "====================="
  1  all()
}

# Rule(s)

 12  /maruti/ { # 4
  4   print $0
}

# END rule(s)

END {
  1  print "First END statement "
  1  print "====================="
}

END {
  1  print "Second END statement"
  1  print "====================="
}

# Functions, listed alphabetically

  1  function add(num1, num2)
     {
  1   result = num1 + num2
  1   printf "Addition of %d + %d : %d\n", num1, num2, result
     }

  1  function all()
     {
  1   add(30, 10)
  1   mul(5, 6)
  1   z_array()
     }

  1  function mul(num1, num2)
     {
  1   result = num1 * num2
  1   printf "Multiplication of %d * %d : %d\n", num1, num2, result
     }

  1  function z_array()
     {
  1   arr[30] = "volvo"
  1   arr[10] = "bmw"
  1   arr[20] = "audi"
  1   arr[50] = "toyota"
  1   arr["car"] = "ferrari"
  1   n = asort(arr)
  1   print "Array begins...!"
  1   print "====================="
  5   for (v in arr) {
  5    print v, arr[v]
     }
  1   print "Array Ends...!"
  1   print "====================="
     }
```

This profiling example illustrates the basic features of profiling in GAWK. They are as follows:

- Reading the file from top to bottom shows the order in which the various rules are executed. First, the BEGIN rules are listed, followed by the BEGINFILE rules, if any. Then pattern-action rules are listed. Thereafter, ENDFILE rules and END rules are printed. Finally, functions are listed in alphabetical order. Multiple BEGIN and END rules retain their places as separate identities; the same is also true for BEGINFILE and ENDFILE rules.
- Pattern-action rules have two counts. The first number, to the left of the rule, tells how many times the rule's pattern was tested against the input records. The second number, to the right of the rule's opening left brace, with a comment, shows how many times the rule's action was executed when the pattern evaluated to true. The difference between the two indicates how many times the pattern evaluated to false.
- If there is an if-else statement, the number shows how many times the condition was tested. At the right of the opening left brace of its body is a count showing how many times the condition was true. The count for the else statement tells how many times the test failed.
- The count at the beginning of a loop header (for or while loop) shows how many times the loop conditional expression was executed.
- In user-defined functions, the count before the function keyword tells how many times the function was called. The counts next to the statements in the body show how many times those statements were executed.
- The layout of each block uses C-style tabs for code alignment, and braces mark the opening and closing of a code block, similar to C style.
- Parentheses are used as per the precedence rules and the structure of the program, but only when needed. printf or print statement arguments are enclosed in parentheses only if the statement is followed by a redirection.
- GAWK also adds leading comments before rules, such as before BEGIN and END rules, BEGINFILE and ENDFILE rules, and pattern-action rules, and before functions. In this way, GAWK provides a standard representation in the profiled version of the program.

GAWK also accepts another option, --pretty-print. The following is an example of pretty-printing an AWK program:

```
$ awk --pretty-print -f codeprof.awk cars.dat
```

When GAWK is called with --pretty-print, the program generates awkprof.out, but this time without any execution counts in the output. Pretty-print output also preserves any original comments in the program, while the --profile option omits them. The file created on execution of the program with the --pretty-print option is as follows:

```
# gawk profile, created Fri Feb 9 11:04:19 2018

# BEGIN rule(s)

BEGIN {
  print "First BEGIN statement"
  print "====================="
}

BEGIN {
  print "Second BEGIN statement"
  print "====================="
  all()
}

# Rule(s)

/maruti/ {
  print $0
}

# END rule(s)

END {
  print "First END statement "
  print "====================="
}

END {
  print "Second END statement"
  print "====================="
}

# Functions, listed alphabetically

function add(num1, num2)
{
  result = num1 + num2
  printf "Addition of %d + %d : %d\n", num1, num2, result
}

function all()
{
  add(30, 10)
  mul(5, 6)
  z_array()
}

function mul(num1, num2)
{
  result = num1 * num2
  printf "Multiplication of %d * %d : %d\n", num1, num2, result
}

function z_array()
{
  arr[30] = "volvo"
  arr[10] = "bmw"
  arr[20] = "audi"
  arr[50] = "toyota"
  arr["car"] = "ferrari"
  n = asort(arr)
  print "Array begins...!"
  print "====================="
  for (v in arr) {
    print v, arr[v]
  }
  print "Array Ends...!"
  print "====================="
}
```

To summarize, we looked at the basics of network programming and GAWK's built-in profiling support. Do check out the book Learning AWK Programming to know more about the intricacies of AWK programming for text processing.

- 20 ways to describe programming in 5 words
- What is Mob Programming?


Implementing an AI in Unreal Engine 4 with AI Perception components [Tutorial]

Natasha Mathur
11 Sep 2018
8 min read
AI Perception is a system within Unreal Engine 4 that allows sources to register their senses to create stimuli, while listeners are periodically updated as sense stimuli are created within the system. This works wonders for creating a reusable system that can react to an array of customizable sensors. In this tutorial, we will explore the different components available within Unreal Engine 4 that enable artificial intelligence sensing within our games. We will do this by taking advantage of a system within Unreal Engine called AI Perception components. These components can be customized and even scripted to introduce new behavior by extending the current sensing interface. This tutorial is an excerpt taken from the book 'Unreal Engine 4 AI Programming Essentials' written by Peter L. Newton, Jie Feng. Let's now have a look at AI sensing.

Implementing AI sensing in Unreal Engine 4

Let's start by bringing up Unreal Engine 4 and opening our New Project window. Then, perform the following steps:

- First, name our new project AI Sense and hit Create Project.
- After it finishes loading, we want to start by creating a new AIController that will be responsible for sending our AI the appropriate instructions. Let's navigate to the Blueprint folder and create a new AIController class, naming it EnemyPatrol.
- Now, to assign EnemyPatrol, we need to place a pawn into the world and then assign the controller to it. After placing the pawn, click on the Details tab within the editor.
- Next, we want to search for AI Controller. By default, it is the parent class AI Controller, but we want this to be EnemyPatrol.
- Next, we will create a new PlayerController named PlayerSense. Then, we need to introduce the AI Perception component to those who we want to be seen by or to see. Let's open the PlayerSense controller first and then add the necessary components.

Building AI Perception components

There are two components currently available within the Unreal Engine framework. The first is the AI Perception component, which listens for perceived stimuli (sight, hearing, and so on). The other is the AIPerceptionStimuliSource component. It is used to easily register a pawn as a source of stimuli, allowing it to be detected by other AI Perception components. This comes in handy, particularly in our case. Now, follow these steps:

- With PlayerSense open, let's add a new component called AIPerceptionStimuliSource. Then, under the Details tab, let's select AutoRegister as Source.
- Next, we want to add new senses to create a source for. Looking at Register as Source for Senses, there is an AISense array. Populate this array with the AISense_Sight blueprint in order to be detected by sight by other AI Perception components. You will note that there are also other senses to choose from - for example, AISense_Hearing, AISense_Touch, and so on. The complete settings are shown in the following screenshot:

This was pretty straightforward considering our next process. This setup allows our player pawn to be detected by enemy AI whenever we get within their senses' configured range. Next, let's open our EnemyPatrol class and add the other AI Perception component to our AI. This component is called AIPerception and contains many other configurations, allowing you to customize and tailor the AI for different scenarios. Clicking on the AI Perception component, you will notice that under the AI section everything is grayed out. This is because we have configurations specific to each sense.
This also applies if you create your own AI Sense classes. Let's focus on two sections within this component: the first is the AI Perception settings, and the other is the event provided with this component. The AI Perception section should look similar to the same section on AIPerceptionStimuliSource. The differences are that you have to register your senses, and you can also specify a dominant sense. The dominant sense takes precedence over other senses determined at the same location. Let's look at the Senses configuration and add a new element. This will populate the array with a new sense configuration, which you can then modify. For now, let's select the AI Sight configuration and leave the default values as they are. In the game, we are able to visualize the configurations, allowing us to have more control over our senses. There is another configuration that allows you to specify affiliation, but at the time of writing, these options aren't available; when you click on Detection by Affiliation, you must select Detect Neutrals to detect any pawn with a sight sense source.

Next, we need to be able to notify our AI of a new target. We will do this by utilizing the event we saw as part of the AI Perception component. By navigating there, we can see an event called OnPerceptionUpdated. This event fires whenever there are changes in the sensory state, which makes the tracking of senses easy and straightforward. Let's move toward the OnPerceptionUpdated event and perform the following (a C++ equivalent of this whole setup is sketched after these steps):

- Click on OnPerceptionUpdated and create it within the EventGraph.
- Now, within the EventGraph, whenever this event is called, changes will have been made to the senses, and it will return the available sensed actors, as shown in the following screenshot:
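The chapter wires all of this up in Blueprint. For readers who prefer C++, the following is a rough sketch of the same setup against the stock UE4 perception API; the class name AEnemyPatrol, the handler name SensedActorsUpdated, and the constructor placement are our assumptions, not the book's code (the handler must also be declared as a UFUNCTION in the header for AddDynamic to bind):

```cpp
// A hedged C++ equivalent of the Blueprint AI Perception setup above.
#include "Perception/AIPerceptionComponent.h"
#include "Perception/AISenseConfig_Sight.h"

AEnemyPatrol::AEnemyPatrol()
{
    UAIPerceptionComponent* Perception =
        CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("AIPerception"));
    UAISenseConfig_Sight* SightConfig =
        CreateDefaultSubobject<UAISenseConfig_Sight>(TEXT("SightConfig"));

    // Mirror the Details-panel choices: keep the default sight ranges and
    // detect neutral actors, so any pawn with a sight stimuli source is seen.
    SightConfig->DetectionByAffiliation.bDetectNeutrals = true;

    Perception->ConfigureSense(*SightConfig);
    Perception->SetDominantSense(SightConfig->GetSenseImplementation());
    Perception->OnPerceptionUpdated.AddDynamic(
        this, &AEnemyPatrol::SensedActorsUpdated);
    SetPerceptionComponent(*Perception);
}

void AEnemyPatrol::SensedActorsUpdated(const TArray<AActor*>& UpdatedActors)
{
    // Fired whenever a sense state changes; UpdatedActors is the same list of
    // currently sensed actors that the Blueprint OnPerceptionUpdated event returns.
}
```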
Now that we understand how we will obtain our sensed actors, we should create a way for our pawn to maintain different states of being, similar to what we would do in a Behavior Tree. Let's first establish a home location for our pawn to return to when the player is no longer detected by the AI:

- In the same Blueprint folder, we will create a subclass of Target Point. Let's name this Waypoint and place it at an appropriate location within the world.
- Now, we need to open this Waypoint subclass and create additional variables to maintain traversable routes. We can do this by defining the next waypoint within a waypoint, allowing us to create what programmers call a linked list. This results in the AI being able to continuously move to the next available route after reaching the destination of its current route.
- With Waypoint open, add a new variable named NextWaypoint and make its type the same as that of the Waypoint class we created. Navigate back to our Content Browser.
- Now, within our EnemyPatrol AIController, let's focus on Event Begin Play in the EventGraph. We have to grab the reference to the waypoint we created earlier and store it within our AIController. So, let's create a new Waypoint variable and name it CurrentPoint.
- Now, on Event Begin Play, the first thing we need is the AIController, which is the self-reference for this EventGraph because we are in the AIController class. So, let's grab our self-reference and check whether it is valid. Safety first!
- Next, we will get our AIController from our self-reference. Then, again for safety, let's check whether our AIController is valid.
- Next, we want to create a Get All Actors Of Class node and set the Actor class to Waypoint.
- Now, we need to convert a few instructions into a macro, because we will use these instructions throughout the project. So, let's select the nodes shown as follows and hit Convert to Macro. Lastly, rename this macro getAIController. You can see the final nodes in the following screenshot:

Next, we want our AI to grab a random new route and set it as a new variable:

- Let's first get the length of the array of actors returned. Then, we want to subtract 1 from this length, which gives us the index range of our array.
- From there, we want to pull from Subtract and get Random Integer. Then, from our array, we want to get the Get node and pump our Random Integer node into the index to retrieve.
- Next, pull the returned variable from the Get node and promote it to a local variable. This will automatically create the type dragged from the pin, and we want to rename this CurrentPoint to make clear why this variable exists.
- Then, from our getAIController macro, we want to assign the ReceiveMoveCompleted event. This is done so that when our AI successfully moves to the next route, we can update the information and tell our AI to move to the next route.

We learned AI sensing in Unreal Engine 4 with the help of a system within Unreal Engine called AI Perception components. We also explored different components within that system. If you found this post useful, be sure to check out the book 'Unreal Engine 4 AI Programming Essentials' for more concepts on AI sensing in Unreal Engine.

- Development Tricks with Unreal Engine 4
- What's new in Unreal Engine 4.19?
- Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices


How to connect your Vim editor to IPython

Arnaldo Russo
19 Feb 2016
5 min read
In this post we will talk about how to send your code from Vi (Vi IMproved) to IPython, a.k.a. Jupyter. When you install IPython, it also creates a symlink to Jupyter, which is the "new" name for IPython. The name changed because Jupyter is compatible with many scientific computing languages beyond Python, so the name IPython is now a little bit deprecated in favor of Jupyter. IPython itself is focused on interactive Python, part of which is providing a Python kernel for Jupyter.

To write my Python code I have been using Vim, for its usability, for the ease of reproducing my customizations and installation procedures, and for its qualities as a remarkable IDE (see here a better description of this customization). If you don't know Vim (Vi IMproved), it is a lightweight and fast editor which can make your programming time much more efficient, with integrated shortcuts and many plugins that can be coupled to Vim, speeding up many functionalities. One of these plugins was written by Paul Ivanov and is named vim-ipython; it makes a conversation between IPython and Vim possible.

Installing

A simple apt-get install vim will not solve this issue, because vim needs to be compiled against your Python distribution or any other accessory language that you use (e.g. Ruby, Python, etc.). First of all, you need to download the vim source code in order to compile it against your Python distribution. I have been using Python 2.7 for scientific programming, but it is also possible to compile vim for other Python versions (e.g. 2.6 or 3.x). An important observation is that your vim must be compiled for your Python version, usually the version chosen when a virtualenv (virtual environment) is created. Most of the time different projects require different dependencies, so isolating environments is the best way to keep your executables apart from the system-wide ones. This tutorial instructs you how to use virtualenv. Inside your virtualenv, you should install IPython as follows:

```
$ pip install pyside
$ pip install "ipython[all]"
```

It is important to install pyside to make it possible to execute ipython qtconsole instead of plain ipython (or jupyter). After installing IPython inside your virtualenv, you can start the vim compilation. To avoid conflicts between dependencies, I use the method that removes all previous vim installations:

```
sudo apt-get remove vim-common vim-runtime
```

Install the dependencies required to compile vim:

```
sudo apt-get build-dep vim
```

Download the vim source code from the GitHub repository:

```
git clone https://github.com/vim/vim.git
```

And now just compile vim, including your preferences. If you want more customizations, you can also uncomment lines in the Makefile at vim/src before starting the steps below:

```
cd vim/src
./configure --enable-pythoninterp --with-features=huge --prefix=$HOME/opt/vim
make && make install
echo 'export PATH=$HOME/opt/vim/bin:$PATH' >> $HOME/.bashrc
```

The vim-ipython plugin

To install the vim-ipython plugin, you need to download the plugin and copy it to ~/.vim/ftplugin/python so it loads automatically. There are simpler ways to install and manage vim plugins, using vundle. With this system you just include the name of the GitHub repository inside your .vimrc file, and it also becomes possible to install other plugins the same way. You can read more about vundle usage and installation here.

Interacting with vim and IPython

After opening a terminal, you need to be working inside your virtualenv to let IPython recognize the plugins you have previously installed inside your environment:

```
ipython qtconsole
```

If you prefer to use IPython notebooks:

```
ipython notebook
```

Both initializations will let vim catch the running IPython kernel and interact with it. An interesting note about the second approach (running ipython notebook) is that you must open an existing IPython notebook file (.ipynb) or start a new one with the New icon at the upper right corner. If you use ipython qtconsole, it will display a single window outside your terminal. The second step is opening a .py file with your vim editor in a second tab of your terminal. When our .py file is opened, we can execute the vim command:

```
:IPython
```

And the vim-ipython plugin will recognize the existing IPython kernel. The next step is to send lines of your code to IPython, or entire blocks of code selected in visual mode, using the plugin's key mappings. To execute the entire file, use the plugin's run-file mapping (this is similar to using the magic %run inside IPython). After sending lines to IPython, your ipython qtconsole will remain silent, and your vim window will split vertically, showing the lines executed. You can also get back object introspection and word completion in ipython qtconsole, like what you get with object? and object. in IPython. When you insert new variables in the IPython notebook, they will be displayed inside your vim split window with the message: "vim-ipython shell updated on (idle)". More specific usage of this amazing tool, including customizations, can be read at the vim-ipython GitHub repository.

Have a nice time coding with your vim coupled to IPython. Now that your vim and IPython are successfully coupled, strike while the iron is hot and discover how to do more with Jupyter Notebook in this brilliant article.

About the author

Arnaldo D'Amaral Pereira Granja Russo has a PhD in Biological Oceanography and is a researcher at the Instituto Ambiental Boto Flipper. While not teaching or programming he is surfing, climbing, or paragliding. You can find him at www.ciclotux.org or on his GitHub page.

Migrating from Version 3

Packt
11 Aug 2016
11 min read
In this article by Matt Lambert, the author of the book Learning Bootstrap 4, has covered how to migrate your Bootstrap 3 project to Version 4 of Bootstrap is a major update. Almost the entire framework has been rewritten to improve code quality, add new components, simplify complex components, and make the tool easier to use overall. We've seen the introduction of new components like Cards and the removal of a number of basic components that weren't heavily used. In some cases, Cards present a better way of assembling a layout than a number of the removed components. Let's jump into this article by showing some specific class and behavioral changes to Bootstrap in version 4. (For more resources related to this topic, see here.) Browser support Before we jump into the component details, let's review the new browser support. If you are currently running on version 3 and support some older browsers, you may need to adjust your support level when migrating to Bootstrap 4. For desktop browsers, Internet Explorer version 8 support has been dropped. The new minimum Internet Explorer version that is supported is version 9. Switching to mobile, iOS version 6 support has been dropped. The minimum iOS supported is now version 7. The Bootstrap team has also added support for Android v5.0 Lollipop's browser and WebView. Earlier versions of the Android Browser and WebView are not officially supported by Bootstrap. Big changes in version 4 Let's continue by going over the biggest changes to the Bootstrap framework in version 4. Switching to Sass Perhaps the biggest change in Bootstrap 4 is the switch from Less to Sass. This will also likely be the biggest migration job you will need to take care of. The good news is you can use the sample code we've created in the book as a starting place. Luckily, the syntax for the two CSS pre-processors is not that different. If you haven't used Sass before, there isn't a huge learning curve that you need to worry about. Let's cover some of the key things you'll need to know when updating your stylesheets for Sass. Updating your variables The main difference in variables is the symbol used to denote one. In Less we use an @ symbol for our variables, while in Sass you use the $ symbol. Here are a couple of examples for you: /* LESS */ @red: #c00; @black: #000; @white: #fff; /* SASS */ $red: #c00; $black: #000; $white: #fff; As you can see, that is pretty easy to do. A simple find and replace should do most of the work for you. However, if you are using @import in your stylesheets, make sure there remains an @ symbol. Updating @import statements Another small change in Sass is how you import different stylesheets using the @import keyword. First, let's take a look at how you do this in Less: @import "components/_buttons.less"; Now let's compare how we do this using Sass: @import "components/_buttons.scss"; As you can see, it's almost identical. You just need to make sure you name all your files with the .scss extension. Then update your file names in the @import to use .scss and not .less. Updating mixins One of the biggest differences between Less and Sass are mixins. Here we'll need to do a little more heavy lifting when we update the code to work for Sass. First, let's take a look at how we would create a border-radius, or round corners, mixin in Less: .border-radius (@radius: 2px) { -moz-border-radius: @radius; -ms-border-radius: @radius; border-radius: @radius; } In Less, all elements that use the border-radius mixin will have a border radius of 2px. 
The mixin is added to a component like this:

button {
  .border-radius;
}

Now let's compare how you would do the same thing using Sass. Check out the mixin code:

@mixin border-radius($radius) {
  -webkit-border-radius: $radius;
  -moz-border-radius: $radius;
  -ms-border-radius: $radius;
  border-radius: $radius;
}

There are a few differences here that you need to note:

You need to use the @mixin keyword to initialize any mixin
We don't actually define a default value to use with the mixin

To use the mixin with a component, you would code it like this:

button {
  @include border-radius(2px);
}

This is also different from Less in a few ways:

First, you need to insert the @include keyword to call the mixin
Next, you use the mixin name you defined earlier, in this case, border-radius
Finally, you need to set the value of the border-radius for each element, in this case, 2px

Personally, I prefer the Less method, as you can set the value once and then forget about it. However, since Bootstrap has moved to Sass, we have to learn and use the new syntax. That concludes the main differences that you will likely encounter. There are other differences, and if you would like to research them more, I would check out this page: http://sass-lang.com/guide.
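One aside that the article doesn't mention: Sass mixins can also declare default argument values, which restores the set-it-once convenience of the Less version. A minimal sketch:

// A Sass mixin with a default radius of 2px
@mixin border-radius($radius: 2px) {
  -webkit-border-radius: $radius;
  -moz-border-radius: $radius;
  -ms-border-radius: $radius;
  border-radius: $radius;
}

button {
  @include border-radius; // uses the 2px default
}

With the default in place, callers can simply write @include border-radius; and get the 2px radius, just as they did in Less.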
Additional global changes

The change to Sass is one of the bigger global differences in version 4 of Bootstrap. Let's take a look at a few others you should be aware of.

Using REM units

In Bootstrap 4, px has been replaced with rem as the primary unit of measure. If you are unfamiliar with rem, it stands for root em. Rem is a relative unit of measure, whereas pixels are fixed. Rem looks at the value for font-size on the root element in your stylesheet and uses your value declaration, in rems, to determine the computed pixel value. Let's use an example to make this easier to understand:

html {
  font-size: 24px;
}

p {
  font-size: 2rem;
}

In this case, the computed font-size for the <p> tag would be 48px. This is different from the em unit, because ems are affected by wrapping elements that may have a different size, whereas rem takes a simpler approach and just calculates everything from the root HTML element. It removes the size cascading that can occur when using ems and nested, complicated elements. This may sound confusing, but rem units are actually easier to use than ems: just remember your root font-size and use that when figuring out your rem values. What this means for migration is that you will need to go through your stylesheet and change any px or em values to rems. You'll need to recalculate everything to make sure it fits the new format if you want to maintain the same look and feel for your project.

Other font updates

The trend for a long while has been to make text on a screen larger and easier to read for all users. In the past, we used tons of small typefaces that might have looked cool but were hard to read for anyone visually challenged. To that end, the base font-size for Bootstrap has been changed from 14px to 16px. This is also the standard size for most browsers, and it makes text more readable. Again, from a migration standpoint, you'll need to review your components to ensure they still look correct with the increased font size. You may need to make some changes if you have components that were based on the 14px default font-size in Bootstrap 3.

New grid size

With the increased use of mobile devices, Bootstrap 4 includes a new, smaller grid tier for small-screen devices. The new grid tier is called extra small and is configured for devices under 480px in width. For the migration story this shouldn't have a big effect; what it does give you is a new breakpoint if you want to further optimize your project for smaller screens. That concludes the main global changes to Bootstrap that you should be aware of when migrating your projects. Next, let's take a look at components.

Migrating components

With the release of Bootstrap 4, a few components have been dropped and a couple of new ones have been added. The most significant change is around the new Cards component. Let's start by breaking down this new option.

Migrating to the Cards component

With the release of the Cards component, the Panels, Thumbnails, and Wells components have been removed from Bootstrap 4. Cards combine the best of these elements into one and even add some new functionality that is really useful. If you are migrating from a Bootstrap 3 project, you'll need to update any Panels, Thumbnails, or Wells to use the Cards component instead. Since the markup is a bit different, I would recommend removing the old components altogether and then recoding them, using the same content, as Cards.

Using icon fonts

The Glyphicons icon font has been removed from Bootstrap 4. I'm guessing this is due to licensing reasons, as the library was not fully open source. If you don't want to update your icon code, simply download the library from the Glyphicons website at: http://glyphicons.com/. The other option would be to change the icon library to a different one, like Font Awesome. If you go down this route, you'll need to update all of your <i> tags to use the proper CSS class to render each icon. There is a quick reference tool that will allow you to do this called GlyphSearch. This tool supports a number of icon libraries and I use it all the time. Check it out at: http://glyphsearch.com/. Those are the key components you need to be aware of. Next, let's go over what's different in JavaScript.

Migrating JavaScript

The JavaScript components have been totally rewritten in Bootstrap 4. Everything is now coded in ES6 and compiled with Babel, which makes it easier and faster to use. On the component side, the biggest difference is the Tooltips component. The Tooltip is now dependent on an external library called Tether, which you can download from: http://github.hubspot.com/tether/. If you are using Tooltips, make sure you include this library in your template. The actual markup for calling a Tooltip looks to be the same, but you must include the new library when migrating from version 3 to 4.

Miscellaneous migration changes

Aside from what I've gone over already, there are a number of other changes you need to be aware of when migrating to Bootstrap 4. Let's go through them all below.

Migrating typography

The .page-header class has been dropped from version 4. Instead, you should look at using the new display CSS classes on your headers if you want to give them a heading look and feel.

Migrating images

If you've ever used responsive images in the past, the class name has changed. Previously, the class name was .img-responsive, but it is now named .img-fluid. You'll need to update that class anywhere it is used.

Migrating tables

For the table component, a few class names have changed and there are some new classes you can use. If you would like to create a responsive table, you can now simply add the class .table-responsive to the <table> tag itself; previously, you had to wrap an element with that class around the <table> tag.
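To illustrate that last difference, here is a minimal markup sketch (mine, not the author's; exact class behavior varied across the v4 alpha releases):

<!-- Bootstrap 3: responsive table via a wrapper element -->
<div class="table-responsive">
  <table class="table">...</table>
</div>

<!-- Bootstrap 4: the class moves onto the table itself -->
<table class="table table-responsive">...</table>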
If migrating, you'll need to update your HTML markup to the new format. The .table-condensed class has been renamed to .table-sm; you'll need to update that class anywhere it is used. There are also a couple of new table styles you can add, called .table-inverse and .table-reflow.

Migrating forms

Forms are always a complicated component to code. In Bootstrap 4, some of the class names have changed to be more consistent. Here's a list of the differences you need to know about:

.control-label is now .form-control-label
.input-lg and .input-sm are now .form-control-lg and .form-control-sm
The .form-group class has been dropped, and you should instead use .form-control

You likely have these classes throughout most of your forms, so you'll need to update them anywhere they are used.

Migrating buttons

There are some minor CSS class name changes that you need to be aware of:

.btn-default is now .btn-secondary
The .btn-xs class has been dropped from Bootstrap 4

Again, you'll need to update these classes when migrating to the new version of Bootstrap. There are some other minor changes to components that aren't as commonly used, but I'm confident this explanation covers the majority of use cases for Bootstrap 4. If you would like to see the full list of changes, please visit: http://v4-alpha.getbootstrap.com/migration/.

Summary

That brings this article to a close! I hope it helps you migrate your Bootstrap 3 project to Bootstrap 4.

Resources for Article:

Further resources on this subject:

Responsive Visualizations Using D3.js and Bootstrap [article]
Advanced Bootstrap Development Tools [article]
Deep Customization of Bootstrap [article]
Implementing hashing algorithms in Golang [Tutorial]

Sugandha Lahoti
27 May 2019
4 min read
A hashing algorithm is a cryptographic hash technique: a mathematical function that maps data of arbitrary size to a hash of fixed size. It is intended to be a one-way function, meaning the original data cannot be recovered from the hash. This article covers hash functions and the implementation of a hashing algorithm in Golang. This article is taken from the book Learn Data Structures and Algorithms with Golang by Bhagvan Kommadi. Complete with hands-on tutorials, this book will guide you in using the best data structures and algorithms for problem-solving in Golang. Install Go version 1.10 for your OS. The GitHub URL for the code in this article is available here.

The hash functions

Hash functions are used in cryptography and other areas. These data structures are presented with code examples related to cryptography. There are two common ways to implement a hash function in Go: with crc32 or sha256. Marshaling (converting the hash's internal state to an encoded form) saves that state so it can be reused later. A BinaryMarshaler (converting the state into binary form) example is explained in this section:

//main package has examples shown
// in Hands-On Data Structures and algorithms with Go book
package main

// importing bytes, crypto/sha256, encoding, fmt, hash, and log packages
import (
    "bytes"
    "crypto/sha256"
    "encoding"
    "fmt"
    "hash"
    "log"
)

The main method creates a binary-marshaled hash of two example strings. The hashes of the two strings are printed, and the sum of the first hash is compared with that of the second using bytes.Equal. This is shown in the following code:

//main method
func main() {
    const (
        example1 = "this is a example "
        example2 = "second example"
    )

    var firstHash hash.Hash
    firstHash = sha256.New()
    firstHash.Write([]byte(example1))

    var marshaler encoding.BinaryMarshaler
    var ok bool
    marshaler, ok = firstHash.(encoding.BinaryMarshaler)
    if !ok {
        log.Fatal("first Hash is not generated by encoding.BinaryMarshaler")
    }
    var data []byte
    var err error
    data, err = marshaler.MarshalBinary()
    if err != nil {
        log.Fatal("failure to create first Hash:", err)
    }

    var secondHash hash.Hash
    secondHash = sha256.New()
    var unmarshaler encoding.BinaryUnmarshaler
    unmarshaler, ok = secondHash.(encoding.BinaryUnmarshaler)
    if !ok {
        log.Fatal("second Hash is not generated by encoding.BinaryUnmarshaler")
    }
    if err := unmarshaler.UnmarshalBinary(data); err != nil {
        log.Fatal("failure to create hash:", err)
    }

    firstHash.Write([]byte(example2))
    secondHash.Write([]byte(example2))
    fmt.Printf("%x\n", firstHash.Sum(nil))
    fmt.Println(bytes.Equal(firstHash.Sum(nil), secondHash.Sum(nil)))
}

Run the following command to execute the hash.go file:

go run hash.go

The output is the hex-encoded hash followed by true, confirming that the restored hash produced the same digest as the original.

Hash implementation in Go

Go ships with crc32 and sha256 implementations, among others. The following code snippets implement a hashing scheme that combines multiple values using an XOR transformation. The CreateHash function takes a byte array, byteStr, as a parameter and returns the SHA-1 checksum of the byte array:

//main package has examples shown
// in Go Data Structures and algorithms book
package main

// importing crypto/sha1, fmt and hash packages
import (
    "crypto/sha1"
    "fmt"
    "hash"
)

//CreateHash method
func CreateHash(byteStr []byte) []byte {
    var hashVal hash.Hash
    hashVal = sha1.New()
    hashVal.Write(byteStr)

    var bytes []byte
    bytes = hashVal.Sum(nil)
    return bytes
}

In the following sections, we will discuss the different methods of this hashing scheme. First, though, a quick standalone check of what CreateHash produces.
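This standalone sketch is mine rather than the book's; it uses the one-shot sha1.Sum helper from crypto/sha1, which computes the same 20-byte digest that CreateHash returns:

package main

import (
    "crypto/sha1"
    "fmt"
)

func main() {
    // sha1.Sum computes the digest in one call; CreateHash above does the
    // same thing via sha1.New / Write / Sum
    digest := sha1.Sum([]byte("Check"))
    fmt.Printf("%x\n", digest) // prints 40 hex characters (20 bytes)
}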
The CreateHashMultiple method

The CreateHashMultiple method takes the byteStr1 and byteStr2 byte arrays as parameters and returns the XOR-transformed bytes value, as follows:

// Create hash for Multiple Values method
func CreateHashMultiple(byteStr1 []byte, byteStr2 []byte) []byte {
    return xor(CreateHash(byteStr1), CreateHash(byteStr2))
}

The XOR method

The xor method takes the byteStr1 and byteStr2 byte arrays as parameters and returns the result of XORing them together, byte by byte.
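The body of the xor method is missing from this excerpt. A minimal reconstruction consistent with the description might look like the following; it assumes both inputs have the same length, which holds here because both are 20-byte SHA-1 digests:

// xor returns the element-wise XOR of two equal-length byte slices
// (reconstructed from the surrounding description, not the book's listing)
func xor(byteStr1 []byte, byteStr2 []byte) []byte {
    result := make([]byte, len(byteStr1))
    for i := range byteStr1 {
        result[i] = byteStr1[i] ^ byteStr2[i]
    }
    return result
}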
The main method

The main method invokes the CreateHashMultiple method, passing Check and Hash as string parameters, and prints the hash value of the strings, as follows:

// main method
func main() {
    var bytes []byte
    bytes = CreateHashMultiple([]byte("Check"), []byte("Hash"))
    fmt.Printf("%x\n", bytes)
}

Run the following command to execute the hash.go file:

go run hash.go

The output is the hex-encoded XOR combination of the two SHA-1 hashes. In this article, we discussed hashing algorithms in Golang alongside code examples. To know more about network representation using graphs and sparse matrix representation using a list of lists in Go, read our book Learn Data Structures and Algorithms with Golang.

Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work.
State of Go February 2019 - Golang developments report for this month released
The Golang team has started working on Go 2 proposals

Storing Passwords Using FreeRADIUS Authentication

Packt
08 Sep 2011
6 min read
In the previous article, we covered the authentication methods used when working with FreeRADIUS. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches methods for storing passwords and explains how they work. Passwords do not need to be stored in clear text; it is better to store them in a hashed format. There are, however, limitations to the kinds of authentication protocols that can be used when the passwords are stored as a hash, which we will explore in this article. (For more resources on this subject, see here.)

Storing passwords

Username and password combinations have to be stored somewhere. The following list mentions some of the popular places:

Text files: You should be familiar with this method by now.
SQL databases: FreeRADIUS includes modules to interact with SQL databases. MySQL is very popular and widely used with FreeRADIUS.
Directories: Microsoft's Active Directory or Novell's e-Directory are typical enterprise-size directories. OpenLDAP is a popular open source alternative.

The users file and the SQL database that can be used by FreeRADIUS store the username and password as AVPs. When the value of this AVP is in clear text, it can be dangerous if the wrong person gets hold of it. Let's see how this risk can be minimized.

Hash formats

To reduce this risk, we can store the passwords in a hashed format. A hashed format of a password is like a digital fingerprint of that password's text value. There are many different ways to calculate this hash, for example MD5 or SHA1. The end result of a hash should be a one-way, fixed-length encrypted string that uniquely represents the password. It should be impossible to retrieve the original password from the hash. To make the hash even more secure and more immune to dictionary attacks, we can add a salt to the function that generates the hash. A salt is a set of randomly generated bits used in combination with the password as input to the one-way hash function. With FreeRADIUS we store the salt along with the hash, so it is essential to use a random salt with each hash to make a rainbow table attack difficult. The pap module, which is used for PAP authentication, can use passwords stored in a number of hash formats to authenticate users; the formats demonstrated below are crypt, MD5, and salted MD5 (SMD5). Both the MD5 and SHA1 hash functions can be used with a salt to make them more secure.

Time for action – hashing our password

We will replace the Cleartext-Password AVP in the users file with a more secure hashed password AVP in this section. There seems to be general confusion about how the hashed password should be created and presented; we will help you clarify this issue in order to produce working hashes for each format. A valuable URL to assist us with the hashes is the OpenLDAP FAQ: http://www.openldap.org/faq/data/cache/419.html. There are a few sections that show how to create different types of password hashes, and we can adapt them for our own use in FreeRADIUS.

Crypt-Password

Crypt password hashes have their origins in Unix computing. Stronger hashing methods are preferred over crypt, although crypt is still widely used. The following Perl one-liner will produce a crypt password for passme with the salt value of salt:

#> perl -e 'print(crypt("passme","salt")."\n");'

Use this output and change Alice's check entry in the users file from:

"alice" Cleartext-Password := "passme"

to:

"alice" Crypt-Password := "sa85/iGj2UWlA"
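As an aside that is not part of the original text: OpenSSL releases of this article's era also shipped a passwd -crypt option, which should produce the same hash as the Perl one-liner (note that recent OpenSSL versions have dropped classic crypt support):

#> openssl passwd -crypt -salt salt passme
sa85/iGj2UWlA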
Restart the FreeRADIUS server in debug mode and run the authentication request against it again. Ensure that pap now uses the crypt password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using CRYPT password "sa85/iGj2UWlA"

MD5-Password

The MD5 hash is often used to check the integrity of a file. When downloading a Linux ISO image, you are typically also supplied with the MD5 sum of the file, and you can confirm the integrity of the file using the md5sum command. We can also generate an MD5 hash from a password. We will use Perl to generate and encode the MD5 hash in the correct format required by the pap module. The creation of this password hash involves external Perl modules, which you may have to install before the script can be used. The following steps show you how:

Create a Perl script with the following contents; we'll name it 4088_04_md5.pl:

#!/usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;

unless ($ARGV[0]) {
    print "Please supply a password to create a MD5 hash from.\n";
    exit;
}

my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
print encode_base64($ctx->digest, '') . "\n";

Make the 4088_04_md5.pl file executable:

chmod 755 4088_04_md5.pl

Get the MD5 password for passme:

./4088_04_md5.pl passme

Use this output and update Alice's entry in the users file to:

"alice" MD5-Password := "ugGBYPwm4MwukpuOBx8FLQ=="

Restart the FreeRADIUS server in debug mode. Run the authentication request against it again. Ensure that pap now uses the MD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using MD5 encryption.

SMD5-Password

This is an MD5 password with salt. The creation of this password hash also involves external Perl modules, which you may have to install before the script can be used.

Create a Perl script with the following contents; we'll name it 4088_04_smd5.pl:

#!/usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;

unless (($ARGV[0]) && ($ARGV[1])) {
    print "Please supply a password and salt to create a salted MD5 hash from.\n";
    exit;
}

my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
my $salt = $ARGV[1];
$ctx->add($salt);
print encode_base64($ctx->digest . $salt, '') . "\n";

Make the 4088_04_smd5.pl file executable:

chmod 755 4088_04_smd5.pl

Get the SMD5 value for passme using a salt value of salt:

./4088_04_smd5.pl passme salt

Remember that you should use a random value for the salt; we only used salt here for the demonstration. Use this output and update Alice's entry in the users file to:

"alice" SMD5-Password := "Vr6uPTrGykq4yKig67v5kHNhbHQ="

Restart the FreeRADIUS server in debug mode. Run the authentication request against it again. Ensure that pap now uses the SMD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using SMD5 encryption.
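To close the loop, here is a small companion script of my own (not from the book) that verifies a password against a stored SMD5 value. It assumes the layout produced by 4088_04_smd5.pl: the 16-byte MD5 digest first, followed by the salt:

#!/usr/bin/perl -w
# verify_smd5.pl - check a password against a stored SMD5 hash
use strict;
use Digest::MD5;
use MIME::Base64;

my ($password, $stored) = @ARGV;
my $decoded = decode_base64($stored);
my $salt = substr($decoded, 16);   # everything after the 16-byte digest is salt

my $ctx = Digest::MD5->new;
$ctx->add($password);
$ctx->add($salt);
print((($ctx->digest . $salt) eq $decoded) ? "match\n" : "no match\n");

Running ./verify_smd5.pl passme Vr6uPTrGykq4yKig67v5kHNhbHQ= should print match.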
Getting started with Google Data Studio: An intuitive tool for visualizing BigQuery Data

Sugandha Lahoti
16 May 2018
8 min read
Google Data Studio is one of the most popular tools for visualizing data. It can pull data directly out of Google's suite of marketing tools, including Google Analytics, Google AdWords, and Google Search Console, and it also supports connectors for database tools such as PostgreSQL and BigQuery. It can be accessed at datastudio.google.com. In this article, we will learn to visualize BigQuery data with Google Data Studio. [box type="note" align="" class="" width=""]This article is an excerpt from the book, Learning Google BigQuery, written by Thirukkumaran Haridass and Eric Brown. This book will serve as a comprehensive guide to mastering BigQuery, and utilizing it to get useful insights from your Big Data.[/box] The following steps explain how to get started in Google Data Studio and access BigQuery data from Data Studio:

Setting up an account: Account setup is extremely easy for Data Studio. Any user with a Google account is eligible to use all Data Studio features for free.

Accessing BigQuery data: Once logged in, the next step is to connect to BigQuery. This can be done by clicking on the DATA SOURCES button on the left-hand-side navigation. You'll be prompted to create a data source by clicking on the large plus sign to the bottom right of the screen. On the right-hand-side navigation, you'll get a list of all of the connectors available to you. Select BigQuery. At this point, you'll be prompted to select from your projects, shared projects, a custom query, or public datasets. Since you are querying the Google Analytics BigQuery Export test data, select Custom Query. Select the project you would like to use, then, in the Enter Custom Query prompt, add this query and click on the Connect button on the top right:

SELECT
  trafficsource.medium AS Medium,
  COUNT(visitId) AS Visits
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
GROUP BY Medium

This query pulls the count of sessions by traffic-source medium for the Google Analytics account that has been exported. The next screen shows the schema of the data source you have created. Here, you can make changes to each field of your data, such as changing text fields to date fields or creating calculated metrics. Click on Create Report, then click on Add to Report. At this point, you will land on your report dashboard, where you can begin to create charts using the data you've just pulled from BigQuery. Icons for all the chart types available are shown near the top of the page. Hover over the chart types and click on the chart labeled Bar Chart; then, in the grid, hold your right-click button to draw a rectangle. A bar chart should appear, with the traffic-source Medium and Visits data from the query you ran. A properties prompt should also show on the right-hand side of the page, where a number of properties can be selected for your chart, including the dimension, metric, and many style settings. Once you've completed your first chart, more charts can be added to a single page to show other metrics if needed. For many situations, a single bar graph will answer the question at hand. Some situations may require more exploration; for example, an analyst might want to know whether the visits metric influences other metrics, such as the number of transactions. A scatterplot with visits on the x axis and transactions on the y axis can be used to easily visualize this relationship.
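Before moving on, one aside (my own sketch, not from the book): ratios like this can also be computed directly in the custom query rather than as a Data Studio calculated field. Against the same test dataset, something like the following should work:

SELECT
  trafficsource.medium AS Medium,
  COUNT(visitId) AS Visits,
  SUM(totals.transactions) AS Transactions,
  SUM(totals.transactions) / COUNT(visitId) AS ConversionRate
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
GROUP BY Medium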
Making a scatterplot in Data Studio

The following steps show how to make a scatterplot in Data Studio with the data from BigQuery:

1. Update the original query by adding the transactions metric. In the edit screen of your report, click on the bar chart to bring up the chart options on the right-hand-side navigation.
2. Click on the pencil icon next to the data source titled BigQuery to edit the data source. Click on the left-hand-side arrow icon titled Edit Connection.
3. In the dialog titled Enter Custom Query, add this query:

SELECT
  trafficsource.medium AS Medium,
  COUNT(visitId) AS Visits,
  SUM(totals.transactions) AS Transactions
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
GROUP BY Medium

4. Click on the button titled Reconnect in order to reprocess the query. A prompt should emerge, asking whether you'd like to add a new field titled Transactions. Click on Apply, then click on Done.
5. Once you return to the report edit screen, click on the Scatter Chart button and use your mouse to draw a square in the report space. The report should autoselect the two metrics you've created.
6. Click on the chart to bring up the chart edit screen on the right-hand-side navigation, then click on the Style tab. Click on the dropdown under the Trendline option and select Linear to add a linear trend line, also known as a linear regression line. The graph will default to blue, so use the pencil icon on the right to select red as the line color.

Making a map in Data Studio

Data Studio includes a map chart type that can be used to create simple maps. In order to create maps, a map dimension will need to be included in your data, along with a metric. Here, we will use the Google BigQuery public dataset for Medicare data, so you'll need to create a new data source.

Accessing BigQuery data: Once logged in, connect to BigQuery by clicking on the DATA SOURCES button on the left-hand-side navigation. You'll be prompted to create a data source by clicking on the large plus sign to the bottom right of the screen. On the right-hand-side navigation, you'll get a list of all of the connectors available to you. Select BigQuery. At this point, you'll be prompted to select from your projects, shared projects, a custom query, or public datasets. Since you are writing your own query against a public dataset this time, select Custom Query. Select the project you would like to use. In the Enter Custom Query prompt, add this query and click on the Connect button on the top right:

SELECT
  CONCAT(provider_city, ", ", provider_state) city,
  AVG(average_estimated_submitted_charges) avg_sub_charges
FROM `bigquery-public-data.medicare.outpatient_charges_2014`
WHERE apc = '0267 - Level III Diagnostic and Screening Ultrasound'
GROUP BY 1
ORDER BY 2 DESC

This query pulls the average of submitted charges for diagnostic ultrasounds by city in the United States; this is the most submitted charge in the 2014 Medicare data. The next screen shows the schema of the data source you have created. Here, you can make changes to each field of your data, such as changing text fields to date fields or creating calculated metrics. Click on Create Report, then click on Add to Report. At this point, you will land on your report dashboard, where you can begin to create charts using the data you've just pulled from BigQuery. Icons for all the chart types available are shown near the top of the page.
Hover over the chart types and click on the chart labeled Map Chart; then, in the grid, hold your right-click button to draw a rectangle. Click on the chart to bring up the Dimension Picker on the right-hand-side navigation, and click on Create New Dimension. Right-click on the city dimension and select the Geo type and the City subtype. Here, we can also choose other subtypes (Latitude, Longitude, Metro, Country, and so on). Data Studio will plot the top 500 rows of data (in this case, the top 500 cities in the result set). Hovering over each city brings up detailed data.

Data Studio can also be used to roll up geographic data; in this case, we'll roll the city data up to state data. From the edit screen, click on the map to bring up the Dimension Picker and click on Create New Dimension in the right-hand-side navigation. Right-click on the city dimension and select the Geo type and the Region subtype. Google uses the term Region to signify states. Once completed, the map will be rolled up to the state level instead of the city level. This functionality is very handy when data has not been rolled up prior to being inserted into BigQuery (a query-side alternative is sketched below).
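If you would rather do that rollup in BigQuery itself, a query along the following lines should work (my sketch, not from the book, relying on the provider_state column already used above):

SELECT
  provider_state AS state,
  AVG(average_estimated_submitted_charges) AS avg_sub_charges
FROM `bigquery-public-data.medicare.outpatient_charges_2014`
WHERE apc = '0267 - Level III Diagnostic and Screening Ultrasound'
GROUP BY 1
ORDER BY 2 DESC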
Other features of Data Studio

Filtering: Filters can be added to your visualizations based on dimensions or metrics, as long as the data is available in the data source.
Data joins: Data from multiple sources can be joined to create new, calculated metrics.
Turnkey integrations with many Google Marketing Suite tools, such as AdWords and Search Console.

We explored various features of Google Data Studio and learned to use them for visualizing BigQuery data. To know about other third-party tools for reporting and visualization, such as R and Tableau, check out the book Learning Google BigQuery.

Getting Started with Data Storytelling
What is Seaborn and why should you use it for data visualization?
Pandas is an effective tool to explore and analyze data - Interview Insights

Creators of Python, Java, C#, and Perl discuss the evolution and future of programming language design at PuPPy

Bhagyashree R
08 Apr 2019
11 min read
At the first annual charity event conducted by Puget Sound Programming Python (PuPPy) last Tuesday, four legendary language creators came together to discuss the past and future of language design. The event was organized to raise funds for Computer Science for All (CSforALL), an organization which aims to make CS an integral part of the educational experience. Among the panelists were the creators of some of the most popular programming languages:

Guido van Rossum, the creator of Python
James Gosling, the founder and lead designer of the Java programming language
Anders Hejlsberg, the original author of Turbo Pascal, who has also worked on the development of C# and TypeScript
Larry Wall, the creator of Perl

The discussion was moderated by Carol Willing, who is currently a Steering Council member and developer for Project Jupyter. She is also a member of the inaugural Python Steering Council, a Python Software Foundation Fellow, and a former Director.

Key principles of language design

The first question thrown at the panelists was, "What are the principles of language design?" Guido van Rossum believes:

[box type="shadow" align="" class="" width=""]Designing a programming language is very similar to the way JK Rowling writes her books, the Harry Potter series.[/box]

When asked how, he explained that JK Rowling is a genius in the way that some details she mentioned in her first Harry Potter book ended up playing important plot points in parts six and seven. Explaining how this relates to language design, he adds, "In language design often that's exactly how things go." When designing a language, we start by committing to certain details, like the keywords we want to use or the style of coding we want to follow. Whatever we decide on, we are stuck with it, and in the future we need to find new ways to use those details, just like Rowling. "The craft of designing a language is, on one hand, picking your initial set of choices so that there are a lot of possible continuations of the story. The other half of the art of language design is going back to your story and inventing creative ways of continuing it in a way that you had not thought of," he adds.

When James Gosling was asked how Java came into existence and what design principles he abided by, he simply said, "it didn't come out of like a personal passion project or something. It was actually from trying to build a prototype." Gosling and his team were working on a project that involved understanding the domain of embedded systems. For this, they spoke to a lot of developers who built software for embedded systems to learn how their processes worked. The project had about a dozen people on it, and Gosling was responsible for making things much easier from a programming language point of view. "It started out as kind of doing better C and then it got out of control, and the rest of the project really ended up just providing the context," he adds. In the end, the only thing that survived from that project was "Java". It was basically designed to solve the problems of people living outside of data centers, people getting shredded by problems with networking, security, and reliability.

Larry Wall calls himself a "linguist" rather than a computer scientist. He wanted to create a language that was more like a natural language.
Explaining through an example, he said, "Instead of putting people in a university campus and deciding where they go, we're just gonna see where people want to walk and then put shortcuts in all those places." A basic principle behind creating Perl was to provide APIs for everything: it was meant to be both a good text-processing language, linguistically speaking, and a good glue language. Wall further shares that in the 90s the language was stabilizing, but it did have some issues. So, in the year 2000, the Perl team basically decided to break everything and come up with a whole new set of design principles, and based on these principles Perl was redesigned into Perl 6. Some of these principles were: pick the right defaults; conserve your brackets, because even Unicode does not have enough brackets; and don't reinvent object orientation poorly. He adds:

[box type="shadow" align="" class="" width=""]"A great deal of the redesign was to say okay what is the right peg to hang everything on? Is it object-oriented? Is it something in the lexical scope or in the larger scope? What is the right peg to hang each piece of information on, and if we don't have that peg how do we create it?"[/box]

Anders Hejlsberg shares that he follows a common principle in all the languages he has worked on, and that is: "there's only one way to do a particular thing." He believes that if a developer is provided with four different ways, they may end up choosing the wrong one and realizing it only later in development. According to Hejlsberg, this is why developers often end up creating something called "simplexity", which means taking something complex and wrapping a simple wrapper on top of it so that the complexity appears to go away. Similar to the views of Guido van Rossum, he further adds that any decision you make when designing a language is one you have to live with. When designing a language, you need to reason very carefully about what "not" to introduce into the language. Often, people will come to you with suggestions for updates, but you cannot really change the basic nature of a programming language; you can, however, extend it through extensions. You essentially have two options: either stay true to the nature of the language, or develop a new one.

The type system of programming languages

Guido van Rossum, when asked about the typing approach in Python, shared how it was when Python was first introduced. Earlier, int was not a class; it was actually a little conversion function. If you wanted to convert a string to an integer, you could do that with a built-in function. Later on, Guido realized that this was a mistake. "We had a bunch of those functions and we realized that we had made a mistake, we had given users classes that were different from the built-in object types." That's when the Python team decided to reinvent the whole approach to types in Python and did a bunch of cleanups: they changed the function int into a designator for the class int, so that calling the class now means constructing an instance of the class.

James Gosling shared that his focus has always been performance, and one factor in improving performance is the type system. It is really useful for things like building optimizing compilers and doing ahead-of-time correctness checking. Having a type system also helps in cases where you are targeting small-footprint devices.
"To do that kind of compaction you need every kind of help that it gives you, every last drop of information, and the earlier you know it, the better job you do," he adds.

Anders Hejlsberg looks at type systems as a tooling feature. Developers love their IDEs; they are accustomed to things like statement completion, refactoring, and code navigation. These features are enabled by the semantic knowledge of your code, and this semantic knowledge is provided by a compiler with a type system. Hejlsberg believes that adding types can dramatically increase the productivity of developers, which is a counterintuitive thought. "We think that dynamic languages were easier to approach because you've got rid of the types, which were a bother all the time. It turns out that you can actually be more productive by adding types if you do it in a non-intrusive manner and if you work hard on doing good type inference and so forth," he adds.

Talking about the type system in Perl, Wall started off by saying that Perl 5 and Perl 6 have very different type systems. In Perl 5, everything was treated as a string, even if it was a number or a floating-point value. The team wanted to keep this feature in Perl 6 as part of the redesign, but they realized that "it's fine if the new user is confused about the interchangeability, but it's not so good if the computer is confused about things." For Perl 6, Wall and his team envisioned making it both a better object-oriented and a better functional programming language. To achieve this goal, it is important to have a very sound type system and a sound meta-object model underneath, and you also need to take slogans like "everything is an object, everything is a closure" very seriously.

What makes a programming language maintainable

Guido van Rossum believes that to make a programming language maintainable, it is important to hit the right balance between the flexible and disciplined approaches. While dynamic typing is great for small programs, large programs require a more disciplined approach, and it is better if the language itself enables that discipline rather than giving you the full freedom to do whatever you want. This is why Guido is planning to add a technology very similar to TypeScript to Python. He adds:

[box type="shadow" align="" class="" width=""]"TypeScript is actually incredibly useful and so we're adding a very similar idea to Python. We are adding it in a slightly different way because we have a different context."[/box]

Along with type systems, refactoring engines can also prove to be very helpful. They make it easier to perform large-scale refactorings, such as changing millions of lines of code at once. Often, people do not rename methods because it is really hard to go over a piece of code and rename exactly the right variable everywhere. With a refactoring engine, you just need to press a couple of buttons and type in the new name, and the code will be refactored in maybe just 30 seconds.

The origin of the TypeScript project was these enormous JavaScript codebases: as the codebases became bigger and bigger, it became quite difficult to maintain them. These codebases basically became "write-only code", shared Anders Hejlsberg. He adds that this is why we need a semantic understanding of the code, which makes refactoring much easier. "This semantic understanding requires a type system to be in place, and once you start adding that, you add documentation to the code," added Hejlsberg.
Wall also supports the same thought, that "good lexical scoping helps with refactoring".

The future of programming language design

When asked about the future of programming language design, James Gosling shared that a very underexplored area in programming is writing code for GPUs. He highlights the fact that we currently do not have any programming language that works like a charm with GPUs, and that much work still needs to be done in that area.

Anders Hejlsberg rightly mentioned that programming languages do not move at the same speed as hardware or the other technologies around them. In terms of evolution, programming languages are more like maths and the human brain. He said, "We're still programming in languages that were invented 50 years ago; all of the principles of functional programming were thought of more than 50 years ago." But he does believe that, instead of segregating into separate categories like object-oriented or functional programming, languages are now becoming multi-paradigm.

[box type="shadow" align="" class="" width=""]"Languages are becoming more multi-paradigm. I think it is wrong to talk about oh I only like object-oriented programming, or imperative programming, or functional programming language."[/box]

Now, it is important to be aware of recent research, new thinking, and new paradigms, and then to incorporate them into our programming style, but tastefully. Watch this talk conducted by PuPPy to know more in detail.

Python 3.8 alpha 2 is now available for testing
ISO C++ Committee announces that C++20 design is now feature complete
Using lambda expressions in Java 11 [Tutorial]
Getting started with Q-learning using TensorFlow

Savia Lobo
14 Mar 2018
9 min read
[box type="note" align="" class="" width=""]This article is an excerpt taken from the book Mastering TensorFlow 1.x written by Armando Fandango. This book will help you master advanced concepts of deep learning such as transfer learning, reinforcement learning, generative models and more, using TensorFlow and Keras.[/box] In this tutorial, we will learn about Q-learning and how to implement it using deep reinforcement learning. Q-Learning is a model-free method of finding the optimal policy that can maximize the reward of an agent. During initial gameplay, the agent learns a Q value for each pair of (state, action), also known as the exploration strategy. Once the Q values are learned, then the optimal policy will be to select an action with the largest Q-value in every state, also known as the exploitation strategy. The learning algorithm may end in locally optimal solutions, hence we keep using the exploration policy by setting an exploration_rate parameter. The Q-Learning algorithm is as follows: initialize  Q(shape=[#s,#a])  to  random  values  or  zeroes Repeat  (for  each  episode) observe  current  state  s Repeat select  an  action  a  (apply  explore  or  exploit  strategy) observe  state  s_next  as  a  result  of  action  a update  the  Q-Table  using  bellman's  equation set  current  state  s  =  s_next until  the  episode  ends  or  a  max  reward  /  max  steps  condition  is  reached Until  a  number  of  episodes  or  a  condition  is  reached (such  as  max  consecutive  wins) Q(s, a) in the preceding algorithm represents the Q function. The values of this function are used for selecting the action instead of the rewards, thus this function represents the reward or discounted rewards. The values for the Q-function are updated using the values of the Q function in the future state. The well- known bellman equation captures this update: This basically means that at time step t, in state s, for action a, the maximum future reward (Q) is equal to the reward from the current state plus the max future reward from the next state. Q(s,a) can be implemented as a Q-Table or as a neural network known as a Q-Network. In both cases, the task of the Q-Table or the Q-Network is to provide the best possible action based on the Q value of the given input. The Q-Table-based approach generally becomes intractable as the Q-Table becomes large, thus making neural networks the best candidate for approximating the Q-function through Q-Network. Let us look at both of these approaches in action. Initializing and discretizing for Q-Learning The observations returned by the pole-cart environment involves the state of the environment. The state of pole-cart is represented by continuous values that we need to discretize. If we discretize these values into small state-space, then the agent gets trained faster, but with the caveat of risking the convergence to the optimal policy. 
We use the following helper functions to discretize the state-space of the pole-cart environment:

import gym
import numpy as np

# discretize the value to a state space
def discretize(val, bounds, n_states):
    discrete_val = 0
    if val <= bounds[0]:
        discrete_val = 0
    elif val >= bounds[1]:
        discrete_val = n_states - 1
    else:
        discrete_val = int(round((n_states - 1) *
                                 ((val - bounds[0]) /
                                  (bounds[1] - bounds[0]))))
    return discrete_val

def discretize_state(vals, s_bounds, n_s):
    discrete_vals = []
    for i in range(len(n_s)):
        discrete_vals.append(discretize(vals[i], s_bounds[i], n_s[i]))
    return np.array(discrete_vals, dtype=np.int)

We discretize the space into 10 units for each of the observation dimensions. You may want to try out different discretization spaces. After the discretization, we find the upper and lower bounds of the observations, and change the bounds of velocity and angular velocity to be between -1 and +1, instead of -Inf and +Inf. The code is as follows:

env = gym.make('CartPole-v0')
n_a = env.action_space.n
# number of discrete states for each observation dimension
n_s = np.array([10, 10, 10, 10])  # position, velocity, angle, angular velocity
s_bounds = np.array(list(zip(env.observation_space.low,
                             env.observation_space.high)))
# the velocity and angular velocity bounds are
# too high so we bound between -1, +1
s_bounds[1] = (-1.0, 1.0)
s_bounds[3] = (-1.0, 1.0)

Q-Learning with Q-Table

Since our discretized space has the dimensions [10,10,10,10], our Q-Table has the dimensions [10,10,10,10,2]:

# create a Q-Table of shape (10,10,10,10, 2) representing S X A -> R
q_table = np.zeros(shape=np.append(n_s, n_a))

We define a Q-Table policy that exploits or explores based on the exploration_rate:

def policy_q_table(state, env):
    # Exploration strategy - Select a random action
    if np.random.random() < explore_rate:
        action = env.action_space.sample()
    # Exploitation strategy - Select the action with the highest q
    else:
        action = np.argmax(q_table[tuple(state)])
    return action

Define the episode() function that runs a single episode as follows:

1. Start by initializing the variables and the first state:

obs = env.reset()
state_prev = discretize_state(obs, s_bounds, n_s)

episode_reward = 0
done = False
t = 0

2. Select the action and observe the next state:

action = policy(state_prev, env)
obs, reward, done, info = env.step(action)
state_new = discretize_state(obs, s_bounds, n_s)

3. Update the Q-Table:

best_q = np.amax(q_table[tuple(state_new)])
bellman_q = reward + discount_rate * best_q
indices = tuple(np.append(state_prev, action))
q_table[indices] += learning_rate * (bellman_q - q_table[indices])

4. Set the next state as the previous state and add the reward to the episode's rewards:

state_prev = state_new
episode_reward += reward

The experiment() function calls the episode function and accumulates the rewards for reporting. You may want to modify the function to check for consecutive wins and other logic specific to your play or games:

# collect observations and rewards for each episode
def experiment(env, policy, n_episodes, r_max=0, t_max=0):
    rewards = np.empty(shape=[n_episodes])
    for i in range(n_episodes):
        val = episode(env, policy, r_max, t_max)
        rewards[i] = val
    print('Policy:{}, Min reward:{}, Max reward:{}, Average reward:{}'
          .format(policy.__name__,
                  np.min(rewards),
                  np.max(rewards),
                  np.mean(rewards)))
Now, all we have to do is define the parameters, such as learning_rate, discount_rate, and explore_rate, and run the experiment() function as follows:

learning_rate = 0.8
discount_rate = 0.9
explore_rate = 0.2
n_episodes = 1000

experiment(env, policy_q_table, n_episodes)

For 1000 episodes, the Q-Table-based policy's maximum reward is 180 based on our simple implementation:

Policy:policy_q_table, Min reward:8.0, Max reward:180.0, Average reward:17.592

Our implementation of the algorithm is very simple to explain. However, you can modify the code to set the explore rate high initially and then decay it as the time steps pass. Similarly, you can also implement decay logic for the learning and discount rates. Let us see if we can get a higher reward with fewer episodes, as our Q function learns faster.

Q-Learning with Q-Network or Deep Q Network (DQN)

In the DQN, we replace the Q-Table with a neural network (Q-Network) that learns to respond with the optimal action as we train it continuously with the explored states and their Q values. Thus, for training the network we need a place to store the game memory:

1. Implement the game memory using a deque of size 1000:

from collections import deque

memory = deque(maxlen=1000)

2. Next, build a simple neural network model with one hidden layer, q_nn:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(2, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.summary()

q_nn = model

The Q-Network looks like this:

Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 8)                 40
dense_2 (Dense)              (None, 2)                 18
=================================================================
Total params: 58
Trainable params: 58
Non-trainable params: 0

The episode() function that executes one episode of the game incorporates the following changes for the Q-Network-based algorithm:

1. After generating the next state, add the states, action, and rewards to the game memory:

action = policy(state_prev, env)
obs, reward, done, info = env.step(action)
state_next = discretize_state(obs, s_bounds, n_s)

# add the state_prev, action, reward, state_new, done to memory
memory.append([state_prev, action, reward, state_next, done])

2. Generate and update the q_values with the maximum future rewards using the Bellman function:

states = np.array([x[0] for x in memory])
states_next = np.array([np.zeros(4) if x[4] else x[3] for x in memory])

q_values = q_nn.predict(states)
q_values_next = q_nn.predict(states_next)

for i in range(len(memory)):
    state_prev, action, reward, state_next, done = memory[i]
    if done:
        q_values[i, action] = reward
    else:
        best_q = np.amax(q_values_next[i])
        bellman_q = reward + discount_rate * best_q
        q_values[i, action] = bellman_q

3. Train the q_nn with the states and the q_values we received from memory:

q_nn.fit(states, q_values, epochs=1, batch_size=50, verbose=0)

The process of saving gameplay in memory and using it to train the model is also known as memory replay in deep reinforcement learning literature.
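One gap to note: the next step calls experiment() with a policy named policy_q_nn, but its definition is missing from this excerpt. A reconstruction modeled on policy_q_table above, querying the network instead of the table, might look like this (the book's exact version may differ):

# policy backed by the Q-Network: explore randomly, otherwise pick the
# action with the highest predicted Q value (reconstructed, not verbatim)
def policy_q_nn(state, env):
    if np.random.random() < explore_rate:
        action = env.action_space.sample()
    else:
        action = np.argmax(q_nn.predict(state.reshape(1, -1))[0])
    return action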
Let us run our DQN-based gameplay as follows:

learning_rate = 0.8
discount_rate = 0.9
explore_rate = 0.2
n_episodes = 100

experiment(env, policy_q_nn, n_episodes)

We get a maximum reward of 150, which you can improve upon with hyperparameter tuning, network tuning, and rate decay for the discount and explore rates (a minimal decay schedule is sketched below):

Policy:policy_q_nn, Min reward:8.0, Max reward:150.0, Average reward:41.27

To summarize, we calculated and trained the model at every step. One could change the code to discard the memory replay and retrain the model only for the episodes that return smaller rewards. However, implement this option with caution, as it may slow down your learning, since initial gameplay generates smaller rewards more often. Do check out the book Mastering TensorFlow 1.x to explore advanced features of TensorFlow 1.x and gain insight into TensorFlow Core, Keras, TF Estimators, TFLearn, TF Slim, Pretty Tensor, and Sonnet.
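As a parting sketch (mine, not the book's), here is one common way to decay the explore rate across episodes; the constants are purely illustrative:

import math

min_explore_rate = 0.01   # floor so the agent never stops exploring entirely
max_explore_rate = 1.0    # start fully exploratory

def decayed_explore_rate(episode, decay=0.01):
    # exponential decay from max_explore_rate toward min_explore_rate
    return min_explore_rate + \
        (max_explore_rate - min_explore_rate) * math.exp(-decay * episode)

# inside experiment()'s loop one could then set, per episode i:
# explore_rate = decayed_explore_rate(i)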
How to extract SIM card data from Android devices [Tutorial]

Sugandha Lahoti
03 Feb 2019
9 min read
This tutorial discusses logical data extraction and one of its subtopics, Android SIM card extraction. This article is taken from the book Learning Android Forensics by Oleg Skulkin, Donnie Tindall, and Rohit Tamma. The book explores open source and commercial forensic tools and teaches readers the basic skills of Android malware identification and analysis.

Logical extraction overview

In digital forensics, the term logical extraction is typically used to refer to extractions that don't recover deleted data or do not include a full bit-by-bit copy of the evidence. However, a more correct definition of logical extraction is any method that requires communication with the base operating system. Because of this interaction with the operating system, a forensic examiner cannot be sure that they have recovered all of the data possible; the operating system is choosing which data it allows the examiner to access. In traditional computer forensics, logical extraction is analogous to copying and pasting a folder in order to extract data from a system; this process will only copy files that the user can access and see. If any hidden or deleted files are present in the folder being copied, they won't be in the pasted version of the folder. As you'll see, however, the line between logical and physical extractions in mobile forensics is somewhat blurrier than in traditional computer forensics. For example, deleted data can routinely be recovered from logical extractions on mobile devices due to the prevalence of SQLite databases being used to store data. Furthermore, almost every mobile extraction will require some form of interaction with the Android OS; there's no simple equivalent to pulling a hard drive and imaging it without booting the drive.

What data can be recovered logically?

For the most part, any and all user data may be recovered logically:

Contacts
Call logs
SMS/MMS
Application data
System logs and information

The bulk of this data is stored in SQLite databases, so it's even possible to recover large amounts of deleted data through a logical extraction (a brief example of examining such a database appears at the end of this overview).

Root access

When forensically analyzing an Android device, the limiting factor is often not the type of data being sought, but rather whether or not the examiner has the ability to access the data. All of the data listed previously, when stored on the internal flash memory, is protected and requires root access to read. The exception to this is application data stored on the SD card, which will be discussed later in this book. Without root access, a forensic examiner cannot simply copy information from the /data partition. The examiner will have to find some method of escalating privileges in order to gain access to the contacts, call logs, SMS/MMS, and application data. These methods often carry many risks, such as the potential to destroy or brick the device (making it unable to boot), and may alter data on the device in order to gain permanence. The methods commonly vary from device to device, and there is no universal, one-click method to gain root access to every device. Commercial mobile forensic tools such as Oxygen Forensic Detective and Cellebrite UFED have built-in capabilities to temporarily and safely root many devices, but they do not cover the full range of Android devices. The decision to root a device should be made in accordance with your local operating procedures and court opinions in your jurisdiction; the legal acceptance of evidence obtained by rooting varies by jurisdiction.
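As promised above, here is a brief, hypothetical example of examining a pulled SQLite database with Python's standard sqlite3 module. The file name and schema assumed here (mmssms.db with an sms table holding address, date, and body columns) are common on many Android builds, but they vary by device and OS version, so treat this as a sketch rather than a universal recipe:

# examine SMS records from a database copied off a rooted device
import sqlite3

conn = sqlite3.connect("mmssms.db")  # copied from the telephony provider's data directory
for address, date_ms, body in conn.execute(
        "SELECT address, date, body FROM sms ORDER BY date"):
    # date is typically stored as milliseconds since the Unix epoch
    print(address, date_ms, body)
conn.close()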
Android SIM card extractions

Traditionally, SIM cards were used for transferring data between devices. SIM cards in the past were used to store many different types of data, such as the following:

User data:
Contacts
SMS messages
Dialed calls

Network data:
Integrated Circuit Card Identifier (ICCID): Serial number of the SIM
International Mobile Subscriber Identity (IMSI): Identifier that ties the SIM to a specific user account
MSISDN: Phone number assigned to the SIM
Location Area Identity (LAI): Identifies the cell that a user is in
Authentication Key (Ki): Used to authenticate the mobile network
Various other network-specific information

With the rise in capacity of device storage, SD cards, and cloud backups, the necessity for storing data on a SIM card has decreased. As such, most modern smartphones typically do not store much, if any, user data on the SIM card. All the network data listed previously does still reside on the SIM, as a SIM is necessary to connect to all modern (4G) cellular networks. As with all Android devices, though, there is no concrete stipulation that user data can't be stored on a SIM; it simply doesn't happen by default. Individual device manufacturers can easily decide to write user data to the SIM, and individual users can download applications to provide that functionality. This means that a device's SIM card should always be examined during a forensic examination. It is a very quick process, and should never be overlooked.

Acquiring SIM card data

The SIM card should always be removed from the device and examined separately. While some tools claim to read the SIM card through the device interface, this may not recover deleted data or all data on the SIM; the only way for an examiner to be certain all data was acquired is to read the SIM through a standalone SIM card reader with a tool that has been tested and verified. The location of the SIM will vary by device, but it is typically either beneath the battery or in a tray located on the side of the device. Once the SIM is removed, it should be placed in a SIM card reader. There are hundreds of SIM card readers available in the marketplace, and all major mobile forensics tools come with an included reader that will work with their software. Oftentimes, the forensic tools will also support third-party SIM readers. There is a surprising lack of thorough, free SIM card reading software available, and any software used should always be tested and validated on a SIM card that has been populated with known data prior to being used in an actual forensic investigation. Also, keep in mind that much of the free software available works for older 2G/3G SIMs but may not work properly on a modern 4G SIM. We used Mobiledit! Lite, the free version of Mobiledit!, for the following screenshots. It is available at: http://www.mobiledit.com/downloads.
The following is a sample 4G SIM card extraction from an Android phone running version 4.4.4. Note that nothing that could be considered user data was acquired, despite the SIM having been used actively for over a year, though fields such as the ICCID, IMSI, and MSISDN (the phone's own number) could be useful for subpoenas/warrants or other aspects of an investigation:

[Screenshot: SIM card extraction overview]
[Screenshot: SMS messages on the SIM card]
[Screenshot: the phonebook of the SIM card]
[Screenshot: the phone number of the SIM card (also called the MSISDN)]

SIM security

Because SIM cards conform to established international standards, all SIM cards provide the same security functionality: a 4- to 8-digit PIN. Generally, this PIN must be set through a menu on the device. On Android devices, this setting is found at Settings | Security | Set up SIM card lock. The SIM PIN is completely independent of any lock screen security settings and only has to be entered when the device boots. The SIM PIN only protects user data on the SIM; all network information remains recoverable even if the SIM is PIN-locked.

The SIM card allows three attempts to enter the PIN; if one of these attempts is correct, the counter resets. If all three attempts are incorrect, the SIM enters Personal Unblocking Key (PUK) mode. The PUK is an 8-digit number assigned by the carrier and is frequently found on the documentation supplied when the SIM is purchased. Bypassing a PUK is not possible with any commercial forensic software; because of this, an examiner should never attempt to enter the PIN on the device itself, as the device will not indicate how many attempts remain before the PUK is activated. An examiner could unwittingly PUK-lock the SIM and be unable to access the device. Forensic tools, however, will show how many attempts remain before the PUK is activated, as seen in the previous screenshots.

Common carrier defaults for SIM PINs are 0000 and 1234. If three tries remain before the PUK is activated, an examiner may successfully unlock the SIM with one of these defaults. Carriers frequently retain PUK keys when a SIM is issued; these may be available through a subpoena or warrant issued to the carrier.

SIM cloning

The SIM PIN itself provides almost no additional security and can easily be bypassed through SIM cloning. SIM cloning is a feature provided in almost all commercial mobile forensic software, although the term cloning is somewhat misleading. In mobile forensics, SIM cloning is the process of copying the network data from a locked SIM onto a forensically sterile SIM that does not have the PIN activated. The phone identifies the cloned SIM by this network data (typically the ICCID and IMSI) and treats it as the same SIM that was inserted previously, but this time there is no SIM PIN. The cloned SIM is also unable to access the cellular network, which makes it an effective isolation measure, similar to Airplane Mode. SIM cloning therefore allows an examiner to access the device, but the user data on the original SIM is still inaccessible, as it remains protected by the PIN.

We are unaware of any free software that performs forensic SIM cloning, but it is supported by almost all commercial mobile forensic kits. These kits typically include a SIM card reader, software to perform the clone, and multiple blank SIM cards for the cloning process.
This article has covered SIM card extraction, a subtopic of logical extraction on Android devices. To learn more about the other methods of logical extraction on Android devices, read our book Learning Android Forensics.

What role does Linux play in securing Android devices?
How the Titan M chip will improve Android security
Getting your Android app ready for the Play Store [Tutorial]
How to assemble a DIY selfie drone with Arduino and ESP8266

Vijin Boricha
29 May 2018
10 min read
Have you ever thought of something that can take a photo from the air, or perhaps take a selfie from it? How about we build a drone for taking selfies and recording videos from the air?

Taking photos from the sky is one of the most exciting things in photography today. You can shoot from helicopters, planes, or even from satellites, but unless you or someone you know owns a personal air vehicle, that is a costly affair sure to burn through your pockets. Drones come in handy here. Have you ever googled drone photography? If you have, I am sure you've wanted to build or buy a drone for photography, because of the amazing views that even common subjects offer when shot from the sky. Today, we will learn to build a drone for aerial photography and videography. This tutorial is an excerpt from Building Smart Drones with ESP8266 and Arduino written by Syed Omar Faruk Towaha.

I'm assuming you know how to build your customized frame; if not, you can refer to our book, or you may buy a HobbyKing X930 glass fiber frame and connect the parts together, as directed in the manual. However, I have a few suggestions to help you carry out a better assembly of the frame:

- First, connect the motor mounts to the legs or wings or arms of the frame. Tighten them firmly, as they will carry and hold the most important equipment of the drone.
- Then, connect them to the base and, later, the other parts, with firm connections.

Now, we will calibrate our ESCs. Take the signal cable from an ESC (the motor is plugged into the ESC; be careful, don't connect the propeller) and connect it to the throttle pins on the radio. Make sure the transmitter is turned on and the throttle is in the lowest position. Now, plug the battery into the ESC and you will hear a beep. Gradually increase the throttle from the transmitter: your motor will start spinning at some arbitrary throttle position. This is because the ESC is not calibrated, so you need to tell the ESC where the high point and the low point of the throttle are.

Disconnect the battery first. Increase the throttle of the transmitter to the highest position and power the ESC. Your ESC will beep once, then beep three times every four seconds. Now, move the throttle to the bottommost position and you will hear the ESC beep as if it is ready and calibrated. You can now increase the throttle of the transmitter and see that the throttle works from low to high. Now, mount the motors, connect them to the ESCs, and then connect the ESCs to the ArduPilot, working through the pins one by one. Next, connect your GPS to the ArduPilot and calibrate it.

Our drone is now ready to fly. I would suggest you fly the drone for about 10-15 minutes before connecting the camera.

Connecting the camera

For a photography drone, connecting and controlling the camera is one of the most important things. Your pictures and videos will be spoiled if you cannot adjust the camera and stabilize it properly. In our case, we will use a camera gimbal to hold the camera and move it from the ground.

Choosing a gimbal

The camera gimbal holds the camera for you and can move the camera according to your commands. There are a number of camera gimbals out there. You can choose any type, depending on your needs and your camera's size and specification. If you want to use a DSLR camera, you should use a bigger gimbal, and if you use a point-and-shoot camera or an action camera, you may use a small or medium-sized gimbal. There are two types of gimbal: the brushless gimbal and the standard gimbal.
The standard gimbal has servo motors and gears. If you use an FPV camera, then a standard gimbal with a 2-axis manual mount is the best option. The standard gimbal is lightweight and inexpensive, and the best thing is that you will not need an external controller board for it. The brushless gimbal is for professional aerial photographers: it is smooth and can shoot videos or photos with better quality, but it needs an external controller board and is heavier than the standard gimbal.

Choosing the best gimbal is one of the hard decisions for a photographer, as stabilization of the image is a must for photo shoots. If you cannot control the camera from the ground, then using a gimbal is worthless. The following picture shows a number of gimbals:

After choosing your camera and gimbal, the first task is to mount the gimbal and the camera on the drone. Make sure the mount is firm, but not too rigid, because a rigid mount will transfer vibration and make the camera shake while flying the drone. You may use the Styrofoam or rubber pieces that came with the gimbal to reduce the vibration and keep the image stable.

Configuring the camera with the ArduPilot

Configuring the camera with the ArduPilot is easy. Before going any further, let us learn a few things about the camera gimbal's Euler angles:

- Tilt: pitches the camera up and down (range -90 degrees to +90 degrees)
- Roll: rotation from 0 to 360 degrees about the horizontal axis
- Pan: the same kind of rotation, from 0 to 360 degrees, but about the vertical axis
- Shutter: a switch that triggers a click or sends a signal

First, we are going to use the standard gimbal. There are basically two servos in a standard gimbal: one for pitch (tilt) and one for roll. So, a standard gimbal gives you two axes of motion for the camera viewpoint.

Connection

Follow these steps to connect the camera to the ArduPilot:

Take the pitch servo's signal pin and connect it to the 11th pin of the ArduPilot (A11), and the roll servo's signal pin to the 10th pin (A10). Make sure you connect only the signal (S) cable of each servo to the pin, not the other two pins (ground and VCC). The signal cables must be connected to the innermost pins of the A11 and A10 headers (two pins make a row; see the following picture for clarification).

My suggestion is to add a separate battery for your gimbal's servos. If you connect the servos directly to the ArduPilot, the ArduPilot will not perform well, as the servos will draw power from it.

Now, connect your ArduPilot to your PC using a cable or telemetry. Go to the Initial Setup menu and, under Optional Hardware, you will find an option called Camera Gimbal. Click on it and you will see the following screen:

For the Tilt, change the pin to RC11; for the Roll, change the pin to RC10; and for the Shutter, change it to CH7. If you want to change the Tilt during flight from the transmitter, you need to change the Input Ch of the Tilt. See the following screenshot:

Now, you need to change an option on the Configuration | Extended Tuning page. Set Ch6 Opt to None, as in the following screenshot, and hit the Write Params button:

Next, we need to align the minimum and maximum PWM values for the servos of the gimbal; the manual procedure is described after the short scripting aside below.
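As an aside, the mount you have just configured can also be commanded over MAVLink from a ground-station script rather than the Mission Planner GUI. This is a sketch under stated assumptions, not part of the original build: it assumes the pymavlink package is installed and a telemetry link is available on the port given below (adjust the connection string for your radio), and it uses the standard MAV_CMD_DO_MOUNT_CONTROL command:

    from pymavlink import mavutil

    # Assumed telemetry port; use e.g. 'udp:127.0.0.1:14550' for SITL.
    master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
    master.wait_heartbeat()

    def point_gimbal(pitch_deg, roll_deg=0.0, yaw_deg=0.0):
        """Ask the flight controller to move the camera mount."""
        master.mav.command_long_send(
            master.target_system,
            master.target_component,
            mavutil.mavlink.MAV_CMD_DO_MOUNT_CONTROL,
            0,            # confirmation
            pitch_deg,    # param1: pitch (tilt), degrees
            roll_deg,     # param2: roll, degrees
            yaw_deg,      # param3: yaw (pan), degrees
            0, 0, 0,      # params 4-6: unused
            mavutil.mavlink.MAV_MOUNT_MODE_MAVLINK_TARGETING)  # param7: mount mode

    point_gimbal(-45)     # tilt the camera 45 degrees downwards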
To align the PWM values manually, tilt the gimbal frame to its leftmost position and, from the transmitter, move the knob to the minimum position, then increase it slowly; as soon as the servo starts to move, stop moving the knob. For the maximum calibration, move the Tilt to its rightmost position and do the same with the knob at the maximum end. Repeat the process for the pitch with the forward and backward motion.

We also need to level the gimbal for better performance. To do that, keep the gimbal frame level with the ground and set the Camera Gimbal option, the Servo Limits, and the Angle Limits. Change them according to the level of the frame.

Controlling the camera

Controlling the camera to take selfies or record video is easy. You can use the shutter pin we configured earlier, or the camera's mobile app. My suggestion is to use the camera's app to take shots, because you will get a live preview of what you are shooting and it will be easy to control the camera. However, if you want to use the Shutter button manually from the transmitter, you can do that too.

We have connected the RC7 pin for controlling a servo. You can use a servo or a receiver-controlled switch to manually trigger your camera's shutter. To do that, you can buy a receiver-controlled on/off switch. You can use this switch for various purposes; clicking the shutter of your camera is one of them.

Manually triggering the camera is easy, and is usually done with point-and-shoot cameras. To do that, you need to update the firmware of your camera. You can do this in many ways, but the easiest one is discussed here. Your receiver-controlled on/off switch may look like the following:

You can see five wires in the picture. The three wires bundled together are the usual servo pins. Take out the signal cable (in this case, the yellow cable) and connect it to the RC7 pin of the ArduPilot, then connect the positive to one of the thick red wires. Take the camera's data cable and connect the other thick wire to the positive of the USB cable; the negative wire is connected to the negative of the three bundled wires. Then, an output of the positive and negative wires goes to the battery (an external battery is suggested for the camera).

To upgrade the camera firmware, go to the camera's website and download the firmware with the remote shutter option. In my case, the website is http://chdk.wikia.com/wiki/CHDK. I downloaded it for a Canon point-and-shoot camera. You can also use action cameras for your drones; they are cheap and can be controlled remotely via mobile applications.

Flying and taking shots

Flying the photography drone is not that difficult. My suggestion is to lock the altitude and fly parallel to the ground. If you use a camera remote controller or an app, then it is really easy to take a photo or record a video. However, if you use the switch, as we discussed, you need to connect your drone to Mission Planner via telemetry, go to the Flight Data screen, right-click on the map, and then click the Trigger Camera Now option. It will trigger the camera's shutter and start recording or take a photo. You can do this while your drone is holding position and, using the timer, take a shot from above, which can be a selfie too. Let's try it. Let me know what happens and whether you like it or not.
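Mission Planner's Trigger Camera Now option simply sends a MAVLink command, so the same shot can be scripted and put on a timer. The following is a sketch under the same assumptions as before (pymavlink installed, a live telemetry link, and a placeholder connection string), using the standard MAV_CMD_DO_DIGICAM_CONTROL command, whose fifth parameter fires the shutter output we configured:

    import time
    from pymavlink import mavutil

    # Placeholder connection string; substitute your telemetry port.
    master = mavutil.mavlink_connection('udp:127.0.0.1:14550')
    master.wait_heartbeat()

    def trigger_shutter():
        """Send the same command as Mission Planner's Trigger Camera Now."""
        master.mav.command_long_send(
            master.target_system,
            master.target_component,
            mavutil.mavlink.MAV_CMD_DO_DIGICAM_CONTROL,
            0,             # confirmation
            0, 0, 0, 0,    # params 1-4: session/zoom controls, unused here
            1,             # param5: 1 = fire the shutter once
            0, 0)          # params 6-7: unused

    # Take three shots, two seconds apart.
    for _ in range(3):
        trigger_shutter()
        time.sleep(2)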
Next, learn to build other drones, like a mission control drone or a gliding drone, from our book Building Smart Drones with ESP8266 and Arduino.

Drones: Everything you ever wanted to know!
How to build an Arduino based 'follow me' drone
Tips and tricks for troubleshooting and flying drones safely
Why companies that don’t invest in technology training can’t compete

Richard Gall
17 Sep 2019
8 min read
Here's the bad news: the technology skills gap is seriously harming many businesses. Without people who can help drive innovation, companies will struggle to compete on the global stage. Fortunately, there's also some good news: we can all do something about it. Although the tech skills gap might seem too big for employers to tackle on their own, by investing time and energy in technology training, employers can ensure that they have the skills and knowledge within their team to keep innovating and building solutions to the conveyor belt of problems they face.

Businesses ignore technology training at their peril. Sure, a great recruitment team is important, as are industry and academic connections. But without a serious focus on upskilling employees - software developers or otherwise - many companies won't be able to compete. It's not just about specific skills; it's also about building a culture that is forward-thinking and curious, a working environment committed to exploring new ways to solve problems. If companies aren't serious about building this type of culture through technology training, they're likely to run into a number of critical problems, which could have a long-term impact on growth and business success.

If you don't invest in training and resources, it's harder to retain employees

Something I find amusing is seeing business leaders complain about the dearth of talent in their industry or region while failing to engage or develop the talent they already have. It sounds obvious, but it's something that is often missed in conversations about the skills gap: if it's so hard to find the talent you need, train to retain.

The consequences of not doing so could be significant. If you have talented employees who are eager to learn but you're not prepared to support them with resources or time, you can bet they'll be looking for other jobs. For employees, a company that doesn't invest in their skills and knowledge is demoralising in two ways: it signals that the company doesn't trust or value them, and it leaves them doing work that doesn't push or engage them.

If someone can't do something, your first thought shouldn't be to turn to a recruitment manager - you should instead ask whether someone in your team could do it, and if not, whether they could with some training. This doesn't mean your HR problems can always be solved internally, but it does mean that if you take technology training seriously, you can solve them much faster, and maybe even more cheaply.

There's another important point here. There's sometimes an assumption that the right person is out there: someone with a perfect background, and all the skills, knowledge, and expertise you need. This person almost certainly doesn't exist. And while you were dreaming of this perfect employee, your other developers all got jobs elsewhere. That leaves you in a tricky position: all the initiatives you wanted to achieve in the second half of the year now have to be pushed back.

Good technology training and team-based resources can attract talent

Although it's important to invest in tech training and resources to develop and retain employees, a considered approach to technology training can also be useful in attracting people to your company. Think of companies like Google and Facebook.
One of the reasons they're so attractive to many talented tech professionals (as well as the impressive salaries...) is that they offer so much scope to solve interesting problems and work on complex projects. You probably can't offer the inflated salaries or the chance to work on industry-defining technology, but by making technology training a critical part of your employer brand, you'll find it much easier to get talented tech professionals to pay attention to you. It's also a way of showing prospective employees that you're serious about their personal development, and that you understand just how important learning is in tech.

There are a number of ways this could play out: highlighting the resources you make available to software engineering teams, offering qualifications, or even just giving people set time to learn every week. Ultimately, a clear and concerted tech learning initiative lets you motivate and develop your existing employees while also attracting new ones. This means you'll have a healthy developer culture and be viewed as a developer- and tech-focused organization.

Read next: 5 barriers to learning and technology training for small software development teams

Organizations that invest in tech training and resources encourage employee flexibility and curiosity

To follow on from the point above, to successfully leverage technology you need to build a culture where change is embraced and curiosity is valued as a means of uncovering innovative solutions to problems. The way to encourage this is through technology training. Failing to provide even a basic level of support is not only unhelpful in a practical sense, it also sends a negative message to your employees: change doesn't really matter, we've always done things this way, so why spend time learning new things?

By showing your engineering employees that their curiosity is important, you can ensure you have people who take full ownership of problems. Instead of saying that's how things have always been done, employees will ask whether there's another way. Ultimately, that should lead to improved productivity and better outcomes. In turn, your tech employees will become more flexible and adaptable without even realising it.

Without technology training and high-quality resources, businesses can't evolve quickly

This brings us neatly to the next point. If your employees aren't flexible and haven't been encouraged to explore and learn new technologies and topics, the business will suffer: the speed at which it can adapt to changes in the market and deliver new products will harm the bottom line.

All too often we think about business agility in an abstract way. But if you aren't able to support your employees in embracing change and learning new skills, you simply can't have business agility. Of course, a counterargument might pop up here: couldn't individual development harm business goals? True, most offices aren't study spaces, and business objectives and deadlines should always be the priority. But if learning is constantly pushed to the bottom of the pile, you will find that the business, and your team, hit a brick wall. The status quo is rarely sustainable; there will always be a point at which something stops working or becomes out of date. If everyone is looking only at immediate problems and tasks, they'll never see the bigger picture.
Technology training and learning resources can help to remove silos and improve collaboration

Silos are an inevitable reality in many modern businesses, particularly in tech teams. Often they arise from good intentions: splitting people up and getting them to focus on different things isn't exactly the worst idea in the world, right? Today, however, silos are significant barriers to innovation and change. Change happens when people break out of their silos and share knowledge, and when people develop new ways of working that are free from traditional constraints.

This isn't easy to accomplish. It requires a lot of work on the culture and processes of a team. But one element that's often overlooked when it comes to tackling silos is training and resources. If silos exist because people feel comfortable with specialization and focus, then offering wide-reaching resources and training materials goes a long way towards helping your employees get outside their comfort zone.

Indeed, in some respects we're not talking about sustained training courses. It's just as valuable - and more cost-effective - to provide people with resources that allow them to share their frame of reference. That might feel insignificant in the face of technological change beyond the business and strategic visions inside it, but it's impossible to overstate the importance of a shared language and understanding. It's a critical part of getting people to work together, and of getting people to think with more clarity about how they work and the tools they use.

Make sure your team has the resources they need to solve problems quickly and keep their skills up to date. Learn more about Packt for Teams here.