
Spacecraft – Adding Details

Packt
31 Dec 2015
6 min read
In this article by Christopher Kuhn, the author of the book Blender 3D Incredible Machines, we'll model our spacecraft. As we do so, we'll cover a few new tools and techniques and apply things in different ways to create a final, complex model:

Do it yourself – completing the body
Building the landing gear

(For more resources related to this topic, see here.)

We'll work through the spacecraft one section at a time by adding the details.

Do it yourself – completing the body

Next, let's take a look at the key areas that we have left to model. The bottom of the ship and the sensor suite (on the nose) are good opportunities to practice on your own. They use identical techniques to the areas of the ship that we've already done. Go ahead and see what you can do!

For the record, here's what I ended up doing with the sensor suite:

Here's what I did with the bottom. You can see that I copied the circular piece that was at the top of the engine area:

One of the nice things about a project like this is that you can start to copy parts from one area to another. It's unlikely that both the top and bottom of the ship would be shown in the same render (or shot), so you can probably get away with borrowing quite a bit. Even if you did see them simultaneously, it's not unreasonable to think that a ship would have more than one of certain components. Of course, this is just a way to make things quicker (and easier). If you'd like everything to be 100% original, you're certainly free to do so.

Building the landing gear

We'll do the landing struts together, but you can feel free to finish off the actual skids yourself:

I kept mine pretty simple compared to the other parts of the ship:

Once you've got the skid plate done, make sure to make it a separate object (if it's not already). We're going to use a neat trick to finish this up. Make a copy of the landing gear part and move it to the rear section (or the front, if you have modeled the rear). Then, under your Mesh tab, you can assign both of these objects the same mesh data:

Now, whenever you make a change to one of them, the change will carry over to the other as well. Of course, you could just model one and then duplicate it, but sometimes it's nice to see how the part will look in multiple locations. For instance, the cutouts are slightly different between the front and back of the ship. As you model the part, you'll want to make sure that it will fit both areas.

The first detail that we'll add is a mounting bracket for our struts to go on:

Then, we'll add a small cylinder (at this point, the large one is just a placeholder):

We'll rotate it just a bit:

From this, it's pretty easy to create a rear mounting piece. Once you've done this, go ahead and add a shock absorber for the front (leave room for the springs, which we'll add next):

To create the spring, we'll start with a small (12-sided) circle. We'll make it so small because, just like the cable reel on the grappling gun, there will naturally be a lot of geometry, and we want to keep the polygon count as low as possible. Then, in edit mode, move the whole circle away from its original center point:

Having done this, you can now add a Screw modifier. Right away, you'll see the effect:

There are a couple of settings you'll want to make note of here. The Screw value controls the vertical gap, or distance, of your spring:

The Angle and Steps values control the number of turns and the smoothness, respectively:

Go ahead and play with these until you're happy. Then, move and scale your spring into position.
Once it's the way you like it, go ahead and apply the Screw modifier (but don't join the spring to the shock absorber just yet):

None of my existing materials seemed right for the spring, so I went ahead and added one that I called Blue Plastic.

At this point, we have a bit of a problem. We want to join the spring to the landing gear, but we can't. The landing gear has an Edge Split modifier with a split angle value of 30, and the spring has a value of 46. If we join them right now, the smooth edges on the spring will become sharp. We don't want this. Instead, we'll go to our shock absorber. Using the Select menu, we'll pick the Sharp Edges option:

By default, this will select all edges with an angle of 30 degrees or higher. Once you do this, go ahead and mark these edges as sharp:

Because all the 30-degree angles are now marked as sharp, we no longer need the Edge Angle option on our Edge Split modifier. You can disable it by unchecking it, and the landing gear remains exactly the same:

Now, you can join the spring to it without a problem:

Of course, this does mean that when you create new edges in your landing gear, you'll now have to mark them as sharp. Alternatively, you can keep the Edge Angle option selected and just turn it up to 46 degrees; it's your choice.

Next, we'll just pull in the ends of our spring a little so that they don't stick out:

Maybe we'll duplicate it. After all, this is a big, heavy vehicle, so maybe it needs multiple shock absorbers:

This is a good place to leave our landing gear for now.

Summary

In this article, we finished modeling our spaceship's landing gear. We used a few new tools within Blender, but mostly we focused on workflow and technique.

Resources for Article:

Further resources on this subject:
Blender 3D 2.49: Quick Start [article]
Blender 3D 2.49: Working with Textures [article]
Make Spacecraft Fly and Shoot with Special Effects using Blender 3D 2.49 [article]


Advanced User Management

Packt
30 Dec 2015
20 min read
In this article, written by Bhaskarjyoti Roy, author of the book Mastering CentOS 7 Linux Server, we will introduce some advanced user and group management scenarios, along with examples of how to handle advanced options such as password aging, managing sudoers, and so on, on a day-to-day basis. Here, we are assuming that we have already successfully installed CentOS 7, along with root and user credentials, as we do in the traditional format. The command examples in this article also assume that you are logged in or have switched to the root user.

(For more resources related to this topic, see here.)

The following topics will be covered:

User and group management from the GUI and the command line
Quotas
Password aging
Sudoers

Managing users and groups from the GUI and the command line

We can add a user to the system using useradd from the command line with a simple command, as follows:

useradd testuser

This creates a user entry in the /etc/passwd file and automatically creates the home directory for the user in /home. The /etc/passwd entry looks like this:

testuser:x:1001:1001::/home/testuser:/bin/bash

But, as we all know, the user is in a locked state and cannot log in to the system unless we add a password for the user using the command:

passwd testuser

This will, in turn, modify the /etc/shadow file, at the same time unlock the user, and the user will be able to log in to the system.

By default, the preceding set of commands will create both a user and a group for testuser on the system. What if we want a certain set of users to be part of a common group? We will use the -g option along with the useradd command to define the group for the user, but we have to make sure that the group already exists. So, to create users such as testuser1, testuser2, and testuser3 and make them part of a common group called testgroup, we will first create the group and then create the users using the -g or -G switch. So, we will do this:

# To create the group:
groupadd testgroup

# To create a user with the above group, and provide a password and unlock
# the user at the same time:
useradd testuser1 -G testgroup
passwd testuser1

useradd testuser2 -g 1002
passwd testuser2

Here, we have used both -g and -G. The difference between them is this: with -G, we create the user with its own default group and assign the user to the common testgroup as well, but with -g, we create the user as part of testgroup only. In both cases, we can use either the gid or the group name obtained from the /etc/group file.

There are a couple more options that we can use for advanced user creation. For example, for system users with a uid less than 500, we have to use the -r option, which will create a user on the system with a uid less than 500. We can also use -u to define a specific uid, which must be unique and greater than 499. Common options that we can use with the useradd command are:

-c: This option is used for comments, generally to define the user's real name, such as -c "John Doe".
-d: This option is used to define home-dir; by default, the home directory is created in /home, such as -d /var/<username>.
-g: This option is used for the group name or group number of the user's default group. The group must already have been created earlier.
-G: This option is used for additional group names or group numbers, separated by commas, of which the user is a member. Again, these groups must also have been created earlier.
-r: This option is used to create a system account with a UID less than 500 and without a home directory.
-u: This option defines the user ID for the user. It must be unique and greater than 499.

There are a few quick options that we use with the passwd command as well. These are:

-l: This option locks the password for the user's account
-u: This option unlocks the password for the user's account
-e: This option expires the password for the user
-x: This option defines the maximum days for the password lifetime
-n: This option defines the minimum days for the password lifetime

Quotas

In order to control the disk space used in the Linux filesystem, we must use quota, which enables us to control disk space and thus helps us resolve low disk space issues to a great extent. For this, we have to enable user and group quotas on the Linux system.

In CentOS 7, user and group quotas are not enabled by default, so we have to enable them first. To check whether quota is enabled or not, we issue the following command:

mount | grep ' / '

If the output contains noquota, the root filesystem is mounted without quota. Now, we have to enable quota on the root (/) filesystem, and to do that, we first edit the file /etc/default/grub and add the following to GRUB_CMDLINE_LINUX:

rootflags=usrquota,grpquota

The GRUB_CMDLINE_LINUX line should then read as follows:

GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto vconsole.keymap=us rhgb quiet rootflags=usrquota,grpquota"

Since we have to reflect the changes we just made, we should back up the grub configuration using the following command:

cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.original

Now, we have to rebuild grub with the changes we just made, using the command:

grub2-mkconfig -o /boot/grub2/grub.cfg

Next, reboot the system. Once it's up, log in and verify that quota is enabled using the command we used before:

mount | grep ' / '

It should now show us that quota is enabled, with an output as follows:

/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)

Now, since quota is enabled, we will install the quota tools in order to operate quotas for different users, groups, and so on:

yum -y install quota

Once quota is installed, we check the current quota for users using the following command:

repquota -as

The preceding command reports user quotas in a human-readable format. There are two ways we can limit quotas for users and groups: one is setting soft and hard limits on the amount of disk space used, and the other is limiting the number of files a user or group can create. In both cases, soft and hard limits are used. A soft limit warns the user when the limit is reached, whereas a hard limit cannot be bypassed.
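As a quick illustration of soft and hard limits, here is a hedged sketch using the setquota command, a non-interactive alternative to the edquota command covered next; the limit values and filesystem shown are illustrative assumptions:

# Give testuser a 500 MB soft limit and a 1 GB hard limit (values in 1K blocks)
# on the root filesystem, with no limit on the number of files (inodes):
setquota -u testuser 512000 1024000 0 0 /

# Verify the new limits in a human-readable report:
repquota -as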
We will use the following command to modify a user quota:

edquota -u username

And the following command to modify a group quota:

edquota -g groupname

If you have other partitions mounted separately, you have to modify /etc/fstab to enable quota on those filesystems by adding usrquota and grpquota after defaults for each specific partition. For example, to enable quotas for a separately mounted /var partition, the fstab entry would look something like this (the device name here is illustrative):

/dev/mapper/centos-var /var xfs defaults,usrquota,grpquota 0 0

Once you have finished enabling quota, remount the filesystem and run the following commands:

# To remount /var:
mount -o remount /var

# To enable quota:
quotacheck -avugm
quotaon -avug

Quota is something all system admins use to handle the disk space consumed on a server by users or groups and to limit overuse of that space. It thus helps them manage disk space usage on the system. In this regard, you should plan before your installation and create partitions accordingly, so that disk space is used properly. Multiple separate partitions, such as /var and /home, are always suggested, as these are generally the partitions that consume the most space on a Linux system. If we keep them on separate partitions, they will not eat up the root (/) filesystem space, and this is more failsafe than using an entire filesystem mounted only as root.

Password aging

It is a good policy to enforce password aging so that users are forced to change their password at certain intervals. This, in turn, helps to keep the system secure. We can use chage to configure a password to expire the first time the user logs in to the system.

Note: This process will not work if the user logs in to the system using SSH.

This method of using chage will ensure that the user is forced to change the password right away. If we use only chage <username>, it will display the current password aging values for the specified user and allow them to be changed interactively.

The following steps need to be performed to accomplish password aging:

1. Lock the user. If the user doesn't exist, we will use the useradd command to create the user. However, we will not assign any password to the user, so that it remains locked. But, if the user already exists on the system, we will use the usermod command to lock the user:

usermod -L <username>

2. Force an immediate password change using the following command:

chage -d 0 <username>

3. Unlock the account. This can be achieved in two ways. One is to assign an initial password, and the other is to assign a null password. We will take the first approach, as the second one, though possible, is not a good practice in terms of security. Therefore, here is what we do to assign an initial password:

Use the python command to start the command-line Python interpreter and generate an encrypted password:

import crypt; print crypt.crypt("Q!W@E#R$","Bing0000/")

Here, we have used the Q!W@E#R$ password with a salt combination of the alphanumeric characters Bing0000 followed by a / character. The output is the encrypted password, similar to BiagqBsi6gl1o.

Press Ctrl + D to exit the Python interpreter. At the shell, enter the following command with the encrypted output of the Python interpreter:

usermod -p "<encrypted-password>" <username>

So, in our case, if the username is testuser, we will use the following command:

usermod -p "BiagqBsi6gl1o" testuser

Now, upon initial login using the Q!W@E#R$ password, the user will be prompted for a new password.
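Putting the preceding steps together, the whole forced-password-change sequence for the testuser example might look like the following sketch (the password hash is the example output generated above):

# 1. Lock the existing account
usermod -L testuser

# 2. Expire the password so a change is forced at the next login
chage -d 0 testuser

# 3. Unlock the account by assigning the initial (encrypted) password
usermod -p "BiagqBsi6gl1o" testuser

# Optionally, review the account's current aging values
chage -l testuser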
Setting the password policy

This is a set of rules, defined in some files, which have to be followed when a system user sets a password. It's an important factor in security, because many security breaches in history started with the hacking of user passwords. This is the reason why most organizations set a password policy for their users, with which all usernames and passwords must comply. A password policy is usually defined by the following:

Password aging
Password length
Password complexity
Limits on login failures
Limits on prior password reuse

Configuring password aging and password length

Password aging and password length are defined in /etc/login.defs. Aging basically means the maximum number of days a password may be used, the minimum number of days allowed between password changes, and the number of warnings given before the password expires. Length refers to the number of characters required for creating the password. To configure password aging and length, we edit the /etc/login.defs file and set the various PASS values according to the policy set by the organization.

Note: The password aging controls defined here do not affect existing users; they only affect newly created users. So, we must set these policies when setting up the system or the server at the beginning.

The values we modify are:

PASS_MAX_DAYS: The maximum number of days a password can be used
PASS_MIN_DAYS: The minimum number of days allowed between password changes
PASS_MIN_LEN: The minimum acceptable password length
PASS_WARN_AGE: The number of days of warning given before a password expires

A sample configuration in login.defs might look like this (the values shown are illustrative; use whatever your organization's policy dictates):

PASS_MAX_DAYS   90
PASS_MIN_DAYS   7
PASS_MIN_LEN    12
PASS_WARN_AGE   7

Configuring password complexity and limiting password reuse

By editing the /etc/pam.d/system-auth file, we can configure the password complexity and the number of reused passwords to be denied. Password complexity refers to the complexity of the characters used in the password, and reused password denial refers to rejecting a desired number of passwords that the user has used in the past. By setting the complexity, we force the use of a desired number of uppercase characters, lowercase characters, numbers, and symbols in a password. The password will be rejected by the system until the complexity set by the rules is met. We do this using the following terms:

Force uppercase characters in passwords: ucredit=-X, where X is the number of uppercase characters required in the password.
Force lowercase characters in passwords: lcredit=-X, where X is the number of lowercase characters required in the password.
Force numbers in passwords: dcredit=-X, where X is the number of digits required in the password.
Force the use of symbols in passwords: ocredit=-X, where X is the number of symbols required in the password. For example:

password requisite pam_cracklib.so try_first_pass retry=3 type= ucredit=-2 lcredit=-2 dcredit=-2 ocredit=-2

Deny reused passwords: remember=X, where X is the number of past passwords to be denied. For example:

password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5

A sample /etc/pam.d/system-auth configuration combines lines like the two shown above.

Configuring login failures

We set the number of login failures allowed for a user in the /etc/pam.d/password-auth, /etc/pam.d/system-auth, and /etc/pam.d/login files. When a user's failed login attempts exceed the number defined here, the account is locked, and only a system administrator can unlock it.
To configure this, make the following additions to the files. The deny=X parameter configures the limit, where X is the number of failed login attempts allowed. Add the following two lines to the /etc/pam.d/password-auth and /etc/pam.d/system-auth files, and only the first line to the /etc/pam.d/login file:

auth        required    pam_tally2.so file=/var/log/tallylog deny=3 no_magic_root unlock_time=300
account     required    pam_tally2.so

To see the failure counts, use the following command:

pam_tally2 --user=<username>

To reset the failed attempts and enable the user to log in again, use the following command:

pam_tally2 --user=<username> --reset

Sudoers

Separation of user privileges is one of the main features of Linux operating systems. Normal users operate in limited-privilege sessions to limit the scope of their influence on the entire system. One special user that we already know about exists on Linux: root, which has super-user privileges. This account doesn't have the restrictions that apply to normal users. Users can execute commands with super-user or root privileges in a number of different ways.

There are mainly three ways to obtain root privileges on a system:

Log in to the system as root.
Log in to the system as any user and then use the su - command. This will ask you for the root password and, once authenticated, will give you a root shell session. We can disconnect this root shell using Ctrl + D or the exit command. Once exited, we come back to our normal user shell.
Run commands with root privileges using sudo, without spawning a root shell or logging in as root. The sudo command works as follows:

sudo <command to execute>

Unlike su, sudo will request the password of the user calling the command, not the root password. sudo doesn't work by default and must be set up before it functions correctly. In the following section, we will see how to configure sudo and modify the /etc/sudoers file so that it works the way we want it to.

visudo

sudo is configured via the /etc/sudoers file, and visudo is the command that enables us to edit that file.

Note: This file should not be edited with a normal text editor, to avoid potential race conditions when other processes update the file. The visudo command should be used instead.

The visudo command opens a text editor as normal, but then validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo operations. By default, visudo opens the /etc/sudoers file in the Vi editor, but we can configure it to use the nano text editor instead. For that, we have to make sure nano is already installed, or we can install it using:

yum install nano -y

Now, we can change the editor to nano by editing the ~/.bashrc file:

export EDITOR=/usr/bin/nano

Then, source the file using:

. ~/.bashrc

Now, we can use visudo with nano to edit the /etc/sudoers file. So, let's open the /etc/sudoers file using visudo and learn a few things. We can use different kinds of aliases for different sets of commands, software, services, users, groups, and so on. For example:

Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool
Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum
Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig

and many more...
We can use these aliases to assign a set of command execution rights to a user or a group. For example, if we want to assign the NETWORKING set of commands to the group netadmin, we will define:

%netadmin ALL = NETWORKING

Otherwise, if we want to allow the wheel group users to run all commands, we use the following:

%wheel  ALL=(ALL)  ALL

If we want a specific user, john, to get access to all commands, we use the following:

john  ALL=(ALL)  ALL

We can create different groups of users, with overlapping membership:

User_Alias      GROUPONE = abby, brent, carl
User_Alias      GROUPTWO = brent, doris, eric
User_Alias      GROUPTHREE = doris, felicia, grant

Group names must start with a capital letter. We can then allow members of GROUPTWO to update the yum database and run all the commands assigned to the SOFTWARE alias defined earlier by creating a rule like this:

GROUPTWO    ALL = SOFTWARE

If we do not specify a user/group to run as, sudo defaults to the root user. We can allow members of GROUPTHREE to shut down and reboot the machine by creating a command alias and using it in a rule for GROUPTHREE:

Cmnd_Alias      POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE  ALL = POWER

Here, we create a command alias called POWER that contains the commands to power off and reboot the machine, and then allow the members of GROUPTHREE to execute them.

We can also create Runas aliases, which can replace the portion of the rule that specifies the user to execute the command as:

Runas_Alias     WEB = www-data, apache
GROUPONE    ALL = (WEB) ALL

This will allow anyone who is a member of GROUPONE to execute commands as the www-data user or the apache user. Just keep in mind that later rules will override earlier rules when there is a conflict between the two.

There are a number of ways to achieve more control over how sudo handles a command. Here are some examples:

The updatedb command associated with the mlocate package is relatively harmless. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this:

GROUPONE    ALL = NOPASSWD: /usr/bin/updatedb

NOPASSWD is a tag that means no password will be requested. It has a companion tag called PASSWD, which is the default behavior. A tag is relevant for the rest of the rule unless overruled by its twin tag later down the line. For instance, we can have a line like this:

GROUPTWO    ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill

In this case, a user can run the updatedb command without a password as the root user, but entering the user's password will be required for running the kill command.

Another helpful tag is NOEXEC, which can be used to prevent some dangerous behavior in certain programs. For example, some programs, such as less, can spawn other commands by typing this from within their interface:

!command_to_run

This basically executes any command the user gives it with the same permissions that less is running under, which can be quite dangerous. To restrict this, we could use a line like this:

username    ALL = NOEXEC: /usr/bin/less

We should now have a clear understanding of what sudo is and how we modify and provide access rights using visudo. There are many more things left to explore here. You can check the default /etc/sudoers file, which has a good number of examples, using the visudo command, or you can read the sudoers manual as well. One point to remember is that root privileges are not given to regular users often.
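To tie these pieces together, here is a hedged sketch of a minimal /etc/sudoers fragment using the constructs discussed above (the user, group, and alias names are illustrative; always edit this file through visudo):

# Command aliases group related binaries together
Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum
Cmnd_Alias POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot

# A user alias with two members
User_Alias OPERATORS = abby, brent

# OPERATORS may run SOFTWARE commands (password required)
# and POWER commands (no password)
OPERATORS ALL = SOFTWARE, NOPASSWD: POWER

# Members of the wheel group may run anything as any user
%wheel ALL=(ALL) ALL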
It is important to understand what these commands do when you execute them with root privileges. Do not take the responsibility lightly. Learn the best way to use these tools for your use case, and lock down any functionality that is not needed.

Reference

Now, let's take a look at the major reference used throughout this article:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/index.html

Summary

In this article, we learned about advanced user management and how to manage users through the command line, along with password aging, quotas, and /etc/sudoers, and how to modify it using visudo. User and password management is a regular task that a system administrator performs on servers, and it plays a very important role in the overall security of a system.

Resources for Article:

Further resources on this subject:
SELinux - Highly Secured Web Hosting for Python-based Web Applications [article]
A Peek Under the Hood – Facts, Types, and Providers [article]
Puppet Language and Style [article]


Video Surveillance, Background Modeling

Packt
30 Dec 2015
7 min read
In this article by David Millán Escrivá, Prateek Joshi, and Vinícius Godoy, the authors of the book OpenCV By Example, we will learn how to detect moving objects. To do this, we first need to build a model of the background. This is not the same as direct frame differencing, because we are actually modeling the background and using this model to detect moving objects. When we say that we are modeling the background, we are basically building a mathematical formulation that can be used to represent the background, so this performs much better than the simple frame differencing technique. The technique tries to detect the static parts of the scene, include them in the background model, and keep that model updated. The background model is then used to detect background pixels. So, it's an adaptive technique that can adjust according to the scene.

(For more resources related to this topic, see here.)

Naive background subtraction

Let's start the discussion from the beginning. What does a background subtraction process look like? Consider the following image:

The preceding image represents the background scene. Now, let's introduce a new object into this scene:

As shown in the preceding image, there is a new object in the scene. So, if we compute the difference between this image and our background model, we should be able to identify the location of the TV remote:

The overall process looks like this:

Does it work well? There's a reason why we call it the naive approach. It works under ideal conditions, and as we know, nothing is ideal in the real world. It does a reasonably good job of computing the shape of the given object, but it does so under some constraints. One of the main requirements of this approach is that the color and intensity of the object should be sufficiently different from that of the background. Some of the factors that affect this kind of algorithm are image noise, lighting conditions, autofocus in cameras, and so on.

Once a new object enters our scene and stays there, it will be difficult to detect new objects that are in front of it. This is because we don't update our background model, and the new object is now part of our background. Consider the following image:

Now, let's say a new object enters our scene:

We identify this to be a new object, which is fine. Let's say another object comes into the scene:

It will be difficult to identify the location of these two different objects because their locations overlap. Here's what we get after subtracting the background and applying the threshold:

In this approach, we assume that the background is static. If some parts of our background start moving, those parts will start getting detected as new objects. So, even minor movements, say a waving flag, will cause problems in our detection algorithm. This approach is also sensitive to changes in illumination, and it cannot handle any camera movement. Needless to say, it's a delicate approach! We need something that can handle all these things in the real world.

Frame differencing

We know that we cannot keep a static background image that can be used to detect objects. One of the ways to fix this is to use frame differencing. It is one of the simplest techniques that we can use to see which parts of the video are moving. When we consider a live video stream, the difference between successive frames gives a lot of information. The concept is fairly straightforward: we just take the difference between successive frames and display the difference.
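As a warm-up before the three-frame version used later in this article, here is a minimal sketch of plain two-frame differencing, written in the same OpenCV 2.x-style C++ as the rest of the code (the webcam index and the threshold value of 30 are illustrative assumptions, not from the book):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // Open the default webcam
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    Mat prev, curr, gray, diff, mask;

    // Grab the first frame and convert it to grayscale
    cap >> prev;
    cvtColor(prev, prev, CV_BGR2GRAY);

    while (true)
    {
        cap >> curr;
        if (curr.empty())
            break;
        cvtColor(curr, gray, CV_BGR2GRAY);

        // Pixel-wise |current - previous|
        absdiff(gray, prev, diff);

        // Keep only strong changes as a binary motion mask
        threshold(diff, mask, 30, 255, THRESH_BINARY);

        imshow("Motion", mask);
        gray.copyTo(prev);

        // Exit when the Esc key (ASCII 27) is pressed
        if (waitKey(30) == 27)
            break;
    }

    return 0;
}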
If I move my laptop rapidly, we can see something like this:

Instead of the laptop, let's move the object and see what happens. If I rapidly shake my head, it will look something like this:

As you can see in the preceding images, only the moving parts of the video get highlighted. This gives us a good starting point for seeing the areas that are moving in the video. Let's take a look at the function to compute the frame difference:

Mat frameDiff(Mat prevFrame, Mat curFrame, Mat nextFrame)
{
    Mat diffFrames1, diffFrames2, output;

    // Compute absolute difference between current frame and the next frame
    absdiff(nextFrame, curFrame, diffFrames1);

    // Compute absolute difference between current frame and the previous frame
    absdiff(curFrame, prevFrame, diffFrames2);

    // Bitwise "AND" operation between the above two diff images
    bitwise_and(diffFrames1, diffFrames2, output);

    return output;
}

Frame differencing is fairly straightforward. We compute the absolute difference between the current frame and the previous frame, and between the current frame and the next frame. We then take these frame differences and apply the bitwise AND operator. This will highlight the moving parts of the image. If you just compute the difference between the current frame and the previous frame, it tends to be noisy. Hence, we need to use the bitwise AND operator between successive frame differences to get some stability when we look at the moving objects.

Let's take a look at the function that extracts and returns a frame from the webcam:

Mat getFrame(VideoCapture cap, float scalingFactor)
{
    Mat frame, output;

    // Capture the current frame
    cap >> frame;

    // Resize the frame
    resize(frame, frame, Size(), scalingFactor, scalingFactor, INTER_AREA);

    // Convert to grayscale
    cvtColor(frame, output, CV_BGR2GRAY);

    return output;
}

As we can see, it's pretty straightforward. We just need to resize the frame and convert it to grayscale. Now that we have the helper functions ready, let's take a look at the main function and see how it all comes together:

int main(int argc, char* argv[])
{
    Mat frame, prevFrame, curFrame, nextFrame;
    char ch;

    // Create the capture object
    // 0 -> input arg that specifies it should take the input from the webcam
    VideoCapture cap(0);

    // If you cannot open the webcam, stop the execution!
    if( !cap.isOpened() )
        return -1;

    // Create GUI windows
    namedWindow("Frame");

    // Scaling factor to resize the input frames from the webcam
    float scalingFactor = 0.75;

    prevFrame = getFrame(cap, scalingFactor);
    curFrame = getFrame(cap, scalingFactor);
    nextFrame = getFrame(cap, scalingFactor);

    // Iterate until the user presses the Esc key
    while(true)
    {
        // Show the object movement
        imshow("Object Movement", frameDiff(prevFrame, curFrame, nextFrame));

        // Update the variables and grab the next frame
        prevFrame = curFrame;
        curFrame = nextFrame;
        nextFrame = getFrame(cap, scalingFactor);

        // Get the keyboard input and check if it's 'Esc'
        // 27 -> ASCII value of 'Esc' key
        ch = waitKey( 30 );
        if (ch == 27) {
            break;
        }
    }

    // Release the video capture object
    cap.release();

    // Close all windows
    destroyAllWindows();

    return 1;
}

How well does it work? As we can see, frame differencing addresses a couple of important problems that we faced earlier. It can quickly adapt to lighting changes or camera movements. If an object comes into the frame and stays there, it will not be detected in future frames. One of the main concerns of this approach is the detection of uniformly colored objects.
It can only detect the edges of a uniformly colored object, because a large portion of such an object will result in very low pixel differences, as shown in the following image:

Let's say this object moved slightly. If we compare the new frame with the previous frame, it will look like this:

Hence, we have very few pixels labeled on that object. Another concern is that it is difficult to detect whether an object is moving toward the camera or away from it.

Resources for Article:

Further resources on this subject:
Tracking Objects in Videos [article]
Detecting Shapes Employing Hough Transform [article]
Hand Gesture Recognition Using a Kinect Depth Sensor [article]


Courses, Users, and Roles

Packt
30 Dec 2015
9 min read
In this article by Alex Büchner, the author of Moodle 3 Administration, Third Edition, we are given an overview of Moodle courses, users, and roles. The three concepts are inherently intertwined, and none of them can be used without the other two. We will deal with the basics of the three core elements and show how they work together. Let's see what they are:

Moodle courses: Courses are central to Moodle, as this is where learning takes place. Teachers upload their learning resources, create activities, assist in learning and grade work, monitor progress, and so on. Students, on the other hand, read, listen to, or watch learning resources, participate in activities, submit work, collaborate with others, and so on.

Moodle users: These are individuals accessing our Moodle system. Typical users are students and teachers/trainers, but there are also others, such as teaching assistants, managers, parents, assessors, examiners, or guests. Oh, and the administrator, of course!

Moodle roles: Roles are effectively permissions that specify which features users are allowed to access and, also, where and when (in Moodle) they can access them.

Bear in mind that this article only covers the basic concepts of these three core elements.

(For more resources related to this topic, see here.)

A high-level overview

To give you an overview of courses, users, and roles, let's have a look at the following diagram. It shows nicely how central the three concepts are and how other features relate to them. All of their intricacies will be dealt with in due course, so for now, just start getting familiar with some Moodle terminology.

Let's start at the bottom-left and cycle through the pyramid clockwise. Users have to go through an Authentication process to get access to Moodle. They then have to go through the Enrolments step to be able to participate in Courses, which themselves are organized into Categories. Groups & Cohorts are different ways to group users at course level or site-wide. Users are granted Roles in particular Contexts. Which role is allowed to do what, and which isn't, depends entirely on the Permissions set within that role.

The diagram also demonstrates a catch-22 situation. If we start with users, we have no courses to enrol them in (except the front page); if we start with courses, we have no users who can participate in them. Not to worry, though. Moodle lets us go back and forth between any administrative areas and, often, perform multiple tasks at once.

Moodle courses

Moodle manages activities and stores resources in courses, and this is where learning and collaboration take place. Courses themselves belong to categories, which are organized hierarchically, similar to folders on our local hard drive. Moodle comes with a default category called Miscellaneous, which is sufficient to show the basics of courses. Moodle is a course-centric system.

To begin with, let's create our first course. To do so, go to Courses | Manage courses and categories. Here, select the Miscellaneous category. Then, select the Create new course link, and you will be directed to the screen where course details have to be entered. For now, let's focus on the two compulsory fields, namely Course full name and Course short name. The former is displayed at various places in Moodle, whereas the latter is, by default, used to identify the course and is also shown in the breadcrumb trail.
For now, we leave all other fields empty or at their default values and save the course by clicking on the Save changes button at the bottom. The screen displayed after clicking on Save changes shows enrolled users, if any. Since we just created the course, there are no users present in it yet. In fact, except for the administrator account we are currently using, there are no users at all on our Moodle system. So, we leave the course without users for now and add some users to our LMS before we come back to this screen (select the Home link in the breadcrumb).

Moodle users

Moodle users, or rather their user accounts, are dealt with under Users | Accounts. Before we start, it is important to understand the difference between authentication and enrolment. Moodle users have to be authenticated in order to log in to the system. Authentication grants users access to the system through login, where a username and password have to be given (this also applies to guest accounts, where a username is allotted internally). Moodle supports a significant number of authentication mechanisms, which are discussed later in detail.

Enrolment happens at course level. However, a user has to be authenticated to the system before enrolment in a course can take place. So, a typical workflow is as follows (there are exceptions, as always, but we will deal with them when we get there):

1. Create your users
2. Create your courses (and categories)
3. Associate users to courses and assign roles

Again, this sequence demonstrates nicely how intertwined courses, users, and roles are in Moodle. Another way of looking at the difference between authentication and enrolment is how a user gets access to a course. Please bear in mind that this is a very simplistic view, and it ignores supported features such as external authentication, guest access, and self-enrolment. During the authentication phase, a user enters his credentials (username and password), or they are entered automatically via single sign-on. If the account exists locally, that is, within Moodle, and the password is valid, he/she is granted access. The next phase is enrolment. If the user is enrolled and the enrolment hasn't expired, he/she is granted access to the course. You will come across a more detailed version of these graphics later on, but for now, this hopefully demonstrates the difference between authentication and enrolment.

To add a user account manually, go to Users | Accounts | Add a new user. As with courses, we will only focus on the mandatory fields, which should be self-explanatory:

Username (has to be unique)
New password (if a password policy has been set, certain rules might apply)
First name
Surname
Email address

Make sure you save the account information by selecting Create user at the bottom of the page. If any entered information is invalid, Moodle will display error messages right above the relevant fields.

I have created a few more accounts; to see who has access to your Moodle system, go to Users | Accounts | Browse list of users, where you will see all users. Actually, I did this via batch upload. Now that we have a few users on our system, let's go back to the course we created a minute ago and manually enrol new participants in it. To achieve this, go back to Courses | Manage courses and categories, select the Miscellaneous category again, and select the created demo course. Underneath the listed demo course, course details will be displayed alongside a number of options (on large screens, details are shown to the right). Here, select Enrolled users.
As expected, the list of enrolled users is still empty. Click on the Enrol users button to change this. To grant users access to the course, select the Enrol button beside them and close the window. In the following screenshot, three users, participant01 to participant03, have already been enrolled in the course. Two more users, participant04 and participant05, have been selected for enrolment.

You have probably spotted the Assign roles dropdown at the top of the pop-up window. This is where you select which role the selected user has once he/she is enrolled in the course. For example, to give Tommy Teacher appropriate access to the course, we have to select the Teacher role first, before enrolling him in the course. This leads nicely to the third part of the pyramid, namely, roles.

Moodle roles

Roles define what users can or cannot see and do in your Moodle system. Moodle comes with a number of predefined roles (we already saw Student and Teacher), but it also allows us to create our own roles, for instance, for parents or external assessors. Each role has a certain scope (called context), which is defined by a set of permissions (expressed as capabilities). For example, a teacher is allowed to grade an assignment, whereas a student isn't. Or, a student is allowed to submit an assignment, whereas a teacher isn't.

A role is assigned to a user in a context. Okay, so what is a context? A context is a ring-fenced area in Moodle where roles can be assigned to users. A user can be assigned different roles in different contexts, where the context can be a course, a category, an activity module, a user, a block, the front page, or Moodle itself. For instance, you are assigned the Administrator role for the entire system, but additionally, you might be assigned the Teacher role in any courses you are responsible for; or, a learner will be given the Student role in a course, but might be granted the Teacher role in a forum to act as a moderator.

To give you a feel for how a role is defined, let's go to Users | Permissions, where roles are managed, and select Define roles. Click on the Teacher role and, after some general settings, you will see a (very) long list of capabilities:

For now, we only want to stick with the example we used throughout the article. Now that we know what roles are, we can slightly rephrase what we have done. Instead of saying, "We have enrolled the user participant01 in the demo course as a student", we would say, "We have assigned the Student role to the user participant01 in the context of the demo course." In fact, the term enrolment is a little bit of a legacy and goes back to the times when Moodle didn't have the customizable, finely grained architecture of roles and permissions that it does now. One can speculate whether there are linguistic connotations between the terms role and enrolment.

Summary

In this article, we very briefly introduced the concepts of Moodle courses, users, and roles. We also saw how central they are to Moodle and how they are linked together. None of these concepts can exist without the other two, and this is something you should bear in mind throughout. Well, theoretically they can, but it would be rather impractical when you try to model your learning environment. If you haven't fully understood any of the three areas, don't worry. The intention was only to provide you with a high-level overview of the three core components and to touch upon the basics.
Resources for Article:

Further resources on this subject:
Moodle for Online Communities [article]
Gamification with Moodle LMS [article]
Moodle Plugins [article]


Learning Xero Purchases

Packt
30 Dec 2015
14 min read
In this article written by Jon Jenkins, the author of the book Learning Xero, we will learn all of the core Xero purchase processes, from posting purchase bills and editing contacts to making supplier payments. You will learn how the purchase process works and how it impacts inventory. By the end of this article, you will have a thorough understanding of the purchases dashboard and its component parts, as well as the ability to easily navigate to the areas you need. These are the topics we'll cover in this article:

Understanding the purchases dashboard layout
Adding new contacts and posting bills
Adding new inventory items to bills

(For more resources related to this topic, see here.)

Dashboard

The purchases dashboard in Xero is your one-stop shop for everything you need to manage purchases for the business. To get to the purchases dashboard, you can click on Bills you need to pay on the main dashboard or navigate to Accounts | Purchases. We have highlighted the major elements that make up the dashboard below.

On the right-hand side of the dashboard, you will find a Search button. Use it to save time when searching by bill number, reference, supplier, or amount, and drill down even further by selecting a date range by due date or transaction date.

On the left-hand side of the dashboard, you have the main controls for performing tasks such as adding a new bill, repeating bill, credit note, or purchase order.

Under the main controls are the summary figures for purchases, with the figure in brackets representing the number of transactions that make up the figure and the numerical value representing the total of those transactions. Clicking on any of these sections will drill down to the bills that make up that total. You can also see draft bills that need approval. Until bills have been approved (so anything with a Draft or Awaiting Approval status), they will not be included in any reports you run within Xero, such as the profit and loss account. So, if you have an approval process within your business, ensure that people adhere to it to improve the accuracy of your reports.

Once you click on a summary, you will see a table showing the bills that make up that selection. You will also see tabs relating to the other criteria within Xero, making it easy to navigate between the various lists without having to perform searches or navigate away from this screen and break your workflow. This view also allows you to add new transactions and mark bills as Paid, providing you with an uninterrupted workflow across the process of making purchases.

Adding contacts

You can add a contact in Xero when raising a bill, to avoid navigating away and coming back, but you cannot enter any of the main contact details at that point. If you need to enter a purchase order for a new supplier, we would recommend adding the supplier first by navigating to Contacts | All Contacts | Add Contact, so that you have all the correct details for issuing the document.

Importing

When you first start using Xero, or even if you have been using it for a while, you may have a separate database of contacts elsewhere. You can import these into Xero using the predetermined CSV file template, which will enable you to keep all of your records in sync. Navigate to Contacts | All Contacts | Import. When you click on Import, this action will take you to a screen where you can download the Xero contact import template.
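Before moving on, here is a hedged sketch of what a populated contact import file might look like (the column headings shown are illustrative assumptions; always use the exact headers from the template you download):

*ContactName,EmailAddress,FirstName,LastName
Acme Supplies Ltd,accounts@acmesupplies.example,Jane,Smith
Widget Wholesale Co,billing@widgetwholesale.example,Sam,Jones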
Download the file so that you can compile your own file for importing. We would recommend doing a search in the Xero Help Guide on Import Contacts at this point, and taking a look through the guide before you attempt an import, to ensure that you have configured the file correctly and there is less chance of the file being rejected.

Editing a contact

Once a contact record has been set up, you can edit the details by navigating to Contacts | Suppliers. From here, you can find the supplier you need using the letters at the top or the Search facility. When you click on the supplier, the Edit button will take you into the contact record so that you can make changes as required. Ensure that you save any changes you have made.

A few of the options you may wish to complete here, to make the processing of purchases a bit easier and quicker, are the defaults. The items that you can default are the account code where you would like bills to be posted and whether amounts are exclusive or inclusive of tax. These will help make it quicker to post bills and reduce the likelihood of mispostings. They can, of course, be overwritten when posting bills. You can also choose a default VAT tax rate, which again can resolve issues where people are not sure which tax rate to use. There are various other default settings you can choose in the contact record, and we do suggest that you take a look at these to see where you can both reduce the opportunity for mistakes and make it quicker to process bills.

Purchasing

Getting the payment of supplier bills correct is fundamental to running a tight ship. Making mistakes when purchasing goods can lead to incorrect stock holding, overpayments to suppliers, or deliveries being put on hold, which can have a significant impact on the trading of the business. It is therefore crucial that you do things the correct way.

Standard bills

There are many ways to create a bill in Xero. From the Bills you need to pay section on the right-hand side of the main dashboard when you first log in, click on New bill. Alternatively, you can navigate to Accounts | Purchases | New. You may also notice that when you have drilled down into any of the summary items on the dashboard, above each tab you will also see the option to add a new bill.

Whichever way you raise a new bill, you will then be shown the bill entry screen. If you start typing the supplier name in the From field, you will be provided with a list of supplier names beginning with those letters, to make it easier and quicker to make a selection. If there is no contact record at this point, you can click on + add contact to add a new supplier name so that you can finish raising the bill. As you start typing, Xero will provide a list of possible matches, so be careful not to set up multiple accounts for the same supplier.

The Date field of the bill will default to the day on which you raise the bill. You can change this if you wish. If you tab through Due Date, it will default to the day the bill is raised. If you have selected business-wide default payment terms, the date will default to what you have selected. Likewise, if you have set a supplier default credit term, that will override the business default at this point.

The Reference field is where you will enter the supplier invoice number.
The paper icon allows you to upload items, such as a copy of the bill, a purchase order, or perhaps a contract, and attach them to the bill. It is up to you, but attaching documents such as bills makes life a lot easier, as clicking on the paper icon allows you to see the document on screen; no more sifting through filing cabinets and folders. You can attach more than one document if you need to.

You can use the Total field as a double-check if you like, but you do not have to use it. If you prefer, you can enter the bill total in this field; if what you have entered does not tally when you try to post the invoice, Xero will tell you, and you can adjust accordingly.

You can change the currency of the bill at this point using the Currency field, but again, if you have suppliers that bill you in a different currency, we would suggest setting this as a default in the contact record to avoid posting the supplier bill in the wrong currency. If you do not have multiple currencies set up in Xero, you will not see this option when posting a bill.

The Amounts are field can be used to toggle between tax-exclusive and tax-inclusive values, which makes it easier when posting bills. If you have the gross purchase figure, you would use the tax-inclusive option and allow Xero to calculate the VAT for you automatically, without reaching for a calculator.

Inventory items

Inventory items on a bill can save a lot of unnecessary data entry if you have standard services or products that you purchase within the business. You can add an inventory item when posting a bill, or you can add them by navigating to Settings | General Settings | Inventory Items. If you have decided to use inventory items, it would be sensible to do so from the start and use the import file to set them up quickly and easily.

Each time you post a bill, you can then select an inventory item; this will pull through the default description and amounts for that item. You can adjust the descriptions as necessary, but it saves you from having to type your product descriptions over and over again. If you are using inventory items, the amount you have purchased will be recorded against the inventory item. If you use Xero for inventory control, it will make the necessary adjustments between your inventory on hand and the cost of goods sold in the profit and loss account. If you do not use inventory items, as a minimum you will need to enter a description, a quantity, the unit price, the purchases account you are posting to, and the tax rate to be used.

Along with making data entry quicker, inventory items give you the ability to run purchase reports by item, meaning that you do not have to set up a different account code for each type of goods you wish to report on. It is worth taking some time to think about what you want to report on before posting your first bills.

Once you have finished entering your bill details, you can then save or approve your bill. The User Roles that have been assigned to employees will drive the functionality they have available here. If you choose to save a bill, it appears in the Draft section of the purchases summary on the dashboard. If you have a purchase workflow in your business where a member of staff can raise but not approve a bill, then Save would be the option for them. As you can see in the following screenshot, there are several options available.
If you are posting several bills in one go, choose Save & add another, as this saves you from being sent to the purchases dashboard each time and then having to navigate back to this section. In order for a bill to be posted to the accounts ledgers, it will need to be approved. Once the bill has been approved, you will see that a few extra options appear both above and below the invoice. Above the invoice, you now have the ability to print the invoice as a PDF, attach a document, or edit the bill by clicking on Bill Options. Under the bill, a new box appears, which gives you the ability to mark the bill as paid. Under this, you now have the History & Notes section, which gives you a full audit trail of what has happened to the bill, including whether anyone has edited it.

Repeating bills

The process for raising a repeating bill is exactly the same as for a standard bill, except for completing a few additional fields. You can choose the frequency at which the bill repeats, the date it is to start from, and an optional end date. The Due Date field defaults to your business default, or to the supplier default if you have one set up in the contact record. Before you can save the bill, you will have to select how you want the bill to be posted. If you select Save as Draft, someone will have to approve each bill. If you select Approve, the bill will be posted to the accounting records with no manual intervention. Here are a couple of points to note if using repeating bills:

You would only want to use this for invoices where there are regular items to be posted to the same account code and for the same amount. It is easy to forget that you have set these up and end up posting the bill as well, duplicating transactions.

You can set a reference for the repeating bill, and this will be the same for all bills posted. If you select the Save as Draft option rather than the Approve option, it will give you a chance to amend the reference to the correct invoice number before posting.

You can enter placeholders to add the week, month, year, or a combination of those to ensure that the correct narrative is used in the description. However, use this for reference only, and bear in mind the point raised previously about using Save as Draft instead of Approve.

Xero network key

The Xero Network Key allows you to receive your bill data directly into your Xero draft purchases section from someone else's sales ledger if they use Xero. This can be a massive time saver if you are doing lots of transactions with other Xero users. Each Xero subscription has its own unique Xero Network Key, which you can share with the other Xero business by entering it into the relevant field in the contact record. If you opt to do this, your bill data will be passed through to the Draft Purchases section in Xero, and you will need to check the details, enter the account code to post it to, and then approve. Minimal data entry is required, and this is a much quicker process. To locate your Xero Network Key to provide to the other Xero user, navigate to Settings | General Settings | Xero to Xero.

Batch payments

If you pay multiple invoices from a single supplier, or your bank gives you the ability to produce a BACS file to import into your online banking, then you may wish to use batch payments. Batch payments speed up the process of paying suppliers and reconciling the bank account.
It can do this in three ways:

You can mark lots of invoices from multiple suppliers as paid at the same time.

You can create a file to be imported into your online banking.

When the supplier payments appear on the bank feed, the amount paid from the online banking will match what you have allocated in Xero, meaning that autosuggest comes into play and will make the correct suggestion, rather than you having to use Find & Match to locate all of the individual payments you allocated, which is time consuming and error prone.

Summary

We have successfully added a purchase bill and identified the different types of bills available in Xero. In this article, we have run through the major purchases functions and set up repeating bills. We explored how to use inventory items to make the purchase invoicing process even easier and quicker. On top of that, we have also looked at how to navigate around the purchases dashboard, how to make changes, and how to track exactly what has happened to a bill.
Mastering CentOS 7 Linux Server

Packt
30 Dec 2015
19 min read
In this article, written by Bhaskarjyoti Roy, author of the book Mastering CentOS 7 Linux Server, we will introduce some advanced user and group management scenarios, along with examples of how to handle advanced options such as password aging and managing sudoers on a day-to-day basis. Here, we are assuming that we have already successfully installed CentOS 7, with root and user credentials set up in the traditional way. Also, the command examples in this article assume you are logged in as, or switched to, the root user.

The following topics will be covered:

User and group management from GUI and command line
Quotas
Password aging
Sudoers

Managing users and groups from GUI and command line

We can add a user to the system using useradd from the command line with a simple command, as follows:

useradd testuser

This creates a user entry in the /etc/passwd file and automatically creates the home directory for the user in /home. The /etc/passwd entry looks like this:

testuser:x:1001:1001::/home/testuser:/bin/bash

But, as we all know, the user is in a locked state and cannot log in to the system unless we add a password for the user using the command:

passwd testuser

This will, in turn, modify the /etc/shadow file, at the same time unlock the user, and the user will be able to log in to the system. By default, the preceding set of commands will create both a user and a group for testuser on the system. What if we want a certain set of users to be a part of a common group? We will use the -g option along with the useradd command to define the group for the user, but we have to make sure that the group already exists. So, to create users such as testuser1, testuser2, and testuser3 and make them part of a common group called testgroup, we will first create the group and then create the users using the -g or -G switch. So we will do this:

# To create the group:
groupadd testgroup

# To create the user with the above group and provide password and unlock user at the same time:
useradd testuser1 -G testgroup
passwd testuser1

useradd testuser2 -g 1002
passwd testuser2

Here, we have used both -g and -G. The difference between them is: with -G, we create the user with its own default group and assign the user to the common testgroup as well, but with -g, we create the user as part of testgroup only. In both cases, we can use either the gid or the group name obtained from the /etc/group file. There are a couple more options that we can use for advanced user creation; for example, for system users, we have to use the -r option, which will create a user on the system with a UID in the system range (below 1000 on CentOS 7). We can also use -u to define a specific UID, which must be unique and, for a regular user, at least 1000. Common options that we can use with the useradd command are:

-c: This option is used for comments, generally to define the user's real name, such as -c "John Doe".
-d: This option is used to define home-dir; by default, the home directory is created in /home, such as -d /var/<user name>.
-g: This option is used for the group name or the group number of the user's default group. The group must already have been created earlier.
-G: This option is used for additional group names or group numbers, separated by commas, of which the user is a member. Again, these groups must also have been created earlier.
-r: This option is used to create a system account, with a UID in the system range (below 1000 on CentOS 7) and without a home directory.
-u: This option is the user ID for the user. It must be unique and, for a regular user, at least 1000 on CentOS 7.

There are a few quick options that we use with the passwd command as well. These are:

-l: This option is to lock the password for the user's account
-u: This option is to unlock the password for the user's account
-e: This option is to expire the password for the user
-x: This option is to define the maximum days for the password lifetime
-n: This option is to define the minimum days for the password lifetime

Quotas

In order to control the disk space used in the Linux filesystem, we must use quota, which enables us to control disk space usage and thus helps us resolve low disk space issues to a great extent. For this, we have to enable user and group quotas on the Linux system. In CentOS 7, user and group quotas are not enabled by default, so we have to enable them first. To check whether quota is enabled or not, we issue the following command:

mount | grep ' / '

If quota is not enabled, the output will contain noquota for the root filesystem. Now, we have to enable quota on the root (/) filesystem, and to do that, we first edit the file /etc/default/grub and add the following to GRUB_CMDLINE_LINUX:

rootflags=usrquota,grpquota

The GRUB_CMDLINE_LINUX line should read as follows:

GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto vconsole.keymap=us rhgb quiet rootflags=usrquota,grpquota"

Since we have to reflect the changes we just made, we should back up the grub configuration using the following command:

cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.original

Now, we have to rebuild grub with the changes we just made using the command:

grub2-mkconfig -o /boot/grub2/grub.cfg

Next, reboot the system. Once it's up, log in and verify that quota is enabled using the command we used before:

mount | grep ' / '

It should now show us that quota is enabled, with an output as follows:

/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)

Now, since quota is enabled, we will install the quota tools so that we can operate quotas for different users, groups, and so on:

yum -y install quota

Once quota is installed, we check the current quota for users using the following command:

repquota -as

The preceding command will report user quotas in a human-readable format. There are two ways we can limit quota for users and groups: one is setting soft and hard limits for the size of disk space used, and the other is limiting the user or group by the number of files they can create. In both cases, soft and hard limits are used. A soft limit warns the user when it is reached, while a hard limit cannot be bypassed.
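As an aside, if you would rather set limits non-interactively (from a script, say), the setquota command can be used instead of the interactive editor shown next. Here is a minimal sketch, assuming a user called testuser and quota enabled on the root filesystem, with block counts in kilobytes:

# soft limit of ~500 MB, hard limit of ~600 MB, no limits on the number of files
setquota -u testuser 512000 614400 0 0 /

The interactive approach below achieves the same thing through an editor.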
We will use the following command to modify a user quota:

edquota -u username

Now, we will use the following command to modify a group quota:

edquota -g groupname

If you have other partitions mounted separately, you have to modify /etc/fstab to enable quota on those filesystems by adding usrquota and grpquota after defaults for each specific partition; for example, this is how we would enable quota for a separately mounted /var partition. Once you are finished enabling quota, remount the filesystem and run the following commands:

To remount /var:
mount -o remount /var

To enable quota:
quotacheck -avugm
quotaon -avug

Quota is something all system admins use to handle the disk space consumed on a server by users or groups and to limit overuse of that space. It thus helps them manage disk space usage on the system. In this regard, it should be noted that you should plan before your installation and create partitions accordingly, so that disk space is used properly. Multiple separate partitions, such as /var and /home, are always suggested, as these are generally the partitions that consume the most space on a Linux system. If we keep them on separate partitions, they will not eat up the root (/) filesystem space, which is more failsafe than using an entire filesystem mounted only as root.

Password aging

It is a good policy to have password aging so that users are forced to change their password at a certain interval. This, in turn, helps to keep the system secure. We can use chage to configure a password to expire the first time the user logs in to the system.

Note: This process will not work if the user logs in to the system using SSH.

This method of using chage will ensure that the user is forced to change the password right away. If we use only chage <username>, it will display the current password aging values for the specified user and allow them to be changed interactively. The following steps need to be performed to accomplish password aging:

Lock the user. If the user doesn't exist, we will use the useradd command to create the user; however, we will not assign any password to the user, so that it remains locked. But, if the user already exists on the system, we will use the usermod command to lock the user:

usermod -L <username>

Force an immediate password change using the following command:

chage -d 0 <username>

Unlock the account. This can be achieved in two ways: one is to assign an initial password, and the other is to assign a null password. We will take the first approach, as the second, though possible, is not good practice in terms of security. Therefore, here is what we do to assign an initial password. Use the python command to start the command-line Python interpreter:

import crypt; print crypt.crypt("Q!W@E#R$","Bing0000/")

Here, we have used the Q!W@E#R$ password with a salt combination of the alphanumeric characters Bing0000 followed by a / character. The output is the encrypted password, similar to 'BiagqBsi6gl1o'. Press Ctrl + D to exit the Python interpreter. At the shell, enter the following command with the encrypted output of the Python interpreter:

usermod -p "<encrypted-password>" <username>

So, here, in our case, if the username is testuser, we will use the following command:

usermod -p "BiagqBsi6gl1o" testuser

Now, upon initial login using the "Q!W@E#R$" password, the user will be prompted for a new password.
Setting the password policy

This is a set of rules, defined in some files, which have to be followed when setting up system users. It's an important factor in security, because many security breaches have started with the hacking of user passwords. This is the reason why most organizations set a password policy for their users; all usernames and passwords must comply with it. A password policy is usually defined by the following:

Password aging
Password length
Password complexity
Limit login failures
Limit prior password reuse

Configuring password aging and password length

Password aging and password length are defined in /etc/login.defs. Aging basically means the maximum number of days a password may be used, the minimum number of days allowed between password changes, and the number of warnings given before the password expires. Length refers to the number of characters required for creating the password. To configure password aging and length, we should edit the /etc/login.defs file and set the various PASS values according to the policy set by the organization.

Note: The password aging controls defined here do not affect existing users; they only affect newly created users. So, we must set these policies when setting up the system or the server at the beginning.

The values we modify are:

PASS_MAX_DAYS: The maximum number of days a password can be used
PASS_MIN_DAYS: The minimum number of days allowed between password changes
PASS_MIN_LEN: The minimum acceptable password length
PASS_WARN_AGE: The number of days of warning to be given before a password expires

Configuring password complexity and limiting reused password usage

By editing the /etc/pam.d/system-auth file, we can configure password complexity and the number of reused passwords to be denied. Password complexity refers to the complexity of the characters used in the password, and reused password denial refers to rejecting the desired number of passwords that the user has used in the past. By setting the complexity, we force the usage of the desired number of capital characters, lowercase characters, numbers, and symbols in a password. The password will be denied by the system until the complexity rules are met. We do this using the following terms:

Force capital characters in passwords: ucredit=-X, where X is the number of capital characters required in the password.
Force lowercase characters in passwords: lcredit=-X, where X is the number of lowercase characters required in the password.
Force numbers in passwords: dcredit=-X, where X is the number of digits required in the password.
Force the use of symbols in passwords: ocredit=-X, where X is the number of symbols required in the password. For example:

password requisite pam_cracklib.so try_first_pass retry=3 type= ucredit=-2 lcredit=-2 dcredit=-2 ocredit=-2

Deny reused passwords: remember=X, where X is the number of past passwords to be denied. For example:

password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5

Configuring login failures

We set the number of login failures allowed for a user in the /etc/pam.d/password-auth, /etc/pam.d/system-auth, and /etc/pam.d/login files. When a user's failed login attempts exceed the number defined here, the account is locked and only a system administrator can unlock it.
To configure this, make the following additions to the files. The deny=X parameter configures this, where X is the number of failed login attempts allowed. Add these two lines to the /etc/pam.d/password-auth and /etc/pam.d/system-auth files, and only the first line to the /etc/pam.d/login file:

auth        required    pam_tally2.so file=/var/log/tallylog deny=3 no_magic_root unlock_time=300
account     required    pam_tally2.so

To see failures, use the following command:

pam_tally2 --user=<username>

To reset the failure attempts and enable the user to log in again, use the following command:

pam_tally2 --user=<username> --reset

Sudoers

Separation of user privileges is one of the main features of Linux operating systems. Normal users operate in limited-privilege sessions to limit the scope of their influence on the entire system. One special user that we already know about is root, which has super-user privileges; this account doesn't have any of the restrictions that apply to normal users. Users can execute commands with super-user or root privileges in a number of different ways. There are mainly three different ways to obtain root privileges on a system:

Log in to the system as root.

Log in to the system as any user and then use the su - command. This will ask you for the root password and, once authenticated, will give you a root shell session. We can disconnect this root shell using Ctrl + D or the exit command. Once exited, we come back to our normal user shell.

Run commands with root privileges using sudo, without spawning a root shell or logging in as root. The sudo command works as follows:

sudo <command to execute>

Unlike su, sudo will request the password of the user calling the command, not the root password. Sudo doesn't work by default and needs to be set up before it functions correctly. In the following section, we will see how to configure sudo and modify the /etc/sudoers file so that it works the way we want it to.

visudo

Sudo is configured using the /etc/sudoers file, and visudo is the command that enables us to edit that file.

Note: This file should not be edited using a normal text editor, to avoid potential race conditions when other processes update the file. Instead, the visudo command should be used.

The visudo command opens a text editor as normal, but validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo operations. By default, visudo opens the /etc/sudoers file in the vi editor, but we can configure it to use the nano text editor instead. For that, we have to make sure nano is already installed, or we can install it using:

yum install nano -y

Now, we can change the editor by editing the ~/.bashrc file:

export EDITOR=/usr/bin/nano

Then, source the file using:

. ~/.bashrc

Now, we can use visudo with nano to edit the /etc/sudoers file. So, let's open the /etc/sudoers file using visudo and learn a few things. We can use different kinds of aliases for different sets of commands, software, services, users, groups, and so on. For example:

Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool
Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum
Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig

and many more...
We can use these aliases to assign a set of command execution rights to a user or a group. For example, if we want to assign the NETWORKING set of commands to the netadmin group, we will define:

%netadmin ALL = NETWORKING

Otherwise, if we want to allow the wheel group users to run all the commands, we use the following:

%wheel  ALL=(ALL)  ALL

If we want a specific user, john, to get access to all commands, we use the following:

john  ALL=(ALL)  ALL

We can create different groups of users, with overlapping membership:

User_Alias      GROUPONE = abby, brent, carl
User_Alias      GROUPTWO = brent, doris, eric
User_Alias      GROUPTHREE = doris, felicia, grant

Group names must start with a capital letter. We can then allow members of GROUPTWO to update the yum database and run all the commands assigned to the SOFTWARE alias defined earlier by creating a rule like this:

GROUPTWO    ALL = SOFTWARE

If we do not specify a user/group to run as, sudo defaults to the root user. We can allow members of GROUPTHREE to shut down and reboot the machine by creating a command alias and using that in a rule for GROUPTHREE:

Cmnd_Alias      POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE  ALL = POWER

We create a command alias called POWER that contains commands to power off and reboot the machine. We then allow the members of GROUPTHREE to execute these commands. We can also create Runas aliases, which can replace the portion of the rule that specifies the user to execute the command as:

Runas_Alias     WEB = www-data, apache
GROUPONE    ALL = (WEB) ALL

This will allow anyone who is a member of GROUPONE to execute commands as the www-data user or the apache user. Just keep in mind that later rules will override earlier rules when there is a conflict between the two. There are a number of ways that you can achieve more control over how sudo handles a command. Here are some examples: The updatedb command associated with the mlocate package is relatively harmless. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this:

GROUPONE    ALL = NOPASSWD: /usr/bin/updatedb

NOPASSWD is a tag that means no password will be requested. It has a companion tag called PASSWD, which is the default behavior. A tag is relevant for the rest of the rule unless overruled by its twin tag later down the line. For instance, we can have a line like this:

GROUPTWO    ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill

In this case, a user can run the updatedb command without a password as the root user, but entering a password will be required for running the kill command. Another helpful tag is NOEXEC, which can be used to prevent some dangerous behavior in certain programs. For example, some programs, such as less, can spawn other commands by typing this from within their interface:

!command_to_run

This basically executes any command the user gives it with the same permissions that less is running under, which can be quite dangerous. To restrict this, we could use a line like this:

username    ALL = NOEXEC: /usr/bin/less

We should now have a clear understanding of what sudo is and how we modify and provide access rights using visudo. There is much more to explore here: you can check the default /etc/sudoers file, which has a good number of examples, using the visudo command, or you can read the sudoers manual as well. One point to remember is that root privileges are not often given to regular users.
It is important for us to understand what these commands do when you execute them with root privileges. Do not take the responsibility lightly. Learn the best way to use these tools for your use case, and lock down any functionality that is not needed.

Reference

Now, let's take a look at the major reference used throughout this article: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/index.html

Summary

In all, we learned about some advanced user management and how to manage users through the command line, along with password aging, quota, exposure to /etc/sudoers, and how to modify it using visudo. User and password management is a regular task that a system administrator performs on servers, and it has a very important role in the overall security of the system.
Making an In-Game Console in Unity Part 2

Eliot Lash
28 Dec 2015
10 min read
In part 1, I started walking you through making a console using uGUI, Unity’s built-in GUI framework. I'm showing you how to implement a simple input parser. Let's continue where we left off here in Part 2. We’re going to program the behavior of the console. I split this into a ConsoleController, which handles parsing and executing commands, and a view component that handles the communication between the ConsoleController and uGUI. This makes the parser code cleaner, and easier to switch to a different GUI system in the future if needed. First, make a new script called ConsoleController. Completely delete its contents and replace them with the following class: /// <summary> /// Handles parsing and execution of console commands, as well as collecting log output. /// Copyright (c) 2014-2015 Eliot Lash /// </summary> using UnityEngine; using System; using System.Collections.Generic; using System.Text; public delegate void CommandHandler(string[] args); public class ConsoleController { #region Event declarations // Used to communicate with ConsoleView public delegate void LogChangedHandler(string[] log); public event LogChangedHandler logChanged; public delegate void VisibilityChangedHandler(bool visible); public event VisibilityChangedHandler visibilityChanged; #endregion /// <summary> /// Object to hold information about each command /// </summary> class CommandRegistration { public string command { get; private set; } public CommandHandler handler { get; private set; } public string help { get; private set; } public CommandRegistration(string command, CommandHandler handler, string help) { this.command = command; this.handler = handler; this.help = help; } } /// <summary> /// How many log lines should be retained? /// Note that strings submitted to appendLogLine with embedded newlines will be counted as a single line. /// </summary> const int scrollbackSize = 20; Queue<string> scrollback = new Queue<string>(scrollbackSize); List<string> commandHistory = new List<string>(); Dictionary<string, CommandRegistration> commands = new Dictionary<string, CommandRegistration>(); public string[] log { get; private set; } //Copy of scrollback as an array for easier use by ConsoleView const string repeatCmdName = "!!"; //Name of the repeat command, constant since it needs to skip these if they are in the command history public ConsoleController() { //When adding commands, you must add a call below to registerCommand() with its name, implementation method, and help text. registerCommand("babble", babble, "Example command that demonstrates how to parse arguments. 
babble [word] [# of times to repeat]"); registerCommand("echo", echo, "echoes arguments back as array (for testing argument parser)"); registerCommand("help", help, "Print this help."); registerCommand("hide", hide, "Hide the console."); registerCommand(repeatCmdName, repeatCommand, "Repeat last command."); registerCommand("reload", reload, "Reload game."); registerCommand("resetprefs", resetPrefs, "Reset & saves PlayerPrefs."); } void registerCommand(string command, CommandHandler handler, string help) { commands.Add(command, new CommandRegistration(command, handler, help)); } public void appendLogLine(string line) { Debug.Log(line); if (scrollback.Count >= ConsoleController.scrollbackSize) { scrollback.Dequeue(); } scrollback.Enqueue(line); log = scrollback.ToArray(); if (logChanged != null) { logChanged(log); } } public void runCommandString(string commandString) { appendLogLine("$ " + commandString); string[] commandSplit = parseArguments(commandString); string[] args = new string[0]; if (commandSplit.Length < 1) { appendLogLine(string.Format("Unable to process command '{0}'", commandString)); return; } else if (commandSplit.Length >= 2) { int numArgs = commandSplit.Length - 1; args = new string[numArgs]; Array.Copy(commandSplit, 1, args, 0, numArgs); } runCommand(commandSplit[0].ToLower(), args); commandHistory.Add(commandString); } public void runCommand(string command, string[] args) { CommandRegistration reg = null; if (!commands.TryGetValue(command, out reg)) { appendLogLine(string.Format("Unknown command '{0}', type 'help' for list.", command)); } else { if (reg.handler == null) { appendLogLine(string.Format("Unable to process command '{0}', handler was null.", command)); } else { reg.handler(args); } } } static string[] parseArguments(string commandString) { LinkedList<char> parmChars = new LinkedList<char>(commandString.ToCharArray()); bool inQuote = false; var node = parmChars.First; while (node != null) { var next = node.Next; if (node.Value == '"') { inQuote = !inQuote; parmChars.Remove(node); } if (!inQuote && node.Value == ' ') { node.Value = '\n'; } node = next; } char[] parmCharsArr = new char[parmChars.Count]; parmChars.CopyTo(parmCharsArr, 0); return (new string(parmCharsArr)).Split(new char[] {'\n'}, StringSplitOptions.RemoveEmptyEntries); } #region Command handlers //Implement new commands in this region of the file. /// <summary> /// A test command to demonstrate argument checking/parsing. /// Will repeat the given word a specified number of times.
/// </summary> void babble(string[] args) { if (args.Length < 2) { appendLogLine("Expected 2 arguments."); return; } string text = args[0]; if (string.IsNullOrEmpty(text)) { appendLogLine("Expected arg1 to be text."); } else { int repeat = 0; if (!Int32.TryParse(args[1], out repeat)) { appendLogLine("Expected an integer for arg2."); } else { for(int i = 0; i < repeat; ++i) { appendLogLine(string.Format("{0} {1}", text, i)); } } } } void echo(string[] args) { StringBuilder sb = new StringBuilder(); foreach (string arg in args) { sb.AppendFormat("{0},", arg); } sb.Remove(sb.Length - 1, 1); appendLogLine(sb.ToString()); } void help(string[] args) { foreach(CommandRegistration reg in commands.Values) { appendLogLine(string.Format("{0}: {1}", reg.command, reg.help)); } } void hide(string[] args) { if (visibilityChanged != null) { visibilityChanged(false); } } void repeatCommand(string[] args) { for (int cmdIdx = commandHistory.Count - 1; cmdIdx >= 0; --cmdIdx) { string cmd = commandHistory[cmdIdx]; if (String.Equals(repeatCmdName, cmd)) { continue; } runCommandString(cmd); break; } } void reload(string[] args) { Application.LoadLevel(Application.loadedLevel); } void resetPrefs(string[] args) { PlayerPrefs.DeleteAll(); PlayerPrefs.Save(); } #endregion } I’ve tried to comment where appropriate, but I’ll give you a basic rundown of this class. It maintains a registry of methods that are mapped to string command names, as well as associated help text. This allows the “help” command to print out all the available commands along with extra info on each one. It keeps track of the output scrollback as well as the history of user-entered commands (this is to aid implementation of bash-style command history paging, which is left as an exercise to the reader. Although I have implemented a simple command, ‘!!’ which repeats the most recent command.) When the view receives command input, it passes it to runCommandString() which calls parseArguments() to perform some rudimentary string parsing using a space as a delimiter. It then calls runCommand() which tries to look up the corresponding method in the command registration dictionary, and if it finds it, calling it with the remaining arguments. Commands can call appendLogLine() to write to the in-game console log, and of course execute arbitrary code. Moving on, we will implement the view. Attach a new script to the ConsoleView object (the parent of ConsoleViewContainer) and call it ConsoleView. Replace its contents with the following: /// <summary> /// Marshals events and data between ConsoleController and uGUI. /// Copyright (c) 2014-2015 Eliot Lash /// </summary> using UnityEngine; using UnityEngine.UI; using System.Text; using System.Collections; public class ConsoleView : MonoBehaviour { ConsoleController console = new ConsoleController(); bool didShow = false; public GameObject viewContainer; //Container for console view, should be a child of this GameObject public Text logTextArea; public InputField inputField; void Start() { if (console != null) { console.visibilityChanged += onVisibilityChanged; console.logChanged += onLogChanged; } updateLogStr(console.log); } ~ConsoleView() { console.visibilityChanged -= onVisibilityChanged; console.logChanged -= onLogChanged; } void Update() { //Toggle visibility when tilde key pressed if (Input.GetKeyUp("`")) { toggleVisibility(); } //Toggle visibility when 5 fingers touch. 
if (Input.touches.Length == 5) { if (!didShow) { toggleVisibility(); didShow = true; } } else { didShow = false; } } void toggleVisibility() { setVisibility(!viewContainer.activeSelf); } void setVisibility(bool visible) { viewContainer.SetActive(visible); } void onVisibilityChanged(bool visible) { setVisibility(visible); } void onLogChanged(string[] newLog) { updateLogStr(newLog); } void updateLogStr(string[] newLog) { if (newLog == null) { logTextArea.text = ""; } else { logTextArea.text = string.Join("\n", newLog); } } /// <summary> /// Event that should be called by anything wanting to submit the current input to the console. /// </summary> public void runCommand() { console.runCommandString(inputField.text); inputField.text = ""; } } The ConsoleView script manages the GUI and forwards events to the ConsoleController. It also watches the ConsoleController and updates the GUI when necessary. Back in the inspector, select ConsoleView. We're going to hook up the Console View component properties. Drag ConsoleViewContainer into the "View Container" property. Do the same for LogText into "Log Text Area" and InputField into "Input Field". Now we've just got a bit more hooking up to do. Select InputField, and in the Input Field component, find the "End Edit" event list. Click the plus button and drag ConsoleView into the new row. In the function list, select ConsoleView > runCommand (). Finally, select EnterBtn and find the "On Click" event list in the Button component. Click the plus button and drag ConsoleView into the new row. In the function list, select ConsoleView > runCommand (). Now we're ready to test! Save your scene and run the game. The console should be visible. Type "help" into the input field and press the enter/return key. You should see the help text print out. Try out another test command, "echo foo bar baz". It will show you how it splits the command arguments into a string array, printed out as a comma-separated list. Also make sure the fallback "Ent" button is working to submit the input. Lastly, check that the console toggle key works: press the backtick/tilde key (located right above the left Tab key). The console should disappear. Press it again and it should reappear. On a mobile device, tapping five fingers at once will toggle the console instead. If you want to use a different means of toggling the console, you can edit this in ConsoleView.Update(). If anything is not working as expected, please go back over the instructions again and try to see if you've missed anything. We also don't want the console to show when the game first starts. Stop the game and find ConsoleViewContainer in the hierarchy, then disable it by unchecking the box next to its name in the inspector. Now, save and run the game again. The console should be hidden until you press the backtick key. And that's it! You now have an in-game, interactive console. It's an extremely versatile debugging tool that's easy to extend. Use it to implement cheat codes, enable or disable experimental features, obtain diagnostic output, or whatever else you can think of! When you want to create a new console command, just write a new method in the ConsoleController "Command handlers" region and add a registerCommand() line for it in the constructor. Use the commands I've included as examples; a minimal sketch of adding one more command follows below.
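For instance, here is a minimal sketch of a hypothetical "clear" command that empties the scrollback. The command name and handler are illustrative additions, not part of the code above, but they only use members that already exist in ConsoleController:

// In the ConsoleController constructor, alongside the other registrations:
registerCommand("clear", clearLog, "Clear the console scrollback.");

// In the "Command handlers" region:
void clearLog(string[] args) {
    scrollback.Clear();          // drop all retained log lines
    log = scrollback.ToArray();  // refresh the public copy of the log
    if (logChanged != null) {
        logChanged(log);         // tell the view to redraw the (now empty) log
    }
}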
If you want to have other scripts be able to log to the console, you can make the ConsoleController into a service, as I described in my article "One-liner Singletons in Unity". Make the ConsoleController set itself as a service in its constructor, then have the other script get the ConsoleController instance and call appendLogLine() with its message. I hope having an in-game console will be as useful for you as it has been for me. Lastly, don't forget to disable or delete the ConsoleView before shipping production builds, unless you want your players to have access to all of your debug cheats! About the author Eliot Lash is an independent game developer and consultant with several mobile titles in the works. In the past, he has worked on Tiny Death Star and the Tap Tap Revenge series. You can find him at eliotlash.com.
One-liner Singletons in Unity

Eliot Lash
16 Dec 2015
6 min read
This article is intended for intermediate-level C# programmers or above, and assumes some familiarity with object-oriented programming terminology. The focus is on Unity, but this approach works just as well in any C# environment. I've seen a number of common techniques for implementing singletons in Unity. A little background for those unfamiliar with the term: a singleton is simply an object of which only one instance ever exists, and this instance is globally accessible. This is great for shared data or services that need to be accessible from any script in your game, such as the player's stats. There are many ways to do this. One way is to just have a GameObject in your scene with a certain MonoBehaviour, and have other scripts look it up by tag (or, even slower, by name). There are a few issues with this approach. First of all, if you accidentally have two GameObjects with the same name or tag, you'll be arbitrarily interacting with one instead of the others; Unity will not notify you about this issue, and it could lead to bugs. Secondly, looking up an object by name has a performance penalty, and it's more busy work to tag every object if you want to use the tag lookup method. Another common way is to just copy-paste the code to make a class into a singleton. If we are following the principle of avoiding code duplication, an easy way to refactor this approach is by rolling the singleton code into a subclass of MonoBehaviour, and having our singletons inherit from that. A problem with this approach is that now we are adding rigidity into our class hierarchy, so we won't be able to have a singleton that also inherits from a non-singleton subclass of MonoBehaviour. Both of these approaches also require your singleton to be a MonoBehaviour. This is often convenient, but limiting. For instance, if you are using the Model-View-Controller pattern, you may want your models and controllers to not be MonoBehaviours at all, but rather "Plain Old" C# objects. The approach that I am presenting in this article gets around all of the above limitations, while providing some additional advantages. Instead of the classic singleton pattern of each class having a static instance variable, we will create a "service manager" singleton that will hold all of the instances we want global access to. The service manager provides the following advantages:

Any class can be made into a service with a single line of code.
Strong typing makes it unnecessary to cast services when referencing them.
Bugs which cause a service instance to be set more than once result in an exception, making them easier to track down.
Services don't have to inherit from a specific class.
By containing all service references in one place, it's easier to clear them out in a single pass.

Without further ado, here is my implementation of the service manager class: /// <summary> /// Simple service manager. Allows global access to a single instance of any class. /// Copyright (c) 2014-2015 Eliot Lash /// </summary> using System; using System.Collections.Generic; public class Services { //Statics private static Services _instance; //Instance private Dictionary<Type, object> services = new Dictionary<Type, object>(); public Services() { if (_instance != null) { UnityEngine.Debug.LogError("Cannot have two instances of singleton."); return; } _instance = this; } /// <summary> /// Getter for singleton instance.
/// </summary> public static Services instance { get { if (_instance == null) { new Services(); } return _instance; } } /// <summary> /// Set the specified service instance. Usually called like Set<ExampleService>(this). /// </summary> /// <param name="service">Service instance object.</param> /// <typeparam name="T">Type of the instance object.</typeparam> public void Set<T>(T service) where T : class { services.Add(typeof(T), service); } /// <summary> /// Gets the specified service instance. Called like Get<ExampleService>(). /// </summary> /// <typeparam name="T">Type of the service.</typeparam> /// <returns>Service instance, or null if not initialized</returns> public T Get<T>() where T : class { T ret = null; try { ret = services[typeof(T)] as T; } catch (KeyNotFoundException) { } return ret; } /// <summary> /// Clears internal dictionary of service instances. /// This will not clear out any global state that they contain, /// unless there are no other references to the object. /// </summary> public void Clear() { services.Clear(); } } As this is a classic singleton, it will be lazily instantiated the first time it’s used, so all you need to do is save this script into your project to get started. Now, let’s make a small script into a service. Create an empty GameObject and call it “TestService”. Also create a script called “TestService” and attach it. Now, add the following method: void Awake() { Services.instance.Set<TestService>(this); } Our class is now a service! Caution: To get around issues of initialization dependency when using MonoBehaviours, use a two-phase init process. All service instances should be set in their Awake() method, and not used by other classes until their respective Start() methods or later. For more information see Execution Order of Event Functions. We’ll also add a stub method to TestService to demonstrate how it can be used: public void Foo() { Debug.Log("TestService: Foo!"); } Now, create another empty GameObject and call it “TestClient”, and attach a new script also called “TestClient”. We’ll change its Start() method to look like so: void Start () { TestService ts = Services.instance.Get<TestService>(); ts.Foo(); //If you're only doing one operation with the service, it can be written even more compactly: //Services.instance.Get<TestService>().Foo(); } Now when you run the game, you should see the test message get written to the Unity console. And that’s all there is to it! Also, a note on global state. Earlier, I mentioned that clearing out global state is easier with the service manager. The service manager code sample I provided has a method (Services.Clear()) which will clear out its internal dictionary of services, but this will not completely reset their state. Unfortunately, this is a complex topic beyond the scope of this article, but I can offer some suggestions. If you are using MonoBehaviors as your services, calling Services.Clear() and then reloading the scene might be enough to do the trick. Otherwise, you’ll need to find a way to notify each service to clean itself up before clearing the service manager, such as having them all implement an interface with such a method in it. I hope you’ll give this a try, and enjoy the ease of creating and accessing more error-resistant and strictly typed global services in one line of code. For more Unity game development tutorials and extra content, visit our Unity page here.  About the Author Eliot Lash is an independent game developer and consultant with several mobile titles in the works. 
In the past, he has worked on Tiny Death Star and the Tap Tap Revenge series. You can find him at eliotlash.com.
Programming littleBits circuits with JavaScript Part 2

Anna Gerber
14 Dec 2015
5 min read
In this two-part series, we're programming littleBits circuits using the Johnny-Five JavaScript Robotics Framework. Be sure to read over Part 1 before continuing here. Let's create a circuit to play with, using all of the modules from the littleBits Arduino Coding Kit. Attach a button to the Arduino connector labelled d0. Attach a dimmer to the connector marked a0 and a second dimmer to a1. Turn the dimmers all the way to the right (max) to start with. Attach a power module to the single connector on the left-hand side of the fork module, and the three output connectors of the fork module to all of the input modules. The bargraph should be connected to d5, and the servo to d9, and both set to PWM output mode using the switches on board the Arduino. The servo module has two modes: turn and swing. Swing mode makes the servo sweep between maximum and minimum. Set it to swing mode using the onboard switch. Reading input values We'll create an instance of the Johnny-Five Button class to respond to button press events. Our button is connected to the connector labelled d0 (i.e. digital "pin" 0) on our Arduino, so we'll need to specify the pin as an argument when we create the button. var five = require("johnny-five"); var board = new five.Board(); board.on("ready", function() { var button = new five.Button(0); }); Our dimmers are connected to analog pins (A0 and A1), so we'll specify these as strings when we create Sensor objects to read their values. We can also provide options for reading the values; for example, we'll set the frequency to 250 milliseconds, so we'll be receiving 4 readings per second for both dimmers. var dimmer1 = new five.Sensor({ pin: "A0", freq: 250 }); var dimmer2 = new five.Sensor({ pin: "A1", freq: 250 }); We can attach a function that will be run any time the value changes (on "change") or anytime we get a reading (on "data"): dimmer1.on("change", function() { // raw value (between 0 and 1023) console.log("dimmer 1 is " + this.raw); }); Run the code and try turning dimmer 1. You should see the value printed to the console whenever the dimmer value changes. Triggering behavior Now we can use code to hook our input components up to our output components. To use, for example, the dimmer to control the brightness of the bargraph, change the code in the event handler: var led = new five.Led(5); dimmer1.on("change", function() { // set bargraph brightness to one quarter // of the raw value from dimmer led.brightness(Math.floor(this.raw / 4)); }); You'll see the bargraph brightness fade as you turn the dimmer. We can use the JavaScript Math library and operators to manipulate the brightness value before we send it to the bargraph. Writing code gives us more control over the mapping between input values and output behaviors than if we'd snapped our littleBits modules directly together without going via the Arduino. We set our d5 output to PWM mode, so all of the LEDs should fade in and out at the same time. If we set the output to analog mode instead, we'd see the behavior change to light up more or fewer LEDs depending on the value of the brightness. Let's use the button to trigger the servo stop and start functions. Add a button press handler to your code, and a variable to keep track of whether the servo is running or not. We'll toggle this variable between true and false using JavaScript's boolean not operator (!). We can determine whether to stop or start the servo each time the button is pressed via a conditional statement based on the value of this variable.
var servo = new five.Motor(9); servo.start(); var button = new five.Button(0); var toggle = false; var speed = 255; button.on("press", function(value){ toggle = !toggle; if (toggle) { servo.start(speed); } else { servo.stop(); } }); The other dimmer can be used to change the servo speed: dimmer2.on("change", function(){ speed = Math.floor(this.raw / 4); if (toggle) { servo.start(speed); } }); There are many input and output modules available within the littleBits system for you to experiment with. You can use the Sensor class with input modules, and check out the Johnny-Five API docs to see examples of types of outputs supported by the API. You can always fall back to using the Pin class to program any littleBits module. Using the REPL Johnny-Five includes a Read-Eval-Print-Loop (REPL) so you can interactively write code to control components instantly - no waiting for code to compile and upload! Any of the JavaScript objects from your program that you want to access from the REPL need to be "injected". The following code, for example, injects our servo and led objects. this.repl.inject({ led: led, servo: servo }); After running the program using Node.js, you'll see a >> prompt in your terminal. Try some of the following functions (hit Enter after each to see it take effect): servo.stop(): stop the servo servo.start(50): start servo moving at slow speed servo.start(255): start servo moving at max speed led.on(): turn LED on led.off(): turn LED off led.pulse(): slowly fade LED in and out led.stop(): stop pulsing LED led.brightness(100): set brightness of LED - the parameter should be between 0 and 255 LittleBits are fantastic for prototyping, and pairing the littleBits Arduino with JavaScript makes prototyping interactive electronic projects even faster and easier. About the author Anna Gerber is a full-stack developer with 15 years of experience in the university sector. Specializing in Digital Humanities, she was a Technical Project Manager at the University of Queensland’s eResearch centre, and she has worked at Brisbane’s Distributed System Technology Centre as a Research Scientist. Anna is a JavaScript robotics enthusiast who enjoys tinkering with soft circuits and 3D printers.
Making an In-Game Console in Unity Part 1

Eliot Lash
14 Dec 2015
6 min read
This article is intended for intermediate-level Unity developers or above. One of my favorite tools that I learned about while working in the game industry is the in-game console. It’s essentially a bare-bones command-line environment where you can issue text commands to your game and get text output. Unlike Unity’s built-in console (which is really just a log,) it can take input and display output on whatever device the game is running on. I’m going to walk you through making a console using uGUI, Unity’s built-in GUI framework available in Unity 4.6 and later. I’m assuming you have some familiarity with it already, so if not I’d recommend reading the UI Overview before continuing. I’ll also show how to implement a simple input parser. I’ve included a unitypackage with an example scene showing the console all set up, as well as a prefab console that you can drop into an existing scene if you’d like. However, I will walk you through setting everything up from scratch below. If you don’t have a UI Canvas in your scene yet, make one by selecting GameObject > UI > Canvas from the menu bar. Now, let’s get started by making a parent object for our console view. Right click on the Canvas in the hierarchy and select “Create Empty”. There will now be an empty GameObject in the Canvas hierarchy, that we can use as the parent for our console. Rename this object to “ConsoleView”. I personally like to organize my GUI hierarchy a bit to make it easier to do flexible layouts and turn elements of the GUI off and on, so I also made some additional parent objects for “HUD” (GUI elements that are part of a layer that draws over the game, usually while the game is running) and a child of that for “DevHUD”, those HUD elements that are only needed during development. This makes it easier to disable or delete the DevHUD when making a production build of my game. However, this is optional. Enter 2D selection mode and scale the ConsoleView so it fills the width of the Canvas and most of its height. Then set its anchor mode to “stretch top”. Now right click on ConsoleView in the hierarchy and select “Create Empty”. Rename this new child “ConsoleViewContainer”. Drag to the same size as its parent, and set its anchor mode to “stretch stretch”. We need this additional container as the console needs the ability to show and hide during gameplay, so we will be enabling and disabling ConsoleViewContainer. But we still need the ConsoleView object to stay enabled so that it can listen for the special gesture/keypress which the user will input to bring up the console. Next, we’ll create our text input field. Right click on ConsoleViewContainer in the hierarchy and select UI > Input Field. Align the InputField with the upper left corner of ConsoleViewContainer and drag it out to about 80% of the screen width. Then set the anchor mode to “stretch top”. I prefer a dark console, so I changed the Image Color to dark grey. Open up the children of the InputField and you can edit the placeholder text, I set mine to “Console input”. You may also change the Placeholder and Text color to white if you want to use a dark background. On some platforms at this time of writing, Unity won’t handle the native enter/submit button correctly, so we’re going to add a fallback enter button next. (If you’re sure this won’t be an issue on your platforms, you can skip this paragraph and resize the console input to fill the width of the container.) Right click on ConsoleViewContainer in the hierarchy and select UI > Button. 
Align the button to the right of the InputField and set the anchor to “stretch top”. Rename the Button to EnterBtn. Select its text child in the hierarchy and edit the text to say “Ent”. Next, we’re going to make the view for the console log output. Right click on ConsoleViewContainer in the hierarchy and select UI > Image. Drag the image to fill the area below the InputField and set the anchor to “stretch stretch”. Rename Image to LogView. If you want a dark console (you know you do!) change the image color to black. Now at the bottom of the inspector, click “Add Component” and select UI > Mask. Again, click “Add Component” and select UI > Scroll Rect. Right click on LogView in the hierarchy and select UI > Text. Scale it to fill the LogView and set the anchor to “stretch bottom”. Rename it to LogText. Set the text to bottom align. If you’re doing a dark console, set the text color to white. To make sure we’ve got everything set up properly, add a few paragraphs of placeholder text (my favorite source for this is the hipster ipsum generator.) Now drag the top way up past the top of the canvas to give room for the log scrollback. If it’s too short, the log rotation code we’ll write later might not work properly. Now, we’ll make the scrollbar. Right click on ConsoleViewContainer in the hierarchy and select UI > Scrollbar. In the Scrollbar component, set the Direction to “Bottom To Top”, and set the Value to 0. Size it to fit between the LogView and the edge of the container and set the anchor to “stretch stretch”. Finally, we’ll hook up our complete scroll view. Select LogView and in the Scroll Rect component, drag in LogText to the “Content” property, and Scrollbar into the “Vertical Scrollbar” property. Then, uncheck the “Horizontal” box. Go ahead and run the game to make sure we’ve set everything up correctly. You should be able to drag the scroll bar and watch the text scroll down. If not, go back through the previous steps and try to figure out if you missed anything. This concludes part one. Stay tuned for part two of this series where you will learn how to program the behavior of the console. Find more Unity game development tutorials and content on our Unity page.  About the Author Eliot Lash is an independent game developer and consultant with several mobile titles in the works. In the past, he has worked on Tiny Death Star and the Tap Tap Revenge series. You can find him at eliotlash.com.
Platform detection in your NW.js app

Adam Lynch
11 Dec 2015
6 min read
There are various reasons why you might want to detect which platform or operating system your NW.js app is currently being run on. Your keyboard shortcuts or UI may differ per platform, you might want to store files in platform-specific directories on disk, and so on. Thanks to Node's (or io.js') os module, it isn't too difficult.

Which operating system?

On Mac, Linux and Windows, the following script outputs darwin, linux and win32 respectively (Windows reports win32 even on 64-bit systems):

var os = require('os');
console.log(os.platform());

The other possible return values of os.platform() are freebsd and sunos.

Which Linux distribution?

Figuring this out is a bit more problematic. The uname -v command returns some information like the following if run on Ubuntu: #42~precise1-Ubuntu SMP Wed Aug 14 15:31:16 UTC 2013. You could spawn this command via io.js' child_process module or any of the countless similar modules on npm. This doesn't give you much though; it's probably safest to check for and read distribution-specific release information files (with io.js' fs module), which include:

Debian: /etc/debian_release and /etc/debian_version (be careful, as these also exist on Ubuntu)
Fedora: /etc/fedora-release
Gentoo: /etc/gentoo-release
Mandrake: /etc/mandrake-release
Novell SUSE: /etc/SUSE-release
Red Hat: /etc/redhat-release and /etc/redhat_version
Slackware: /etc/slackware-release and /etc/slackware-version
Solaris / Sparc: /etc/release
Sun JDS: /etc/sun-release
UnitedLinux: /etc/UnitedLinux-release
Ubuntu: /etc/lsb-release and /etc/os-release
Yellow dog: /etc/yellowdog-release

Keep in mind that the format of each of these files can differ. An example /etc/lsb-release file:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.3 LTS"

An example /etc/os-release file:

NAME="Ubuntu"
VERSION="12.04.3 LTS, Precise Pangolin"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu precise (12.04.3 LTS)"
VERSION_ID="12.04"

32-bit or 64-bit architecture?

The safest way to check this is to use os.arch() in combination with system environment variables; the following script will output 32 or 64 depending on the architecture:

var os = require('os');
var is64Bit = os.arch() === 'x64' || process.env.hasOwnProperty('PROCESSOR_ARCHITEW6432');
console.log(is64Bit ? 64 : 32);

Version detection

This is a bit trickier. os.release() returns the platform version, but it is not what you'd expect it to be: it returns the underlying kernel version rather than the marketing version. You might expect the return value to be 10.10.0 when called on Mac OS X Yosemite, but it will in fact return 14.0.0. To see the mappings between Darwin and Mac release versions, see the Darwin (operating system) Wikipedia entry. Microsoft provides a list of Windows versions, but you may need to do some testing yourself to be safe; as you can see, Windows 8 and Windows Server 2012 are both listed as 6.2. In my experience, it's safe to check against 6.2.9200, but don't take my word for it. os.release() will return whatever uname -r would return on Linux (e.g. 3.8.0-29-generic on Ubuntu 12.04.3 LTS), so it's safer to read the distribution-specific release information file(s) we saw earlier.
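As a rough illustration, here's a minimal sketch of reading one of those files with the fs module. It assumes /etc/os-release exists and is readable; real-world code should fall back to the other files listed above. The getLinuxDistro name is just illustrative:

var fs = require('fs');

// Parse the KEY=value lines of /etc/os-release into an object.
// This is a sketch; a robust version should also try the other
// release files listed above and handle missing files gracefully.
function getLinuxDistro() {
    var contents = fs.readFileSync('/etc/os-release', 'utf8');
    var info = {};
    contents.split('\n').forEach(function(line) {
        var index = line.indexOf('=');
        if (index === -1) {
            return; // skip blank or malformed lines
        }
        var key = line.slice(0, index);
        // Strip surrounding quotes from values like NAME="Ubuntu"
        var value = line.slice(index + 1).replace(/^"|"$/g, '');
        info[key] = value;
    });
    return info;
}

console.log(getLinuxDistro().ID); // e.g. "ubuntu"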
The finished article

The final version of our platform.js module looks like this:

var os = require('os');

var platform = {
    isLinux: false,
    isMac: false,
    isWindows: false,
    isWindows8: false,
    version: os.release()
};

/**
 * Checks if the current platform version is greater than or equal to the desired minimum version given
 *
 * @param minimumVersion {string} E.g. 10.0.0.
 * See [the Darwin operating system Wikipedia entry](http://en.wikipedia.org/wiki/Darwin_%28operating_system%29#Release_history) for Mac - Darwin versions.
 * Also, Windows 8 >= 6.2.9200
 *
 * @returns {boolean}
 */
var isOfMinimumVersion = function(minimumVersion){
    var actualVersionPieces = platform.version.split('.'),
        pieces = minimumVersion.split('.'),
        numberOfPieces = pieces.length;

    for(var i = 0; i < numberOfPieces; i++){
        var piece = parseInt(pieces[i], 10),
            actualPiece = parseInt(actualVersionPieces[i], 10);

        if (isNaN(actualPiece)) {
            break; // e.g. 13.1.0 passed and actual is 13.1
        } else if (actualPiece > piece) {
            break; // doesn't matter what the next pieces are, this one is larger
        } else if (actualPiece === piece) {
            continue; // to check next version piece
        } else {
            return false;
        }
    }

    return true; // all was ok
};

var name = os.platform();
if(name === 'darwin'){
    platform.name = 'mac';
    platform.isMac = true;
} else if(name === 'linux'){
    platform.name = 'linux';
    platform.isLinux = true;
} else {
    platform.name = 'windows';
    platform.isWindows = true;
    platform.isWindows8 = isOfMinimumVersion('6.2.9200');
}

platform.is64Bit = os.arch() === 'x64' || process.env.hasOwnProperty('PROCESSOR_ARCHITEW6432');

module.exports = platform;

Take note of our isOfMinimumVersion method and isWindows8 property. From anywhere in your app, you can now require this module and use it for platform-specific code where needs be. For example:

var platform = require('./platform');

if(platform.isMac){
    // do something
} else if(platform.isWindows8 && platform.is64Bit) {
    // do something else
}

Platform-dependent styles

You may have spotted that our platform.js module exports a name property. This is really useful for applying platform-specific styles. To achieve differing styles per platform, we'll use this name property to add a platform- class to our body element:

var platform = require('./platform');
document.body.classList.add('platform-' + platform.name);

Note that I've used Element.classList here, which isn't supported by a lot of browsers people currently use. The great thing about NW.js is we can ignore that; we know that 100% of our app's users are using Chromium 43. So then we can change the styling of certain elements based on the current platform. Let's say you have some nice custom button styles and you'd like them to be a bit rounder on Mac OS X. All we have to do is use this platform- class in our CSS:

.button {
    /* your custom styles */
    border-radius: 3px;
}

.platform-mac .button {
    border-radius: 5px;
}

So any elements with the button class look just like the custom buttons you designed (or grabbed from CodePen), but if the platform-mac class exists on an ancestor, i.e. the body element, then the buttons' corners are a little more rounded. You could easily go a little further and add certain classes depending on the given platform version.
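For example, here's a minimal sketch building on the module above; it reuses the classList call from earlier together with the module's isWindows8 flag:

var platform = require('./platform');

document.body.classList.add('platform-' + platform.name);

// Add a more specific class when running on Windows 8 or above
if (platform.isWindows8) {
    document.body.classList.add('platform-windows-8');
}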
With that platform-windows-8 class on the body, you could then make the buttons square-cornered on Windows 8:

.button {
    /* your custom styles */
    border-radius: 3px;
}

.platform-mac .button {
    border-radius: 5px;
}

.platform-windows-8 .button {
    border-radius: 0;
}

That's it! Feel free to take this, use it, abuse it, build on top of it, or whatever you'd like. Go wild.

About the Author

Adam Lynch is a TeamworkChat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.
Installing your NW.js app on Windows

Adam Lynch
09 Dec 2015
4 min read
NW.js is great for creating desktop applications using web app technologies. If you're not familiar with NW.js, I'd advise you to read an introductory article like Creating Your First Desktop App With HTML, JS and Node-WebKit to get a good base first. This is a slightly more advanced article intended for anyone interested in distributing their NW.js app to Windows users. I've been through it myself with Teamwork Chat and there are a few things to consider. What exactly should you provide the end user with? How? Where? And why?

What to ship

If you want to keep it simple, you can package everything up into a ZIP archive for your users to download. The ZIP should include your app, along with all of the files generated during the build: the dynamic-link libraries (.dlls), nw.pak and the other PAK files in the locales directory. All of these extra files are required to be certain your app will function correctly on Windows, even if the user already has some of them from a previous installation of Google Chrome, for example. When I say you need to include "your app" in this archive, I of course mean your myApp.exe if you've used the nw-builder module to build your app (which I recommend). If you want to use the .nw method of running your app, you will have to distribute your app in separate pieces: nw.exe, a .nw archive containing your app code, and myApp.lnk, a shortcut which executes nw.exe with your .nw archive. This is how the popular Popcorn Time app works. You could rename nw.exe to something nicer, but that's not advised, to ensure native module compatibility.

Installers

Giving the user a simple ZIP isn't optimal though. It isn't the most user-friendly option and you wouldn't have much control over what the user does with your app: where they put it, how many copies of your app they have, etc. This is where installers come in, e.g. Inno Setup, NSIS or InstallShield. The applications provided to build these installers can be configured to grab all of your files and store them wherever you choose on the user's machine, pin your app to their start menu, and a whole host of other options.

Where to store your app

The first place that springs to mind is Program Files, right? Well, if your app has to add, overwrite or remove files from the directory in which it's located, then you'll run into problems with permissions. To get around this, I suggest storing your app in C:\Users\<username>\AppData\Roaming\MyApp like a handful of big-name apps do. If you really need to store your app in Program Files, then you could theoretically use something like the node-windows node module to elevate the privileges of the current user to a local administrator and execute the problematic filesystem interactions using Windows services. This means, though, that Windows' UAC (User Account Control) prompt may show for the user depending on their settings. If you were to use node-windows, this also means that you'd have to pass Windows commands as strings instead of using the fs module. Another possible location is C:\Users\Default\AppData\Roaming\MyApp. Anything stored here will be copied to C:\Users\<new-username>\AppData\Roaming\MyApp for each new user profile created on the machine. This may or may not suit your application, or you might even want to let the user decide (by having this as an option in the installer).
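For reference, here's a minimal sketch of how your app itself might resolve that per-user location at runtime. The "MyApp" folder name is just a placeholder for illustration; APPDATA and USERPROFILE are standard Windows environment variables:

var path = require('path');

// %APPDATA% points at C:\Users\<username>\AppData\Roaming on Windows.
// USERPROFILE is used as a fallback in case APPDATA isn't set.
var appData = process.env.APPDATA ||
    path.join(process.env.USERPROFILE || '', 'AppData', 'Roaming');

// "MyApp" is a hypothetical folder name; use your own app's name
var appDir = path.join(appData, 'MyApp');

console.log(appDir); // e.g. C:\Users\jane\AppData\Roaming\MyApp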
What to sign

If you're digitally signing your app with a certificate, make sure you sign each and every executable: not only myApp.exe / nw.exe, but also any .exe's your app spawns, as well as any executables your node_modules dependencies spawn (which aren't already signed by their maintainers). If you were to use the node-webkit-updater module (https://github.com/edjafarov/node-webkit-updater/), for example, it contains an unsigned unzip.exe. Make sure to sign all of these before building your installer, as well as signing the installer itself.

That's all folks! I've had to figure a lot of this stuff out myself by trial and error, so I hope it saves you some time. If I've missed anything, feel free to let me know in a comment below.

About the Author

Adam Lynch is a TeamworkChat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.
How to Stream Live Video With Raspberry Pi

Jakub Mandula
07 Dec 2015
5 min read
Say you want to build a remote controlled robot or a surveillance camera using your Raspberry Pi. What is the best method of transmitting the live footage to your screen? If only there was a program that could do this in a simple way while not frying your Pi. Fortunately, there is a program called mjpg-streamer to save the day. At its core, it grabs images from one input device and forwards them to one or more output devices. We are going to provide it with the video from our web camera and feed it into a self-hosted HTTP server that lets us display the images in the browser.

Dependencies

Let's get started! mjpg-streamer does not come as a standard package and must be compiled manually. But do not let that be the reason for giving up on it. In order to compile mjpg-streamer, we have to install several dependencies.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential libjpeg8-dev imagemagick libv4l-dev

Recently the videodev.h file has been replaced by a newer version, videodev2.h. In order to fix the path references, just create a quick symbolic link.

sudo ln -s /usr/include/linux/videodev2.h /usr/include/linux/videodev.h

Downloading

Now that we have all the dependencies, we can download the mjpg-streamer repository. I am using a GitHub fork by jacksonliam.

git clone https://github.com/jacksonliam/mjpg-streamer.git
cd mjpg-streamer/mjpg-streamer-experimental

Compilation

There are a number of plugins that come with mjpg-streamer. We are going to compile only these:

input_file.so - provides a file as input
input_uvc.so - provides input from USB web cameras
input_raspicam.so - provides input from the raspicam module
output_http.so - our HTTP streaming output

Feel free to look through the rest of the plugins folder and explore the other inputs/outputs. Just add their names to the command below.

make mjpg_streamer input_file.so input_uvc.so input_raspicam.so output_http.so

Moving mjpg-streamer

I recommend moving mjpg-streamer to a more permanent location in your file system. I personally use the /usr/local directory, but feel free to use any other path as long as you adjust the following steps in the setup process.

sudo cp mjpg_streamer /usr/local/bin
sudo cp input_file.so input_uvc.so input_raspicam.so output_http.so /usr/local/lib/
sudo cp -R www /usr/local/www

Finally, reference the plugins in your bashrc file. Simply open it with your favorite text editor.

vim ~/.bashrc

And append the following line to the file:

export LD_LIBRARY_PATH=/usr/local/lib/

Now source your bashrc and you are good to go.

source ~/.bashrc

Running mjpg-streamer

Running mjpg-streamer is very simple. If you have followed all the steps up to now, all you have to do is one command.

mjpg_streamer -i input_uvc.so -o "output_http.so -w /usr/local/www"

The flags mean the following:

* -i - the input to mjpg-streamer (our USB camera)
* -o - the output from mjpg-streamer (our HTTP server)
* -w - tells the HTTP server where to find the HTML and CSS, which we moved to /usr/local/www

There is a vast number of other flags that you can explore. Here are a few of them:

* -f - framerate (frames per second)
* -c - protect the HTTP server with a username:password
* -b - run in the background
* -p - use another port

Testing the stream

Now that you have your mjpg-streamer running, go to http://ip-of-your-raspberry-pi:8080 and watch the live stream. To just grab the stream, paste the following image tag into your HTML:

<img src="http://ip-of-your-raspberry-pi:8080/?action=stream">

This should work in most modern browsers.
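If a browser won't render the continuous multipart stream, one fallback is to poll still frames instead. Here's a minimal sketch assuming the default output_http plugin, which also serves single frames via ?action=snapshot; the "stream" element ID is just an illustrative name:

// Fallback: poll individual JPEG snapshots instead of the MJPEG stream
var img = document.getElementById('stream'); // hypothetical <img> element
setInterval(function() {
    // Append a timestamp to bypass the browser cache on every frame
    img.src = 'http://ip-of-your-raspberry-pi:8080/?action=snapshot&t=' + Date.now();
}, 100); // roughly 10 frames per second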
For instance, I found there to be a problem with Google Chrome on iOS devices, which is strange because it does work in Safari, which is basically identical to Chrome on that platform.

Securing the stream

You could leave it at that. However, as of now your stream is insecure and anyone with the IP address of your Raspberry Pi can access it. We have talked about the -c flag, which can be passed to the output_http.so plugin. However, this does not prevent people eavesdropping on your connection. We need to use HTTPS. The easiest way to secure your MJPEG stream is using a utility called stunnel. stunnel is a very lightweight HTTPS proxy. You give it the keys, the certificates and two ports; stunnel then forwards the traffic from one port to the other while silently encrypting it. The installation is very simple.

sudo apt-get install stunnel4

Next you have to generate an RSA key and certificate. This is very easy with OpenSSL.

openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 720 -nodes

Just fill the prompts with your information. This creates a 2048-bit RSA key pair and a self-signed certificate valid for 720 days. Now create the following configuration file, stunnel.conf.

; Paths to the key and certificate you generated in the previous step
key=./key.pem
cert=./cert.pem
debug=7

[https]
client=no
accept=1234   ; secure port
connect=8080  ; mjpg-streamer port
sslVersion=all

Now the last thing to do is start stunnel.

sudo stunnel4 stunnel.conf

Go to https://ip-of-your-raspberry-pi:1234. Confirm that you trust this certificate (it is self-signed, so most browsers will complain about the security).

Conclusion

Now you can enjoy a live and secure stream directly from your Raspberry Pi. You can integrate it into your home security system or a robot. You can also grab the stream and use OpenCV to implement some cool computer vision abilities. I used this on my PiNet project to build a robot that can be controlled over the internet using a webcam. I am curious what you can come up with!

About the Author

Jakub Mandula is a student interested in anything to do with technology, computers, mathematics or science.
Making a Space Invaders Game

Mike Cluck
04 Dec 2015
7 min read
In just 6 quick steps, we're going to make our own Space Invaders game using Psykick2D. All of the code for this can be found here.

What is Psykick2D?

Psykick2D is a 2D game engine built with Pixi.js and designed with modularity in mind. Game objects are entities which are made up of components, systems contain and act on entities, and layers run systems.

Getting Started

After you download the latest Psykick2D build, create an HTML page referencing Psykick2D. Your page should look something like this. Psykick2D will be looking for a container with a psykick id to place the game in. Now create a main.js and initialize the game world.

var World = Psykick2D.World;

World.init({
    width: 400,
    height: 500,
    backgroundColor: '#000'
});

Reload the page and you should see something like this. Blank screens aren't very exciting, so let's add some sprites.

Drawing Sprites

You can find the graphics used here (special thanks to Jacob Zinman-Jeanes for providing these). Before we can use them though, we need to preload them. Add a preload option to the world initialization like so:

World.init({
    ...
    preload: [
        'sprites/player.json',
        'sprites/enemy1.json',
        'sprites/enemy2.json',
        'sprites/enemy3.json',
        'sprites/enemy4.json'
    ]
});

Accessing sprites is as easy as referencing their frame name (given in the .json files) in a sprite component. To make an animated player, we just have to assemble the right parts in an entity. (I suggest placing these in a factory.js file.)

var Sprite = Psykick2D.Components.GFX.Sprite,
    Animation = Psykick2D.Components.GFX.Animation;

function createPlayer(x, y) {
    var player = World.createEntity(), // Generate a new entity
        sprite = new Sprite({
            frameName: 'player-1',
            x: x,
            y: y,
            width: 64,
            height: 29,
            pivot: { // The original image is sideways
                x: 64,
                y: 29
            },
            rotation: (270 * Math.PI) / 180 // 270 degrees in radians
        }),
        animation = new Animation({
            maxFrames: 3, // zero-indexed
            frames: [
                'player-1',
                'player-2',
                'player-3',
                'player-4'
            ]
        });

    player.addComponent(sprite);
    player.addComponent(animation);
    return player;
}

Entities are composed of components. All an entity needs is the right components and it will work with any system. Because of this, creating enemies looks almost exactly the same, just using the enemy sprites. Once those components are attached, we just add the entities to a system, then the system to a layer. The rest is taken care of automatically.

var mainLayer = World.createLayer(),
    spriteSystem = new Psykick2D.Systems.Render.Sprite(),
    animationSystem = new Psykick2D.Systems.Behavior.Animate();

var player = createPlayer(210, 430);
spriteSystem.addEntity(player);
animationSystem.addEntity(player);

mainLayer.addSystem(spriteSystem);
mainLayer.addSystem(animationSystem);
World.pushLayer(mainLayer);

If you repeat the process for the enemies, you'll end up with a result like what you see here. With your ship on screen, let's add some controls. (source)

Ship Controls

To control the ship, we'll extend the BehaviorSystem to update the player's position on every update. This is made easier using the Keyboard module.
var BehaviorSystem = Psykick2D.BehaviorSystem,
    Keyboard = Psykick2D.Input.Keyboard,
    Keys = Psykick2D.Keys,
    SPEED = 100;

var PlayerControl = function() {
    this.player = null;
    BehaviorSystem.call(this);
};

Psykick2D.Helper.inherit(PlayerControl, BehaviorSystem);

PlayerControl.prototype.update = function(delta) {
    var velocity = 0;

    // Give smooth movement by using the change in time
    if (Keyboard.isKeyDown(Keys.Left)) {
        velocity = -SPEED * delta;
    } else if (Keyboard.isKeyDown(Keys.Right)) {
        velocity = SPEED * delta;
    }

    var player = this.player.getComponent('Sprite');
    player.x += velocity;

    // Don't leave the screen
    if (player.x < 15) {
        player.x = 15;
    } else if (player.x > 340) {
        player.x = 340;
    }
};

Since there's only one player, we'll just set them directly instead of using the addEntity method.

// main.js
...
var controls = new PlayerControl();
controls.player = player;
mainLayer.addSystem(controls);
...

Now that the player can move, we should level the playing field a little bit and give the enemies some brains. (source)

Enemy AI

In the original Space Invaders, the group of aliens would move left to right and then move closer to the player whenever they hit the edge. Since systems only accept entities with the right components, let's tag the enemies as enemies.

function createEnemy(x, y) {
    var enemy = World.createEntity();
    ...
    enemy.addComponentAs(true, 'Enemy');
    return enemy;
}

Creating the enemy AI itself is pretty easy.

var EnemyAI = function() {
    BehaviorSystem.call(this);
    this.requiredComponents = ['Enemy'];
    this.speed = 30;
    this.direction = 1;
};

Psykick2D.Helper.inherit(EnemyAI, BehaviorSystem);

EnemyAI.prototype.update = function(delta) {
    var minX = 1000,
        maxX = -1000,
        velocity = this.speed * this.direction * delta;

    for (var i = 0; i < this.actionOrder.length; i++) {
        var enemy = this.actionOrder[i].getComponent('Sprite');
        enemy.x += velocity;

        // Prevent it from going outside the bounds
        if (enemy.x < 15) {
            enemy.x = 15;
        } else if (enemy.x > 340) {
            enemy.x = 340;
        }

        // Track the min/max
        minX = Math.min(minX, enemy.x);
        maxX = Math.max(maxX, enemy.x);
    }

    // If they hit the boundary
    if (minX <= 15 || maxX >= 340) {
        // Flip around and speed up
        this.direction = this.direction * -1;
        this.speed += 1;

        // Move the row down
        for (var i = 0; i < this.actionOrder.length; i++) {
            var enemy = this.actionOrder[i].getComponent('Sprite');
            enemy.y += enemy.height / 2;
        }
    }
};

Like before, we just add the correct entities to the system and add the system to the layer.

var enemyAI = new EnemyAI();
enemyAI.addEntity(enemy1);
enemyAI.addEntity(enemy2);
...
mainLayer.addSystem(enemyAI);

Incoming! Aliens are now raining down from the sky. We need a way to stop these invaders from space. (source)

Set phasers to kill

To start shooting alien scum, add the bullet sprite to the preload list:

preload: [
    ...
    'sprites/bullet.json'
]

Then generate a bullet just like you did the player. Since the original only let one bullet exist on screen at once, we're going to do the same. So give your PlayerControl system a new property for the bullet and we'll add some shooting ability.

var PlayerControl = function() {
    BehaviorSystem.call(this);
    this.player = null;
    this.bullet = null;
};

...

PlayerControl.prototype.update = function(delta) {
    ...
    var bullet = this.bullet.getComponent('Sprite');

    // If the bullet is off-screen and the player pressed the spacebar
    if (bullet.y <= -bullet.height && Keyboard.isKeyDown(Keys.Space)) {
        // Fire!
        bullet.x = player.x - 18;
        bullet.y = player.y;
    } else if (bullet.y > -bullet.height) {
        // Move the bullet up
        bullet.y -= 250 * delta;
    }
};

Now we just need to draw the bullet and attach it to the PlayerControl system.

var bullet = createBullet(0, -100);
spriteSystem.addEntity(bullet);
controls.bullet = bullet;

And just like that, you've got yourself a working gun. But you can't quite destroy those aliens yet. We need a way of making the bullet collide with the aliens. (source)

Final Step

Psykick2D has a couple of different collision structures built in. For our purposes, we're going to use a grid. But in order to keep everything in sync, we want a dedicated physics system. So we're going to give our sprite components new properties, newX and newY, and set the new position there. For example:

player.newX += velocity;

To create a physics system, simply extend the BehaviorSystem and give it a collision structure.

var Physics = function() {
    BehaviorSystem.call(this);
    this.requiredComponents = ['Sprite'];
    this.grid = new Psykick2D.DataStructures.CollisionGrid({
        width: 400,
        height: 500,
        cellSize: 100,
        componentType: 'Sprite' // Do all collision checks using the sprite
    });
};

Psykick2D.Helper.inherit(Physics, BehaviorSystem);

There's a little bit of work involved, so you can view the full source here. What's important is that we check what kind of entity we're colliding with (entity.hasComponent('Bullet')); then we can destroy it by removing it from the layer. Here's the final product of all of your hard work: a fully functional Space Invaders-like game! Psykick2D has a lot more built in. Go ahead and really polish it up! (final source)

About the Author

Mike Cluck is a software developer interested in game development. He can be found on GitHub at MCluck90.
How to Do Your Own Collision Detection

Mike Cluck
02 Dec 2015
7 min read
In almost every single game, you're going to have different objects colliding. If you've worked with any game engines, this is generally taken care of for you. But what about if you want to write your own?

For starters, we're just going to check for collisions between rectangles. In many cases, you'll want rectangles even for oddly shaped objects. Rectangles are very easy to construct, very easy to check for collisions, and take up very little memory. We can define a rectangle as such:

var rectangle = {
    x: xPosition,
    y: yPosition,
    w: width,
    h: height
};

Checking for collisions between two rectangles can be broken into two checks: compare the right side against the left and the bottom side against the top. Two rectangles only collide if they overlap both horizontally and vertically, so a simple rectangle collision function might look like this:

function isColliding(A, B) {
    var horizontalCollision = (A.x + A.w >= B.x && B.x + B.w >= A.x);
    var verticalCollision = (A.y + A.h >= B.y && B.y + B.h >= A.y);
    return horizontalCollision && verticalCollision;
}

Now that all of our game objects have collision rectangles, how do we decide which ones are colliding?

Check All The Things!

Why not just check everything against everything? The algorithm is really simple:

for (var i = 0; i < rectangles.length; i++) {
    var A = rectangles[i];
    for (var j = 0; j < rectangles.length; j++) {
        // Don't check a rectangle against itself
        if (j === i) {
            continue;
        }

        var B = rectangles[j];
        if (isColliding(A, B)) {
            A.isColliding = true;
            B.isColliding = true;
        }
    }
}

Here's a working sample of this approach. The problem is that this approach becomes drastically slower as the number of rectangles increases. If you're familiar with Big-O, then this is O(n²), which is a big no-no. Try moving around the slider in the example to see how this affects the number of collision checks. Just by looking at the screen you can easily tell that there are rectangles which have no chance of touching each other, such as those on opposite sides of the screen. So to make this more efficient, we can check only objects close to each other. How do we decide which ones could touch?

Enter: Spatial Partitioning

Spatial partitioning is the process of breaking up the space into smaller chunks and only using objects residing in similar chunks for comparisons. This gives us a much better chance of only checking objects that could collide. The simplest way of doing this is to break the screen into a grid.

Uniform Grid

When you use a uniform grid, you divide the screen into equal-sized blocks. Each of the chunks, or buckets as they're commonly called, will contain a list of objects which reside in them. Deciding which bucket to place them in is very simple:

function addRectangle(rect) {
    // cellSize is the width/height of a bucket
    var startX = Math.floor(rect.x / cellSize);
    var endX = Math.floor((rect.x + rect.w) / cellSize);
    var startY = Math.floor(rect.y / cellSize);
    var endY = Math.floor((rect.y + rect.h) / cellSize);

    for (var y = startY; y <= endY; y++) {
        for (var x = startX; x <= endX; x++) {
            // Make sure this rectangle isn't already in this bucket
            if (grid[y][x].indexOf(rect) === -1) {
                grid[y][x].push(rect);
            }
        }
    }
}

A working example can be found here. Simple grids are actually extremely efficient. Mapping an object to a bucket is very straightforward, which means adding and removing objects from the grid is very fast. There is one downside to using a grid though: it requires you to know the size of the world from the beginning, and if that world is too big, you'll consume too much memory just creating the buckets.
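The flip side of insertion is the query: given a rectangle, gather candidates only from the buckets it spans and run isColliding against those. Here's a minimal sketch of that lookup (getNearby and someRect are illustrative names, and it assumes the same grid and cellSize variables as above):

// Collect potential collision partners from the buckets this rectangle spans
function getNearby(rect) {
    var startX = Math.floor(rect.x / cellSize);
    var endX = Math.floor((rect.x + rect.w) / cellSize);
    var startY = Math.floor(rect.y / cellSize);
    var endY = Math.floor((rect.y + rect.h) / cellSize);
    var nearby = [];

    for (var y = startY; y <= endY; y++) {
        for (var x = startX; x <= endX; x++) {
            var row = grid[y];
            if (!row || !row[x]) {
                continue; // outside the grid
            }
            var bucket = row[x];
            for (var i = 0; i < bucket.length; i++) {
                // Skip the rectangle itself and duplicates from shared buckets
                if (bucket[i] !== rect && nearby.indexOf(bucket[i]) === -1) {
                    nearby.push(bucket[i]);
                }
            }
        }
    }

    return nearby;
}

// Only run the pairwise check against nearby candidates
var candidates = getNearby(someRect);
for (var i = 0; i < candidates.length; i++) {
    if (isColliding(someRect, candidates[i])) {
        someRect.isColliding = true;
        candidates[i].isColliding = true;
    }
}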
What we need for worlds that are too big for a pre-allocated grid is a more dynamic structure.

Quad Tree

Imagine your whole screen as one bucket. Well, the problem with the first approach was that we had to check against too many objects, so what if that bucket automatically broke into smaller buckets whenever we put too many objects into it? That's exactly how a quad tree works. A quad tree limits how many objects can exist in each bucket, and when that limit is broken, it subdivides into four distinct quadrants. Each of these quadrants is also a quad tree, so the process can continue recursively. We can define a quad tree like this:

function QuadTree(x, y, w, h, depth) {
    this.x = x;
    this.y = y;
    this.w = w;
    this.h = h;
    this.depth = depth;
    this.objects = [];
    this.children = [];
}

As you can see, a quad tree is also a rectangle. You can think of a quad tree like those little Russian stacking dolls, except rather than finding just one doll inside each, you'll find four. Since these dolls are rectangular, we just check to see if an object intersects one. If it does, we put the object inside of it.

QuadTree.prototype.insert = function(obj) {
    // Only add the object if we're at max depth
    // or haven't exceeded the max object count
    var atMaxDepth = (this.depth > MAX_DEPTH);
    var noChildren = (this.children.length === 0);
    var canAddMore = (this.objects.length < MAX_OBJECTS);

    if (atMaxDepth || (noChildren && canAddMore)) {
        this.objects.push(obj);
    } else if (this.children.length > 0) {
        // If there are children, add to them
        for (var i = 0; i < 4; i++) {
            var child = this.children[i];
            if (isColliding(child, obj)) {
                child.insert(obj);
            }
        }
    } else {
        // Split into quadrants
        var halfWidth = this.w / 2;
        var halfHeight = this.h / 2;
        var top = this.y;
        var bottom = this.y + halfHeight;
        var left = this.x;
        var right = this.x + halfWidth;
        var newDepth = this.depth + 1;
        this.children.push(new QuadTree(right, top, halfWidth, halfHeight, newDepth));
        this.children.push(new QuadTree(left, top, halfWidth, halfHeight, newDepth));
        this.children.push(new QuadTree(left, bottom, halfWidth, halfHeight, newDepth));
        this.children.push(new QuadTree(right, bottom, halfWidth, halfHeight, newDepth));

        // Add the new object to simplify the next section
        this.objects.push(obj);

        // Move each of the objects into the children
        for (var i = 0; i < 4; i++) {
            var child = this.children[i];
            for (var j = 0; j < this.objects.length; j++) {
                var otherObj = this.objects[j];
                if (isColliding(child, otherObj)) {
                    child.insert(otherObj);
                }
            }
        }

        // Clear out the objects from this node
        this.objects = [];
    }
};

At this point, any given object will only reside in nodes where it's relatively close to other objects. To check for collisions, we only need to check against this subset of objects, like we did with the grid-based collision.

QuadTree.prototype.getCollisions = function(obj) {
    var collisions = [];

    // If there are children, get the collisions from there
    if (this.children.length > 0) {
        for (var i = 0; i < 4; i++) {
            var child = this.children[i];
            if (isColliding(child, obj)) {
                // Concatenate together the results
                collisions = collisions.concat(child.getCollisions(obj));
            }
        }
    } else {
        // Otherwise, check against each object for a collision
        for (var i = 0; i < this.objects.length; i++) {
            var other = this.objects[i];

            // Don't compare an object with itself
            if (other === obj) {
                continue;
            }

            if (isColliding(other, obj)) {
                collisions.push(other);
            }
        }
    }

    return collisions;
};

A working example of this is available here.
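To tie it together, here's a minimal usage sketch. It picks arbitrary values for MAX_DEPTH and MAX_OBJECTS, assumes the 400×500 world from the grid example, and assumes a rectangles array of moving objects, so the tree is rebuilt each frame:

var MAX_DEPTH = 5;   // assumed tuning values; adjust for your game
var MAX_OBJECTS = 4;

// Rebuild the tree every frame since objects move
var tree = new QuadTree(0, 0, 400, 500, 0);
for (var i = 0; i < rectangles.length; i++) {
    tree.insert(rectangles[i]);
}

// Query each object against only its nearby subset
for (var i = 0; i < rectangles.length; i++) {
    var hits = tree.getCollisions(rectangles[i]);
    rectangles[i].isColliding = hits.length > 0;
}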
Traversing this tree takes time, so why would we use this instead of a collision grid? Because a quad tree can grow and change as you need it. This means you don't need to know how big your entire world is at the beginning of the game and load it all into memory. Just make sure to tweak your maximum depth and maximum object count.

Conclusion

Now that you've got your feet wet, you can go out and write your own collision detection systems, or at least have a better understanding of how it all works under the hood. There are many more ways of checking collisions, such as using octrees and ray casting, and every approach has its benefits and drawbacks, so you'll have to figure out what works best for you and your game. Now go make something amazing!

About the Author

Mike Cluck is a software developer interested in game development. He can be found on GitHub at MCluck90.