

Video Surveillance, Background Modeling

Packt
30 Dec 2015
7 min read
In this article by David Millán Escrivá, Prateek Joshi, and Vinícius Godoy, the authors of the book OpenCV By Example, we look at how to detect moving objects. In order to detect moving objects, we first need to build a model of the background. This is not the same as direct frame differencing, because we are actually modeling the background and using this model to detect moving objects. When we say that we are modeling the background, we are building a mathematical formulation that can be used to represent it, so this performs much better than the simple frame differencing technique. The technique tries to detect the static parts of the scene and incorporates them into the background model as it updates. The background model is then used to detect background pixels, so it's an adaptive technique that can adjust to the scene.

Naive background subtraction

Let's start the discussion from the beginning. What does a background subtraction process look like? Consider the following image: The preceding image represents the background scene. Now, let's introduce a new object into this scene: As shown in the preceding image, there is a new object in the scene. So, if we compute the difference between this image and our background model, we should be able to identify the location of the TV remote: The overall process looks like this: Does it work well? There's a reason why we call it the naive approach. It works under ideal conditions, and as we know, nothing is ideal in the real world. It does a reasonably good job of computing the shape of the given object, but it does so under some constraints. One of the main requirements of this approach is that the color and intensity of the object should be sufficiently different from that of the background. Some of the factors that affect these kinds of algorithms are image noise, lighting conditions, autofocus in cameras, and so on.
Once a new object enters our scene and stays there, it will be difficult to detect new objects that are in front of it. This is because we don't update our background model, and the new object is now part of our background. Consider the following image: Now, let's say a new object enters our scene: We identify this to be a new object, which is fine. Let's say another object comes into the scene: It will be difficult to identify the location of these two different objects because their locations overlap. Here's what we get after subtracting the background and applying the threshold: In this approach, we assume that the background is static. If some parts of our background start moving, those parts will start getting detected as new objects. So, even a minor movement, say a waving flag, will cause problems in our detection algorithm. This approach is also sensitive to changes in illumination, and it cannot handle any camera movement. Needless to say, it's a delicate approach! We need something that can handle all these things in the real world.

Frame differencing

We know that we cannot keep a static background image to detect objects. So, one way to fix this is to use frame differencing. It is one of the simplest techniques that we can use to see which parts of the video are moving. When we consider a live video stream, the difference between successive frames gives a lot of information. The concept is fairly straightforward: we just take the difference between successive frames and display that difference. If I move my laptop rapidly, we can see something like this: Instead of the laptop, let's move the object and see what happens. If I rapidly shake my head, it will look something like this: As you can see in the preceding images, only the moving parts of the video get highlighted. This gives us a good starting point to see the areas that are moving in the video.
Let's take a look at the function to compute the frame difference:

Mat frameDiff(Mat prevFrame, Mat curFrame, Mat nextFrame) {
    Mat diffFrames1, diffFrames2, output;

    // Compute absolute difference between current frame and the next frame
    absdiff(nextFrame, curFrame, diffFrames1);

    // Compute absolute difference between current frame and the previous frame
    absdiff(curFrame, prevFrame, diffFrames2);

    // Bitwise "AND" operation between the above two diff images
    bitwise_and(diffFrames1, diffFrames2, output);

    return output;
}

Frame differencing is fairly straightforward. We compute the absolute difference between the current frame and the previous frame, and between the current frame and the next frame. We then take these frame differences and apply the bitwise AND operator. This will highlight the moving parts in the image. If you just compute the difference between the current frame and the previous frame, it tends to be noisy. Hence, we need the bitwise AND between successive frame differences to get some stability when we look at the moving objects. Let's take a look at the function that extracts and returns a frame from the webcam:

Mat getFrame(VideoCapture cap, float scalingFactor) {
    Mat frame, output;

    // Capture the current frame
    cap >> frame;

    // Resize the frame
    resize(frame, frame, Size(), scalingFactor, scalingFactor, INTER_AREA);

    // Convert to grayscale
    cvtColor(frame, output, CV_BGR2GRAY);

    return output;
}

As we can see, it's pretty straightforward. We just need to resize the frame and convert it to grayscale. Now that we have the helper functions ready, let's take a look at the main function and see how it all comes together:

int main(int argc, char* argv[]) {
    Mat frame, prevFrame, curFrame, nextFrame;
    char ch;

    // Create the capture object
    // 0 -> input arg that specifies it should take the input from the webcam
    VideoCapture cap(0);

    // If you cannot open the webcam, stop the execution!
    if (!cap.isOpened())
        return -1;

    // Create the GUI window
    namedWindow("Object Movement");

    // Scaling factor to resize the input frames from the webcam
    float scalingFactor = 0.75;

    prevFrame = getFrame(cap, scalingFactor);
    curFrame = getFrame(cap, scalingFactor);
    nextFrame = getFrame(cap, scalingFactor);

    // Iterate until the user presses the Esc key
    while (true) {
        // Show the object movement
        imshow("Object Movement", frameDiff(prevFrame, curFrame, nextFrame));

        // Update the variables and grab the next frame
        prevFrame = curFrame;
        curFrame = nextFrame;
        nextFrame = getFrame(cap, scalingFactor);

        // Get the keyboard input and check if it's 'Esc'
        // 27 -> ASCII value of 'Esc' key
        ch = waitKey(30);
        if (ch == 27) {
            break;
        }
    }

    // Release the video capture object
    cap.release();

    // Close all windows
    destroyAllWindows();

    return 0;
}

How well does it work? As we can see, frame differencing addresses a couple of important problems that we faced earlier. It can quickly adapt to lighting changes or camera movements. If an object comes into the frame and stays there, it will not be detected in the future frames. One of the main concerns with this approach is detecting uniformly colored objects: it can only detect the edges of such an object, because a large portion of the object results in very low pixel differences, as shown in the following image: Let's say this object moved slightly. If we compare this with the previous frame, it will look like this: Hence, we have very few pixels that are labeled on that object. Another concern is that it is difficult to detect whether an object is moving toward the camera or away from it.

Courses, Users, and Roles

Packt
30 Dec 2015
9 min read
This article by Alex Büchner, the author of Moodle 3 Administration, Third Edition, gives an overview of Moodle courses, users, and roles. The three concepts are inherently intertwined, and no one of them can be used without the other two. We will cover the basics of the three core elements and show how they work together. Let's see what they are:

Moodle courses: Courses are central to Moodle, as this is where learning takes place. Teachers upload their learning resources, create activities, assist in learning and grade work, monitor progress, and so on. Students, on the other hand, read, listen to, or watch learning resources, participate in activities, submit work, collaborate with others, and so on.

Moodle users: These are individuals accessing our Moodle system. Typical users are students and teachers/trainers, but there are also others, such as teaching assistants, managers, parents, assessors, examiners, or guests. Oh, and the administrator, of course!

Moodle roles: Roles are effectively permissions that specify which features users are allowed to access and, also, where and when (in Moodle) they can access them.

Bear in mind that this article only covers the basic concepts of these three core elements.

A high-level overview

To give you an overview of courses, users, and roles, let's have a look at the following diagram. It shows nicely how central the three concepts are and also how other features are related to them. Again, all of their intricacies will be dealt with in due course, so for now, just start getting familiar with some Moodle terminology. Let's start at the bottom-left and cycle through the pyramid clockwise. Users have to go through an Authentication process to get access to Moodle. They then have to go through the Enrolments step to be able to participate in Courses, which themselves are organized into Categories.
Groups & Cohorts are different ways to group users at course level or site-wide. Users are granted Roles in particular Contexts. Which role is allowed to do what, and which isn't, depends entirely on the Permissions set within that role. The diagram also demonstrates a catch-22 situation: if we start with users, we have no courses to enroll them into (except the front page); if we start with courses, we have no users who can participate in them. Not to worry, though. Moodle lets us go back and forth between any administrative areas and, often, perform multiple tasks at once.

Moodle courses

Moodle manages activities and stores resources in courses, and this is where learning and collaboration take place. Courses themselves belong to categories, which are organized hierarchically, similar to folders on our local hard drive. Moodle comes with a default category called Miscellaneous, which is sufficient to show the basics of courses. Moodle is a course-centric system. To begin with, let's create the first course. To do so, go to Courses | Manage courses and categories. Here, select the Miscellaneous category. Then, select the Create new course link, and you will be directed to the screen where course details have to be entered. For now, let's focus on the two compulsory fields, namely Course full name and Course short name. The former is displayed at various places in Moodle, whereas the latter is, by default, used to identify the course and is also shown in the breadcrumb trail. For now, we leave all other fields empty or at their default values and save the course by clicking on the Save changes button at the bottom. The screen displayed after clicking on Save changes shows enrolled users, if any. Since we just created the course, there are no users present in the course yet. In fact, except for the administrator account we are currently using, there are no users at all on our Moodle system.
So, we leave the course without users for now and add some users to our LMS before we come back to this screen (select the Home link in the breadcrumb).

Moodle users

Moodle users, or rather their user accounts, are dealt with in Users | Accounts. Before we start, it is important to understand the difference between authentication and enrolment. Moodle users have to be authenticated in order to log in to the system. Authentication grants users access to the system through login, where a username and password have to be given (this also applies to guest accounts, where a username is allotted internally). Moodle supports a significant number of authentication mechanisms, which are discussed later in detail. Enrolment happens at course level. However, a user has to be authenticated to the system before enrolment to a course can take place. So, a typical workflow is as follows (there are exceptions as always, but we will deal with them when we get there):

 1. Create your users
 2. Create your courses (and categories)
 3. Associate users to courses and assign roles

Again, this sequence demonstrates nicely how intertwined courses, users, and roles are in Moodle. Another way of looking at the difference between authentication and enrolment is how a user gets access to a course. Please bear in mind that this is a very simplistic view, and it ignores supported features such as external authentication, guest access, and self-enrolment. During the authentication phase, a user enters his/her credentials (username and password) or they are entered automatically via single sign-on. If the account exists locally, that is, within Moodle, and the password is valid, he/she is granted access. The next phase is enrolment. If the user is enrolled and the enrolment hasn't expired, he/she is granted access to the course. You will come across a more detailed version of these graphics later on, but for now, it hopefully demonstrates the difference between authentication and enrolment.
To add a user account manually, go to Users | Accounts | Add a new user. As with courses, we will only focus on the mandatory fields, which should be self-explanatory:

 • Username (has to be unique)
 • New password (if a password policy has been set, certain rules might apply)
 • First name
 • Surname
 • Email address

Make sure you save the account information by selecting Create user at the bottom of the page. If any entered information is invalid, Moodle will display error messages right above the relevant field. I have created a few more accounts; to see who has access to your Moodle system, go to Users | Accounts | Browse list of users, where you will see all users. Actually, I did this via batch upload. Now that we have a few users on our system, let's go back to the course we created a minute ago and manually enroll new participants to it. To achieve this, go back to Courses | Manage courses and categories, select the Miscellaneous category again, and select the created demo course. Underneath the listed demo course, course details will be displayed alongside a number of options (on large screens, details are shown to the right). Here, select Enrolled users. As expected, the list of enrolled users is still empty. Click on the Enrol users button to change this. To grant users access to the course, select the Enrol button beside them and close the window. In the following screenshot, three users, participant01 to participant03, have already been enrolled to the course. Two more users, participant04 and participant05, have been selected for enrolment. You have probably spotted the Assign roles dropdown at the top of the pop-up window. This is where you select which role the selected user has once he/she is enrolled in the course. For example, to give Tommy Teacher appropriate access to the course, we have to select the Teacher role first, before enrolling him to the course. This leads nicely to the third part of the pyramid, namely, roles.
Moodle roles

Roles define what users can or cannot see and do in your Moodle system. Moodle comes with a number of predefined roles—we already saw Student and Teacher—but it also allows us to create our own roles, for instance, for parents or external assessors. Each role has a certain scope (called context), which is defined by a set of permissions (expressed as capabilities). For example, a teacher is allowed to grade an assignment, whereas a student isn't. Or, a student is allowed to submit an assignment, whereas a teacher isn't. A role is assigned to a user in a context. Okay, so what is a context? A context is a ring-fenced area in Moodle where roles can be assigned to users. A user can be assigned different roles in different contexts, where the context can be a course, a category, an activity module, a user, a block, the front page, or Moodle itself. For instance, you are assigned the Administrator role for the entire system, but additionally, you might be assigned the Teacher role in any courses you are responsible for; or, a learner will be given the Student role in a course, but might have been granted the Teacher role in a forum to act as a moderator. To give you a feel for how a role is defined, let's go to Users | Permissions, where roles are managed, and select Define roles. Click on the Teacher role and, after some general settings, you will see a (very) long list of capabilities. For now, we only want to stick with the example we used throughout the article. Now that we know what roles are, we can slightly rephrase what we have done. Instead of saying, "We have enrolled the user participant01 in the demo course as a student", we would say, "We have assigned the Student role to the user participant01 in the context of the demo course." In fact, the term enrolment is a little bit of a legacy and goes back to the times when Moodle didn't have the customizable, finely-grained architecture of roles and permissions that it does now.
One can speculate whether there are linguistic connotations between the terms role and enrolment.

Summary

In this article, we very briefly introduced the concepts of Moodle courses, users, and roles. We also saw how central they are to Moodle and how they are linked together. No one of these concepts can exist without the other two, and this is something you should bear in mind throughout. Well, theoretically they can, but it would be rather impractical when you try to model your learning environment. If you haven't fully understood any of the three areas, don't worry. The intention was only to provide you with a high-level overview of the three core components and to touch upon the basics.

Making an In-Game Console in Unity Part 2

Eliot Lash
28 Dec 2015
10 min read
In part 1, I started walking you through making a console using uGUI, Unity’s built-in GUI framework, and showed you how to implement a simple input parser. Let's continue where we left off here in Part 2. We’re going to program the behavior of the console. I split this into a ConsoleController, which handles parsing and executing commands, and a view component that handles the communication between the ConsoleController and uGUI. This makes the parser code cleaner, and easier to switch to a different GUI system in the future if needed. First, make a new script called ConsoleController. Completely delete its contents and replace them with the following class:

/// <summary>
/// Handles parsing and execution of console commands, as well as collecting log output.
/// Copyright (c) 2014-2015 Eliot Lash
/// </summary>
using UnityEngine;
using System;
using System.Collections.Generic;
using System.Text;

public delegate void CommandHandler(string[] args);

public class ConsoleController {

    #region Event declarations
    // Used to communicate with ConsoleView
    public delegate void LogChangedHandler(string[] log);
    public event LogChangedHandler logChanged;

    public delegate void VisibilityChangedHandler(bool visible);
    public event VisibilityChangedHandler visibilityChanged;
    #endregion

    /// <summary>
    /// Object to hold information about each command
    /// </summary>
    class CommandRegistration {
        public string command { get; private set; }
        public CommandHandler handler { get; private set; }
        public string help { get; private set; }

        public CommandRegistration(string command, CommandHandler handler, string help) {
            this.command = command;
            this.handler = handler;
            this.help = help;
        }
    }

    /// <summary>
    /// How many log lines should be retained?
    /// Note that strings submitted to appendLogLine with embedded newlines will be counted as a single line.
    /// </summary>
    const int scrollbackSize = 20;

    Queue<string> scrollback = new Queue<string>(scrollbackSize);
    List<string> commandHistory = new List<string>();
    Dictionary<string, CommandRegistration> commands = new Dictionary<string, CommandRegistration>();

    public string[] log { get; private set; } //Copy of scrollback as an array for easier use by ConsoleView

    const string repeatCmdName = "!!"; //Name of the repeat command, constant since it needs to skip these if they are in the command history

    public ConsoleController() {
        //When adding commands, you must add a call below to registerCommand() with its name, implementation method, and help text.
        registerCommand("babble", babble, "Example command that demonstrates how to parse arguments. babble [word] [# of times to repeat]");
        registerCommand("echo", echo, "echoes arguments back as array (for testing argument parser)");
        registerCommand("help", help, "Print this help.");
        registerCommand("hide", hide, "Hide the console.");
        registerCommand(repeatCmdName, repeatCommand, "Repeat last command.");
        registerCommand("reload", reload, "Reload game.");
        registerCommand("resetprefs", resetPrefs, "Reset & saves PlayerPrefs.");
    }

    void registerCommand(string command, CommandHandler handler, string help) {
        commands.Add(command, new CommandRegistration(command, handler, help));
    }

    public void appendLogLine(string line) {
        Debug.Log(line);

        if (scrollback.Count >= ConsoleController.scrollbackSize) {
            scrollback.Dequeue();
        }
        scrollback.Enqueue(line);

        log = scrollback.ToArray();
        if (logChanged != null) {
            logChanged(log);
        }
    }

    public void runCommandString(string commandString) {
        appendLogLine("$ " + commandString);

        string[] commandSplit = parseArguments(commandString);
        string[] args = new string[0];
        if (commandSplit.Length < 1) {
            appendLogLine(string.Format("Unable to process command '{0}'", commandString));
            return;
        } else if (commandSplit.Length >= 2) {
            int numArgs = commandSplit.Length - 1;
            args = new string[numArgs];
            Array.Copy(commandSplit, 1, args, 0, numArgs);
        }
        runCommand(commandSplit[0].ToLower(), args);
        commandHistory.Add(commandString);
    }

    public void runCommand(string command, string[] args) {
        CommandRegistration reg = null;
        if (!commands.TryGetValue(command, out reg)) {
            appendLogLine(string.Format("Unknown command '{0}', type 'help' for list.", command));
        } else {
            if (reg.handler == null) {
                appendLogLine(string.Format("Unable to process command '{0}', handler was null.", command));
            } else {
                reg.handler(args);
            }
        }
    }

    static string[] parseArguments(string commandString) {
        LinkedList<char> parmChars = new LinkedList<char>(commandString.ToCharArray());
        bool inQuote = false;
        var node = parmChars.First;
        while (node != null) {
            var next = node.Next;
            if (node.Value == '"') {
                inQuote = !inQuote;
                parmChars.Remove(node);
            }
            if (!inQuote && node.Value == ' ') {
                node.Value = '\n';
            }
            node = next;
        }
        char[] parmCharsArr = new char[parmChars.Count];
        parmChars.CopyTo(parmCharsArr, 0);
        return (new string(parmCharsArr)).Split(new char[] {'\n'}, StringSplitOptions.RemoveEmptyEntries);
    }

    #region Command handlers
    //Implement new commands in this region of the file.

    /// <summary>
    /// A test command to demonstrate argument checking/parsing.
    /// Will repeat the given word a specified number of times.
    /// </summary>
    void babble(string[] args) {
        if (args.Length < 2) {
            appendLogLine("Expected 2 arguments.");
            return;
        }
        string text = args[0];
        if (string.IsNullOrEmpty(text)) {
            appendLogLine("Expected arg1 to be text.");
        } else {
            int repeat = 0;
            if (!Int32.TryParse(args[1], out repeat)) {
                appendLogLine("Expected an integer for arg2.");
            } else {
                for (int i = 0; i < repeat; ++i) {
                    appendLogLine(string.Format("{0} {1}", text, i));
                }
            }
        }
    }

    void echo(string[] args) {
        StringBuilder sb = new StringBuilder();
        foreach (string arg in args) {
            sb.AppendFormat("{0},", arg);
        }
        if (sb.Length > 0) { // Guard against an empty argument list
            sb.Remove(sb.Length - 1, 1);
        }
        appendLogLine(sb.ToString());
    }

    void help(string[] args) {
        foreach (CommandRegistration reg in commands.Values) {
            appendLogLine(string.Format("{0}: {1}", reg.command, reg.help));
        }
    }

    void hide(string[] args) {
        if (visibilityChanged != null) {
            visibilityChanged(false);
        }
    }

    void repeatCommand(string[] args) {
        for (int cmdIdx = commandHistory.Count - 1; cmdIdx >= 0; --cmdIdx) {
            string cmd = commandHistory[cmdIdx];
            if (String.Equals(repeatCmdName, cmd)) {
                continue;
            }
            runCommandString(cmd);
            break;
        }
    }

    void reload(string[] args) {
        Application.LoadLevel(Application.loadedLevel);
    }

    void resetPrefs(string[] args) {
        PlayerPrefs.DeleteAll();
        PlayerPrefs.Save();
    }
    #endregion
}

I’ve tried to comment where appropriate, but I’ll give you a basic rundown of this class. It maintains a registry of methods that are mapped to string command names, as well as associated help text. This allows the “help” command to print out all the available commands along with extra info on each one. It keeps track of the output scrollback as well as the history of user-entered commands (this is to aid implementation of bash-style command history paging, which is left as an exercise to the reader, although I have implemented a simple command, ‘!!’, which repeats the most recent command).
When the view receives command input, it passes it to runCommandString(), which calls parseArguments() to perform some rudimentary string parsing using a space as a delimiter. It then calls runCommand(), which tries to look up the corresponding method in the command registration dictionary and, if it finds it, calls it with the remaining arguments. Commands can call appendLogLine() to write to the in-game console log, and of course execute arbitrary code. Moving on, we will implement the view. Attach a new script to the ConsoleView object (the parent of ConsoleViewContainer) and call it ConsoleView. Replace its contents with the following:

/// <summary>
/// Marshals events and data between ConsoleController and uGUI.
/// Copyright (c) 2014-2015 Eliot Lash
/// </summary>
using UnityEngine;
using UnityEngine.UI;
using System.Text;
using System.Collections;

public class ConsoleView : MonoBehaviour {
    ConsoleController console = new ConsoleController();

    bool didShow = false;

    public GameObject viewContainer; //Container for console view, should be a child of this GameObject
    public Text logTextArea;
    public InputField inputField;

    void Start() {
        if (console != null) {
            console.visibilityChanged += onVisibilityChanged;
            console.logChanged += onLogChanged;
        }
        updateLogStr(console.log);
    }

    ~ConsoleView() {
        console.visibilityChanged -= onVisibilityChanged;
        console.logChanged -= onLogChanged;
    }

    void Update() {
        //Toggle visibility when tilde key pressed
        if (Input.GetKeyUp("`")) {
            toggleVisibility();
        }

        //Toggle visibility when 5 fingers touch.
        if (Input.touches.Length == 5) {
            if (!didShow) {
                toggleVisibility();
                didShow = true;
            }
        } else {
            didShow = false;
        }
    }

    void toggleVisibility() {
        setVisibility(!viewContainer.activeSelf);
    }

    void setVisibility(bool visible) {
        viewContainer.SetActive(visible);
    }

    void onVisibilityChanged(bool visible) {
        setVisibility(visible);
    }

    void onLogChanged(string[] newLog) {
        updateLogStr(newLog);
    }

    void updateLogStr(string[] newLog) {
        if (newLog == null) {
            logTextArea.text = "";
        } else {
            logTextArea.text = string.Join("\n", newLog);
        }
    }

    /// <summary>
    /// Event that should be called by anything wanting to submit the current input to the console.
    /// </summary>
    public void runCommand() {
        console.runCommandString(inputField.text);
        inputField.text = "";
    }
}

The ConsoleView script manages the GUI and forwards events to the ConsoleController. It also watches the ConsoleController and updates the GUI when necessary. Back in the inspector, select ConsoleView. We’re going to hook up the Console View component properties. Drag ConsoleViewContainer into the “View Container” property. Do the same for LogText into “Log Text Area” and InputField into “Input Field.” Now we’ve just got a bit more hooking up to do. Select InputField, and in the Input Field component, find the “End Edit” event list. Click the plus button and drag ConsoleView into the new row. In the function list, select ConsoleView > runCommand(). Finally, select EnterBtn and find the “On Click” event list in the Button component. Click the plus button and drag ConsoleView into the new row. In the function list, select ConsoleView > runCommand(). Now we’re ready to test! Save your scene and run the game. The console should be visible. Type “help” into the input field: Now press the enter/return key. You should see the help text print out like so: Try out another test command, “echo foo bar baz”.
It will show you how it splits the command arguments into a string array, printed out as a comma-separated list: Also make sure the fallback “Ent” button is working to submit the input. Lastly, check that the console toggle key works: press the backtick/tilde key (located right above the left Tab key). It looks like this: The console should disappear. Press it again and it should reappear. On a mobile device, tapping five fingers at once will toggle the console instead. If you want to use a different means of toggling the console, you can edit this in ConsoleView.Update(). If anything is not working as expected, please go back over the instructions and check whether you’ve missed anything. Lastly, we don’t want the console to show when the game first starts. Stop the game and find ConsoleViewContainer in the hierarchy, then disable it by unchecking the box next to its name in the inspector. Now, save and run the game again. The console should be hidden until you press the backtick key. And that’s it! You now have an in-game, interactive console. It’s an extremely versatile debugging tool that’s easy to extend. Use it to implement cheat codes, enable or disable experimental features, obtain diagnostic output, or whatever else you can think of! When you want to create a new console command, just write a new method in the ConsoleController “Command handlers” region and add a registerCommand() line for it in the constructor. Use the commands I’ve included as examples. If you want other scripts to be able to log to the console, you can make the ConsoleController into a service as I described in my article “One-liner Singletons in Unity”. Make the ConsoleController set itself as a service in its constructor, then have the other script get the ConsoleController instance and call appendLogLine() with its message. I hope having an in-game console will be as useful for you as it has been for me.
Finally, don’t forget to disable or delete the ConsoleView before shipping production builds, unless you want your players to have access to all of your debug cheats!

About the author

Eliot Lash is an independent game developer and consultant with several mobile titles in the works. In the past, he has worked on Tiny Death Star and the Tap Tap Revenge series. You can find him at eliotlash.com.
Eliot Lash
16 Dec 2015
6 min read

One-liner Singletons in Unity

This article is intended for intermediate-level C# programmers or above, and assumes some familiarity with object-oriented programming terminology. The focus is on Unity, but this approach works just as well in any C# environment.

I’ve seen a number of common techniques for implementing singletons in Unity. A little background for those unfamiliar with the term: a singleton is simply an object of which only one instance ever exists, and this instance is globally accessible. This is great for shared data or services that need to be accessible from any script in your game, such as the player’s stats.

There are many ways to do this. One way is to just have a GameObject in your scene with a certain MonoBehaviour, and have other scripts look it up by tag (or even slower, by name). There are a few issues with this approach. First of all, if you accidentally have two GameObjects with the same name or tag, you’ll arbitrarily interact with one of them instead of the other; Unity will not notify you about this, and it could lead to bugs. Secondly, looking up an object by name carries a performance penalty, and it’s more busywork to tag every object if you want to use the tag lookup method.

Another common way is to just copy-paste the code to make a class into a singleton. If we are following the principle of avoiding code duplication, an easy way to refactor this approach is by rolling the singleton code into a subclass of MonoBehaviour and having our singletons inherit from that. A problem with this approach is that now we are adding rigidity to our class hierarchy, so we won’t be able to have a singleton that also inherits from a non-singleton subclass of MonoBehaviour.

Both of these approaches also require your singleton to be a MonoBehaviour. This is often convenient, but limiting. For instance, if you are using the Model-View-Controller pattern, you may want your models and controllers to not be MonoBehaviours at all but rather “Plain Old” C# objects.
The approach that I am presenting in this article gets around all of the above limitations, while providing some additional advantages. Instead of the classic singleton pattern of each class having a static instance variable, we will create a “service manager” singleton that will hold all of the instances we want global access to. The service manager provides the following advantages:

- Any class can be made into a service with a single line of code.
- Strong typing makes it unnecessary to cast services when referencing them.
- Bugs which cause a service instance to be set more than once result in an exception, making them easier to track down.
- Services don’t have to inherit from a specific class.
- By containing all service references in one place, it’s easier to clear them out in a single pass.

Without further ado, here is my implementation of the service manager class:

/// <summary>
/// Simple service manager. Allows global access to a single instance of any class.
/// Copyright (c) 2014-2015 Eliot Lash
/// </summary>
using System;
using System.Collections.Generic;

public class Services {
    //Statics
    private static Services _instance;

    //Instance
    private Dictionary<Type, object> services = new Dictionary<Type, object>();

    public Services() {
        if (_instance != null) {
            UnityEngine.Debug.LogError("Cannot have two instances of singleton.");
            return;
        }
        _instance = this;
    }

    /// <summary>
    /// Getter for singleton instance.
    /// </summary>
    public static Services instance {
        get {
            if (_instance == null) {
                new Services();
            }
            return _instance;
        }
    }

    /// <summary>
    /// Set the specified service instance. Usually called like Set<ExampleService>(this).
    /// </summary>
    /// <param name="service">Service instance object.</param>
    /// <typeparam name="T">Type of the instance object.</typeparam>
    public void Set<T>(T service) where T : class {
        services.Add(typeof(T), service);
    }

    /// <summary>
    /// Gets the specified service instance. Called like Get<ExampleService>().
    /// </summary>
    /// <typeparam name="T">Type of the service.</typeparam>
    /// <returns>Service instance, or null if not initialized</returns>
    public T Get<T>() where T : class {
        T ret = null;
        try {
            ret = services[typeof(T)] as T;
        } catch (KeyNotFoundException) {
        }
        return ret;
    }

    /// <summary>
    /// Clears internal dictionary of service instances.
    /// This will not clear out any global state that they contain,
    /// unless there are no other references to the object.
    /// </summary>
    public void Clear() {
        services.Clear();
    }
}

As this is a classic singleton, it will be lazily instantiated the first time it’s used, so all you need to do is save this script into your project to get started. Now, let’s make a small script into a service. Create an empty GameObject and call it “TestService”. Also create a script called “TestService” and attach it. Now, add the following method:

void Awake() {
    Services.instance.Set<TestService>(this);
}

Our class is now a service! Caution: To get around issues of initialization dependency when using MonoBehaviours, use a two-phase init process. All service instances should be set in their Awake() method, and not used by other classes until their respective Start() methods or later. For more information see Execution Order of Event Functions.

We’ll also add a stub method to TestService to demonstrate how it can be used:

public void Foo() {
    Debug.Log("TestService: Foo!");
}

Now, create another empty GameObject and call it “TestClient”, and attach a new script also called “TestClient”. We’ll change its Start() method to look like so:

void Start () {
    TestService ts = Services.instance.Get<TestService>();
    ts.Foo();
    //If you're only doing one operation with the service, it can be written even more compactly:
    //Services.instance.Get<TestService>().Foo();
}

Now when you run the game, you should see the test message get written to the Unity console. And that’s all there is to it!

Also, a note on global state.
Earlier, I mentioned that clearing out global state is easier with the service manager. The service manager code sample I provided has a method (Services.Clear()) which will clear out its internal dictionary of services, but this will not completely reset their state. Unfortunately, this is a complex topic beyond the scope of this article, but I can offer some suggestions. If you are using MonoBehaviours as your services, calling Services.Clear() and then reloading the scene might be enough to do the trick. Otherwise, you’ll need to find a way to notify each service to clean itself up before clearing the service manager, such as having them all implement an interface with a cleanup method.

I hope you’ll give this a try, and enjoy the ease of creating and accessing more error-resistant and strictly typed global services in one line of code. For more Unity game development tutorials and extra content, visit our Unity page here.

About the Author

Eliot Lash is an independent game developer and consultant with several mobile titles in the works. In the past, he has worked on Tiny Death Star and the Tap Tap Revenge series. You can find him at eliotlash.com.
Anna Gerber
14 Dec 2015
5 min read

Programming littleBits circuits with JavaScript Part 2

In this two-part series, we're programming littleBits circuits using the Johnny-Five JavaScript Robotics Framework. Be sure to read over Part 1 before continuing here.

Let's create a circuit to play with, using all of the modules from the littleBits Arduino Coding Kit. Attach a button to the Arduino connector labelled d0. Attach a dimmer to the connector marked a0 and a second dimmer to a1. Turn the dimmers all the way to the right (max) to start with. Attach a power module to the single connector on the left-hand side of the fork module, and the three output connectors of the fork module to all of the input modules. The bargraph should be connected to d5, and the servo to d9, and both set to PWM output mode using the onboard switches of the Arduino.

The servo module has two modes: turn and swing. Swing mode makes the servo sweep between maximum and minimum. Set it to swing mode using the onboard switch.

Reading input values

We'll create an instance of the Johnny-Five Button class to respond to button press events. Our button is connected to the connector labelled d0 (i.e. digital "pin" 0) on our Arduino, so we'll need to specify the pin as an argument when we create the button.

var five = require("johnny-five");
var board = new five.Board();

board.on("ready", function() {
  var button = new five.Button(0);
});

Our dimmers are connected to analog pins (A0 and A1), so we'll specify these as strings when we create Sensor objects to read their values. We can also provide options for reading the values; for example, we'll set the frequency to 250 milliseconds, so we'll be receiving 4 readings per second for both dimmers.
var dimmer1 = new five.Sensor({
  pin: "A0",
  freq: 250
});

var dimmer2 = new five.Sensor({
  pin: "A1",
  freq: 250
});

We can attach a function that will be run any time the value changes (on "change") or any time we get a reading (on "data"):

dimmer1.on("change", function() {
  // raw value (between 0 and 1023)
  console.log("dimmer 1 is " + this.raw);
});

Run the code and try turning dimmer 1. You should see the value printed to the console whenever the dimmer value changes.

Triggering behavior

Now we can use code to hook our input components up to our output components. To use the dimmer to control the brightness of the bargraph, for example, change the code in the event handler:

var led = new five.Led(5);

dimmer1.on("change", function() {
  // set bargraph brightness to one quarter
  // of the raw value from dimmer
  led.brightness(Math.floor(this.raw / 4));
});

You'll see the bargraph brightness fade as you turn the dimmer. We can use the JavaScript Math library and operators to manipulate the brightness value before we send it to the bargraph. Writing code gives us more control over the mapping between input values and output behaviors than if we'd snapped our littleBits modules directly together without going via the Arduino. We set our d5 output to PWM mode, so all of the LEDs should fade in and out at the same time. If we set the output to analog mode instead, we'd see the behavior change to light up more or fewer LEDs depending on the brightness value.

Let's use the button to trigger the servo stop and start functions. Add a button press handler to your code, and a variable to keep track of whether the servo is running or not. We'll toggle this variable between true and false using JavaScript's boolean not operator (!). We can determine whether to stop or start the servo each time the button is pressed via a conditional statement based on the value of this variable.
var servo = new five.Motor(9);
servo.start();

var button = new five.Button(0);
var toggle = false;
var speed = 255;

button.on("press", function(value) {
  toggle = !toggle;
  if (toggle) {
    servo.start(speed);
  } else {
    servo.stop();
  }
});

The other dimmer can be used to change the servo speed:

dimmer2.on("change", function() {
  speed = Math.floor(this.raw / 4);
  if (toggle) {
    servo.start(speed);
  }
});

There are many input and output modules available within the littleBits system for you to experiment with. You can use the Sensor class with input modules, and check out the Johnny-Five API docs to see examples of the types of outputs supported by the API. You can always fall back to using the Pin class to program any littleBits module.

Using the REPL

Johnny-Five includes a Read-Eval-Print-Loop (REPL) so you can interactively write code to control components instantly - no waiting for code to compile and upload! Any of the JavaScript objects from your program that you want to access from the REPL need to be "injected". The following code, for example, injects our servo and led objects.

this.repl.inject({
  led: led,
  servo: servo
});

After running the program using Node.js, you'll see a >> prompt in your terminal. Try some of the following functions (hit Enter after each to see it take effect):

servo.stop(): stop the servo
servo.start(50): start servo moving at slow speed
servo.start(255): start servo moving at max speed
led.on(): turn LED on
led.off(): turn LED off
led.pulse(): slowly fade LED in and out
led.stop(): stop pulsing LED
led.brightness(100): set brightness of LED - the parameter should be between 0 and 255

littleBits are fantastic for prototyping, and pairing the littleBits Arduino with JavaScript makes prototyping interactive electronic projects even faster and easier.

About the author

Anna Gerber is a full-stack developer with 15 years of experience in the university sector.
Specializing in Digital Humanities, she was a Technical Project Manager at the University of Queensland’s eResearch centre, and she has worked at Brisbane’s Distributed System Technology Centre as a Research Scientist. Anna is a JavaScript robotics enthusiast who enjoys tinkering with soft circuits and 3D printers.
Eliot Lash
14 Dec 2015
6 min read

Making an In-Game Console in Unity Part 1

This article is intended for intermediate-level Unity developers or above.

One of my favorite tools that I learned about while working in the game industry is the in-game console. It’s essentially a bare-bones command-line environment where you can issue text commands to your game and get text output. Unlike Unity’s built-in console (which is really just a log), it can take input and display output on whatever device the game is running on. I’m going to walk you through making a console using uGUI, Unity’s built-in GUI framework available in Unity 4.6 and later. I’m assuming you have some familiarity with it already, so if not I’d recommend reading the UI Overview before continuing. I’ll also show how to implement a simple input parser.

I’ve included a unitypackage with an example scene showing the console all set up, as well as a prefab console that you can drop into an existing scene if you’d like. However, I will walk you through setting everything up from scratch below.

If you don’t have a UI Canvas in your scene yet, make one by selecting GameObject > UI > Canvas from the menu bar. Now, let’s get started by making a parent object for our console view. Right click on the Canvas in the hierarchy and select “Create Empty”. There will now be an empty GameObject in the Canvas hierarchy that we can use as the parent for our console. Rename this object to “ConsoleView”.

I personally like to organize my GUI hierarchy a bit to make it easier to do flexible layouts and turn elements of the GUI off and on, so I also made some additional parent objects for “HUD” (GUI elements that are part of a layer that draws over the game, usually while the game is running) and a child of that for “DevHUD”, those HUD elements that are only needed during development. This makes it easier to disable or delete the DevHUD when making a production build of my game. However, this is optional.
Enter 2D selection mode and scale the ConsoleView so it fills the width of the Canvas and most of its height. Then set its anchor mode to “stretch top”.

Now right click on ConsoleView in the hierarchy and select “Create Empty”. Rename this new child “ConsoleViewContainer”. Drag it to the same size as its parent, and set its anchor mode to “stretch stretch”. We need this additional container as the console needs the ability to show and hide during gameplay, so we will be enabling and disabling ConsoleViewContainer. But we still need the ConsoleView object to stay enabled so that it can listen for the special gesture/keypress which the user will input to bring up the console.

Next, we’ll create our text input field. Right click on ConsoleViewContainer in the hierarchy and select UI > Input Field. Align the InputField with the upper left corner of ConsoleViewContainer and drag it out to about 80% of the screen width. Then set the anchor mode to “stretch top”. I prefer a dark console, so I changed the Image Color to dark grey. Open up the children of the InputField and you can edit the placeholder text; I set mine to “Console input”. You may also change the Placeholder and Text color to white if you want to use a dark background.

On some platforms at the time of writing, Unity won’t handle the native enter/submit button correctly, so we’re going to add a fallback enter button next. (If you’re sure this won’t be an issue on your platforms, you can skip this paragraph and resize the console input to fill the width of the container.) Right click on ConsoleViewContainer in the hierarchy and select UI > Button. Align the button to the right of the InputField and set the anchor to “stretch top”. Rename the Button to EnterBtn. Select its text child in the hierarchy and edit the text to say “Ent”.

Next, we’re going to make the view for the console log output. Right click on ConsoleViewContainer in the hierarchy and select UI > Image.
Drag the image to fill the area below the InputField and set the anchor to “stretch stretch”. Rename Image to LogView. If you want a dark console (you know you do!) change the image color to black. Now at the bottom of the inspector, click “Add Component” and select UI > Mask. Again, click “Add Component” and select UI > Scroll Rect.

Right click on LogView in the hierarchy and select UI > Text. Scale it to fill the LogView and set the anchor to “stretch bottom”. Rename it to LogText. Set the text to bottom align. If you’re doing a dark console, set the text color to white. To make sure we’ve got everything set up properly, add a few paragraphs of placeholder text (my favorite source for this is the hipster ipsum generator.) Now drag the top way up past the top of the canvas to give room for the log scrollback. If it’s too short, the log rotation code we’ll write later might not work properly.

Now, we’ll make the scrollbar. Right click on ConsoleViewContainer in the hierarchy and select UI > Scrollbar. In the Scrollbar component, set the Direction to “Bottom To Top”, and set the Value to 0. Size it to fit between the LogView and the edge of the container and set the anchor to “stretch stretch”.

Finally, we’ll hook up our complete scroll view. Select LogView and in the Scroll Rect component, drag in LogText to the “Content” property, and Scrollbar into the “Vertical Scrollbar” property. Then, uncheck the “Horizontal” box.

Go ahead and run the game to make sure we’ve set everything up correctly. You should be able to drag the scroll bar and watch the text scroll down. If not, go back through the previous steps and try to figure out if you missed anything.

This concludes part one. Stay tuned for part two of this series where you will learn how to program the behavior of the console. Find more Unity game development tutorials and content on our Unity page.
About the Author Eliot Lash is an independent game developer and consultant with several mobile titles in the works. In the past, he has worked on Tiny Death Star and the Tap Tap Revenge series. You can find him at eliotlash.com.
Adam Lynch
11 Dec 2015
6 min read

Platform detection in your NW.js app

There are various reasons why you might want to detect which platform or operating system your NW.js app is currently being run on. Your keyboard shortcuts or UI may differ per platform, you might want to store files in platform-specific directories on disk, etc. Thanks to node's (or io.js') os module, it isn't too difficult.

Which operating system?

On Mac, Linux and Windows, the following script would output darwin, linux and win32 respectively (note that os.platform() returns win32 even on 64-bit Windows):

var os = require('os');
console.log(os.platform());

The other possible return values of os.platform() are freebsd and sunos.

Which Linux distribution?

Figuring this out is a bit more problematic. The uname -v command returns some information like the following if run on Ubuntu: #42~precise1-Ubuntu SMP Wed Aug 14 15:31:16 UTC 2013. You could spawn this command via io.js' child_process module or any of the countless similar modules on npm. This doesn't give you much though; it's probably safest to check for and read distribution-specific release information files (with io.js' fs module), which include:

Debian: /etc/debian_release and /etc/debian_version, but be careful as these also exist on Ubuntu
Fedora: /etc/fedora-release
Gentoo: /etc/gentoo-release
Mandrake: /etc/mandrake-release
Novell SUSE: /etc/SUSE-release
Red Hat: /etc/redhat-release and /etc/redhat_version
Slackware: /etc/slackware-release and /etc/slackware-version
Solaris / Sparc: /etc/release
Sun JDS: /etc/sun-release
UnitedLinux: /etc/UnitedLinux-release
Ubuntu: /etc/lsb-release and /etc/os-release
Yellow dog: /etc/yellowdog-release

Keep in mind that the format of each of these files can differ. An example /etc/lsb-release file:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.3 LTS"

An example /etc/os-release file:

NAME="Ubuntu"
VERSION="12.04.3 LTS, Precise Pangolin"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu precise (12.04.3 LTS)"
VERSION_ID="12.04"

32-bit or 64-bit architecture?
The safest way to check this is to use os.arch() in combination with system environment variables; the following script will output 32 or 64 depending on the architecture:

var os = require('os');
var is64Bit = os.arch() === 'x64' || process.env.hasOwnProperty('PROCESSOR_ARCHITEW6432');
console.log(is64Bit ? 64 : 32);

Version detection

This is a bit trickier. os.release() returns the platform version, but it is not what you'd expect it to be. It will return the actual internal (not sure if this is the right word?) operating system version. You might expect the return value to be 10.0.0 when called on Mac OS X Yosemite but it will in fact return 14.0.0. To see the mappings between Darwin and Mac release versions, see the Darwin (operating system) Wikipedia entry.

Microsoft provide Windows' versions, but you may need to do some testing yourself to be safe; as you can see, Windows 8 and Windows Server 2012 are both listed as 6.2. In my experience, it's safe to check against 6.2.9200 but don't take my word for it.

os.release() will return what uname -r would return on Linux (e.g. 3.8.0-29-generic on Ubuntu 12.04.3 LTS), so it's safer to read the distribution-specific release information file(s) we saw earlier.

The finished article

The final version of our platform.js module looks like this:

var os = require('os');

var platform = {
  isLinux: false,
  isMac: false,
  isWindows: false,
  isWindows8: false,
  version: os.release()
};

/**
 * Checks if the current platform version is greater than or equal to the desired minimum version given
 *
 * @param minimumVersion {string} E.g. 10.0.0.
 * See [the Darwin operating system Wikipedia entry](http://en.wikipedia.org/wiki/Darwin_%28operating_system%29#Release_history) for Mac - Darwin versions.
 * Also, Windows 8 >= 6.2.9200
 *
 * @returns {boolean}
 */
var isOfMinimumVersion = function(minimumVersion) {
  var actualVersionPieces = platform.version.split('.'),
      pieces = minimumVersion.split('.'),
      numberOfPieces = pieces.length;

  for (var i = 0; i < numberOfPieces; i++) {
    var piece = parseInt(pieces[i], 10),
        actualPiece = parseInt(actualVersionPieces[i], 10);

    if (typeof actualPiece === 'undefined') {
      break; // e.g. 13.1 passed and actual is 13.1.0
    } else if (actualPiece > piece) {
      break; // doesn't matter what the next bits are, the major version (or whatever) is larger
    } else if (actualPiece === piece) {
      continue; // to check next version piece
    } else {
      return false;
    }
  }

  return true; // all was ok
};

var name = os.platform();
if (name === 'darwin') {
  platform.name = 'mac';
  platform.isMac = true;
} else if (name === 'linux') {
  platform.name = 'linux';
  platform.isLinux = true;
} else {
  platform.name = 'windows';
  platform.isWindows = true;
  platform.isWindows8 = isOfMinimumVersion('6.2.9200');
}

platform.is64Bit = os.arch() === 'x64' || process.env.hasOwnProperty('PROCESSOR_ARCHITEW6432');

module.exports = platform;

Take note of our isOfMinimumVersion method and isWindows8 property. So then, from anywhere in your app you could require this module and use it for platform-specific code where needs be. For example:

var platform = require('./platform');

if (platform.isMac) {
  // do something
} else if (platform.isWindows8 && platform.is64Bit) {
  // do something else
}

Platform-dependent styles

You may have spotted that our platform.js module exports a name property. This is really useful for applying platform-specific styles. To achieve differing styles per platform, we'll use this name property to add a platform- class to our body element:

var platform = require('./platform');
document.body.classList.add('platform-' + platform.name);

Note that I've used Element.classList here, which isn't supported by a lot of browsers people currently use.
The great thing about NW.js is we can ignore that. We know that 100% of our app's users are using Chromium 43. So then we can change the styling of certain elements based on the current platform. Let's say you have some nice custom button styles and you'd like them to be a bit rounder on Mac OS X. All we have to do is use this platform- class in our CSS (note the descendant selector, since the platform- class lives on the body element):

.button {
  /* your custom styles */
  border-radius: 3px;
}

.platform-mac .button {
  border-radius: 5px;
}

So any elements with the button class look just like the custom buttons you designed (or grabbed from CodePen), but if the platform-mac class exists on an ancestor, i.e. the body element, then the buttons' corners are a little more rounded.

You could easily go a little further and add certain classes depending on the given platform version. You could add a platform-windows-8 class to the body if platform.isWindows8 is true and then make the buttons square-cornered if it exists:

.button {
  /* your custom styles */
  border-radius: 3px;
}

.platform-mac .button {
  border-radius: 5px;
}

.platform-windows-8 .button {
  border-radius: 0;
}

That's it! Feel free to take this, use it, abuse it, build on top of it, or whatever you'd like. Go wild.

About the Author

Adam Lynch is a TeamworkChat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.
Adam Lynch
09 Dec 2015
4 min read

Installing your NW.js app on Windows

NW.js is great for creating desktop applications using Web app technologies. If you're not familiar with NW.js, I'd advise you to read an introductory article like Creating Your First Desktop App With HTML, JS and Node-WebKit to get a good base first. This is a slightly more advanced article intended for anyone interested in distributing their NW.js app to Windows users. I've been through it myself with Teamwork Chat and there are a few things to consider. What exactly should you provide the end user with? How? Where? And why?

What to ship

If you want to keep it simple, you can simply package everything up into a ZIP archive for your users to download. The ZIP should include your app, along with all of the files generated during the build: the dynamic-link libraries (.dlls), the nw.pak file and the other PAK files in the locales directory. All of these extra files are required to be certain your app will function correctly on Windows, even if users already have some of these from a previous installation of Google Chrome, for example.

When I say you need to include "your app" in this archive, I of course mean your myApp.exe if you've used the nw-builder module to build your app (which I recommend). If you want to use the .nw method of running your app, you will have to distribute your app in separate pieces: nw.exe, a .nw archive containing your app code, and myApp.lnk, a shortcut which executes nw.exe with your .nw archive. This is how the popular Popcorn Time app works. You could rename nw.exe to something nicer, but it's not advised, to ensure native module compatibility.

Installers

Giving the user a simple ZIP isn't optimal though. It isn't the most user-friendly option and you wouldn't have much control over what the user does with your app; where they put it, how many copies of your app they have, etc. This is where installers come in, e.g. Inno Setup, NSIS or InstallShield.
The applications provided to build these installers can be configured to grab all of your files and store them wherever you choose on the user's machine, pin your app to their start menu, and a whole host of other options.

Where to store your app

The first place that springs to mind is Program Files, right? Well, if your app has to add, overwrite or remove files from the directory in which it's located, then you'll run into problems with permissions. To get around this, I suggest storing your app in C:\Users\<username>\AppData\Roaming\MyApp like a handful of big-name apps do.

If you really need to store your app in Program Files, then you could theoretically use something like the node-windows node module to elevate the privileges of the current user to a local administrator and execute the problematic filesystem interactions using Windows services. This means though that Windows' UAC (User Account Control) may show for the user, depending on their settings. If you were to use node-windows, this also means that you'd have to pass Windows commands as strings instead of using the fs module.

Another possible location is C:\Users\Default\AppData\Roaming\MyApp. Anything stored here will be copied to C:\Users\<new-username>\AppData\Roaming\MyApp for each new user profile created on the machine. This may or may not suit your application, or you might even want to let the user decide (by having this as an option in the installer).

What to sign

If you're digitally signing your app with a certificate, make sure you sign each and every executable; not only myApp.exe / nw.exe but also any .exe's your app spawns, as well as any executables any of your node_modules dependencies spawn (which aren't already signed by the maintainers). If you were to use the node-webkit-updater module (https://github.com/edjafarov/node-webkit-updater/), for example, it contains an unsigned unzip.exe. Make sure to sign all of these before building your installer, as well as signing the installer itself.
That's all folks! I've had to figure a lot of this stuff out myself by trial and error, so I hope it saves you some time. If I've missed anything, feel free to let me know in a comment below.

About the Author

Adam Lynch is a TeamworkChat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.
Jakub Mandula
07 Dec 2015
5 min read

How to Stream Live Video With Raspberry Pi

Say you want to build a remote controlled robot or a surveillance camera using your Raspberry Pi. What is the best method of transmitting the live footage to your screen? If only there was a program that could do this in a simple way while not frying your Pi. Fortunately, there is a program called mjpg-streamer to save the day. At its core, it grabs images from one input device and forwards them to one or more output devices. We are going to provide it with the video from our web camera and feed it into a self-hosted HTTP server that lets us display the images in the browser.

Dependencies

Let's get started! mjpg-streamer does not come as a standard package and must be compiled manually. But do not let that be the reason for giving up on it. In order to compile mjpg-streamer, we have to install several dependencies.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential libjpeg8-dev imagemagick libv4l-dev

Recently the videodev.h file has been replaced by a newer version, videodev2.h. In order to fix the path references, just create a quick symbolic link.

sudo ln -s /usr/include/linux/videodev2.h /usr/include/linux/videodev.h

Downloading

Now that we have all the dependencies, we can download the mjpg-streamer repository. I am using a GitHub fork by jacksonliam.

git clone git@github.com:jacksonliam/mjpg-streamer.git
cd mjpg-streamer/mjpg-streamer-experimental

Compilation

There are a number of plugins that come with mjpg-streamer. We are going to compile only these:

input_file.so - used to provide a file as input
input_uvc.so - provides input from USB web cameras
input_raspicam.so - provides input from the raspicam module
output_http.so - our HTTP streaming output

Feel free to look through the rest of the plugins folder and explore the other inputs/outputs. Just add their names to the command below.
make mjpg_streamer input_file.so input_uvc.so input_raspicam.so output_http.so Moving mjpg-streamer I recommend moving mjpg-streamer to a more permanent location in your file system. I personally use the /usr/local directory. But feel free to use any other path as long as you adjust any following steps in the setup process. sudo cp mjpg_streamer /usr/local/bin sudo cp input_file.so input_uvc.so input_raspicam.so output_http.so /usr/local/lib/ sudo cp -R www /usr/local/www Finally, reference the plugin directory in your .bashrc file. Simply open it with your favorite text editor. vim ~/.bashrc And append the following line to the file: export LD_LIBRARY_PATH=/usr/local/lib/ Now source your .bashrc and you are good to go. source ~/.bashrc Running mjpg-streamer Running mjpg-streamer is very simple. If you have followed all the steps up to now, all you have to do is run one command. mjpg_streamer -i input_uvc.so -o "output_http.so -w /usr/local/www" The flags mean the following: * -i - input to mjpg-streamer (our USB camera) * -o - output from mjpg-streamer (our HTTP server) * -w - tells the HTTP server where to find the HTML and CSS we moved to /usr/local/www. There are many other flags that you can explore. Here are a few of them. -f - framerate in frames per second -c - protect the HTTP server with a username:password -b - run in background -p - use another port Testing the stream Now that you have your mjpg-streamer running, go to http://ip-of-your-raspberry-pi:8080 and watch the live stream. To just grab the stream, paste the following image tag into your HTML <img src="http://ip-of-your-raspberry-pi:8080/?action=stream"> This should work in most modern browsers. I found there to be a problem with Google Chrome on iOS devices, which is strange because it does work in Safari, which is basically identical to Chrome on iOS. Securing the stream You could leave it at that.
However, as of now your stream is insecure and anyone with the IP address of your Raspberry Pi can access it. We have talked about the -c flag, which can be passed to the output_http.so plugin. However, this does not prevent people from eavesdropping on your connection. We need to use HTTPS. The easiest way to secure your mjpg-stream is using a utility called stunnel. Stunnel is a very lightweight HTTPS proxy. You give it the keys and certificates and two ports. Stunnel then forwards the traffic from one port to the other while silently encrypting it. The installation is very simple. sudo apt-get install stunnel4 Next you have to generate an RSA key and certificate. This is very easy with OpenSSL. openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 720 -nodes Just fill the prompts with your information. This creates a 2048-bit RSA key pair and a self-signed certificate valid for 720 days. Now create the following configuration file stunnel.conf. ; Paths to your key and certificate you generated in the previous step key=./key.pem cert=./cert.pem debug=7 [https] client=no accept=1234 ; Secure port connect=8080 ; mjpg streamer port sslVersion=all Now the last thing to do is start stunnel. sudo stunnel4 stunnel.conf Go to https://ip-of-your-raspberry-pi:1234. Confirm that you trust this certificate (it is self-signed, so most browsers will complain about the security). Conclusion Now you can enjoy a live and secure stream directly from your Raspberry Pi. You can integrate it into your home security system or on a robot. You can also grab the stream and use OpenCV to implement some cool computer vision abilities. I used this on my PiNet Project to build a robot that can be controlled over the internet using a webcam. I am curious what you can come up with! About the author Jakub Mandula is a student interested in anything to do with technology, computers, mathematics or science.
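One follow-up to the embedding tip from earlier: once stunnel fronts the stream, an image tag pointing at the plain HTTP port is no longer the secure path. Assuming the stunnel.conf above (accept port 1234), the snippet becomes:

```html
<!-- Point the browser at stunnel's HTTPS port, not mjpg-streamer's 8080 -->
<img src="https://ip-of-your-raspberry-pi:1234/?action=stream">
```

Note that browsers will refuse to load this inside an HTTPS page until the self-signed certificate has been accepted once.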

Mike Cluck
04 Dec 2015
7 min read

Making a Space Invaders Game

In just 6 quick steps, we're going to make our own Space Invaders game using Psykick2D. All of the code for this can be found here. What is Psykick2D? Psykick2D is a 2D game engine built with Pixi.js and designed with modularity in mind. Game objects are entities which are made up of components, systems contain and act on entities, and layers run systems. Getting Started After you download the latest Psykick2D build, create an HTML page referencing Psykick2D. Your page should look something like this. Psykick2D will be looking for a container with a psykick id to place the game in. Now create a main.js and initialize the game world. var World = Psykick2D.World; World.init({ width: 400, height: 500, backgroundColor: '#000' }); Reload the page and you should see something like this. Blank screens aren't very exciting, so let's add some sprites. Drawing Sprites You can find the graphics used here (special thanks to Jacob Zinman-Jeanes for providing these). Before we can use them though, we need to preload them. Add a preload option to the world initialization like so: World.init({ ... preload: [ 'sprites/player.json', 'sprites/enemy1.json', 'sprites/enemy2.json', 'sprites/enemy3.json', 'sprites/enemy4.json' ] }); Accessing sprites is as easy as referencing their frame name (given in the .json files) in a sprite component. To make an animated player, we just have to assemble the right parts in an entity.
(I suggest placing these in a factory.js file) var Sprite = Psykick2D.Components.GFX.Sprite, Animation = Psykick2D.Components.GFX.Animation; function createPlayer(x, y) { var player = World.createEntity(), // Generate a new entity sprite = new Sprite({ frameName: 'player-1', x: x, y: y, width: 64, height: 29, pivot: { // The original image is sideways x: 64, y: 29 }, rotation: (270 * Math.PI) / 180 // 270 degrees in radians }), animation = new Animation({ maxFrames: 3, // zero-indexed frames: [ 'player-1', 'player-2', 'player-3', 'player-4' ] }); player.addComponent(sprite); player.addComponent(animation); return player; } Entities are composed of components. All an entity needs is the right components and they'll work with any system. Because of this, creating enemies looks almost exactly the same, just using the enemy sprites. Once those components are attached, we just add the entities to a system, then the system to a layer. The rest is taken care of automatically. var mainLayer = World.createLayer(), spriteSystem = new Psykick2D.Systems.Render.Sprite(), animationSystem = new Psykick2D.Systems.Behavior.Animate(); var player = createPlayer(210, 430); spriteSystem.addEntity(player); animationSystem.addEntity(player); mainLayer.addSystem(spriteSystem); mainLayer.addSystem(animationSystem); World.pushLayer(mainLayer); If you repeat the process for the enemies then you'll end up with a result like what you see here. With your ship on screen, let's add some controls. (source)
var BehaviorSystem = Psykick2D.BehaviorSystem, Keyboard = Psykick2D.Input.Keyboard, Keys = Psykick2D.Keys, SPEED = 100; var PlayerControl = function() { this.player = null; BehaviorSystem.call(this); }; Psykick2D.Helper.inherit(PlayerControl, BehaviorSystem); PlayerControl.prototype.update = function(delta) { var velocity = 0; // Give smooth movement by using the change in time if (Keyboard.isKeyDown(Keys.Left)) { velocity = -SPEED * delta; } else if (Keyboard.isKeyDown(Keys.Right)) { velocity = SPEED * delta; } var player = this.player.getComponent('Sprite'); player.x += velocity; // Don't leave the screen if (player.x < 15) { player.x = 15; } else if (player.x > 340) { player.x = 340; } }; Since there's only one player, we'll just set it directly instead of using the addEntity method. // main.js ... var controls = new PlayerControl(); controls.player = player; mainLayer.addSystem(controls); ... Now that the player can move, we should level the playing field a little bit and give the enemies some brains. (source) Enemy AI In the original Space Invaders, the group of aliens would move left to right and then move closer to the player whenever they hit the edge. Since systems only accept entities with the right components, let's tag the enemies as enemies. function createEnemy(x, y) { var enemy = World.createEntity(); ... enemy.addComponentAs(true, 'Enemy'); return enemy; } Creating the enemy AI itself is pretty easy.
var EnemyAI = function() { BehaviorSystem.call(this); this.requiredComponents = ['Enemy']; this.speed = 30; this.direction = 1; }; Psykick2D.Helper.inherit(EnemyAI, BehaviorSystem); EnemyAI.prototype.update = function(delta) { var minX = 1000, maxX = -1000, velocity = this.speed * this.direction * delta; for (var i = 0; i < this.actionOrder.length; i++) { var enemy = this.actionOrder[i].getComponent('Sprite'); enemy.x += velocity; // Prevent it from going outside the bounds if (enemy.x < 15) { enemy.x = 15; } else if (enemy.x > 340) { enemy.x = 340; } // Track the min/max minX = Math.min(minX, enemy.x); maxX = Math.max(maxX, enemy.x); } // If they hit the boundary if (minX <= 15 || maxX >= 340) { // Flip around and speed up this.direction = this.direction * -1; this.speed += 1; // Move the row down for (var i = 0; i < this.actionOrder.length; i++) { var enemy = this.actionOrder[i].getComponent('Sprite'); enemy.y += enemy.height / 2; } } }; Like before, we just add the correct entities to the system and add the system to the layer. var enemyAI = new EnemyAI(); enemyAI.addEntity(enemy1); enemyAI.addEntity(enemy2); ... mainLayer.addSystem(enemyAI); Incoming! Aliens are now raining down from the sky. We need a way to stop these invaders from space. (source) Set phasers to kill To start shooting alien scum, add the bullet sprite to the preload list preload: [ ... 'sprites/bullet.json' ] then generate a bullet just like you did for the player. Since the original only let one bullet exist on screen at once, we're going to do the same. So, in your PlayerControl system, give it a new property for bullet and we'll add some shooting ability. var PlayerControl = function() { BehaviorSystem.call(this); this.player = null; this.bullet = null; }; ... PlayerControl.prototype.update = function(delta) { ...
var bullet = this.bullet.getComponent('Sprite'); // If the bullet is off-screen and the player pressed the spacebar if (bullet.y <= -bullet.height && Keyboard.isKeyDown(Keys.Space)) { // Fire! bullet.x = player.x - 18; bullet.y = player.y; } else if (bullet.y > -bullet.height) { // Move the bullet up bullet.y -= 250 * delta; } }; Now we just need to draw the bullet and attach it to the PlayerControl system. var bullet = createBullet(0, -100); spriteSystem.addEntity(bullet); controls.bullet = bullet; And just like that you've got yourself a working gun. But you can't quite destroy those aliens yet. We need a way of making the bullet collide with the aliens. (source) Final Step Psykick2D has a couple of different collision structures built in. For our purposes we're going to use a grid. But in order to keep everything in sync, we want a dedicated physics system. So we're going to give our sprite components new properties, newX and newY, and set the new position there. Example: player.newX += velocity; To create a physics system, simply extend the BehaviorSystem and give it a collision structure. var Physics = function() { BehaviorSystem.call(this); this.requiredComponents = ['Sprite']; this.grid = new Psykick2D.DataStructures.CollisionGrid({ width: 400, height: 500, cellSize: 100, componentType: 'Sprite' // Do all collision checks using the sprite }); }; Psykick2D.Helper.inherit(Physics, BehaviorSystem); There's a little bit of work involved, so you can view the full source here. What's important is that we check what kind of entity we're colliding with (entity.hasComponent('Bullet')); then we can destroy it by removing it from the layer. Here's the final product of all of your hard work. A fully functional Space Invaders-like game! Psykick2D has a lot more built in. Go ahead and really polish it up! (final source) About the Author Mike Cluck is a software developer interested in game development. He can be found on Github at MCluck90.
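A closing aside on the shooting code above: the single-bullet rule is really a two-state machine, where the bullet is "ready" while parked off-screen and "in flight" otherwise. A framework-free sketch of that logic (the offsets and speed mirror the article; the function name is mine):

```javascript
var BULLET_SPEED = 250; // pixels per second, as in the article

function updateBullet(bullet, player, firePressed, delta) {
  var offScreen = bullet.y <= -bullet.height;
  if (offScreen && firePressed) {
    // Fire: respawn the bullet at the ship's nose
    bullet.x = player.x - 18;
    bullet.y = player.y;
  } else if (!offScreen) {
    // In flight: travel upward and ignore further fire presses
    bullet.y -= BULLET_SPEED * delta;
  }
}

var bullet = { x: 0, y: -100, height: 16 };
var player = { x: 210, y: 430 };

updateBullet(bullet, player, true, 1 / 60); // fires: bullet jumps to the ship
console.log(bullet.y); // 430
updateBullet(bullet, player, true, 1 / 60); // already in flight: moves up
console.log(bullet.y); // slightly less than 430
```

Parking the bullet off-screen instead of creating and destroying entities is the same trick the article uses: the "is it available?" check falls out of the position for free.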
Mike Cluck
02 Dec 2015
7 min read

How to Do Your Own Collision Detection

In almost every single game you're going to have different objects colliding. If you've worked with any game engines, this is generally taken care of for you. But what if you want to write your own? For starters, we're just going to check for collisions between rectangles. In many cases, you'll want rectangles even for oddly shaped objects. Rectangles are very easy to construct, very easy to check for collisions, and take up very little memory. We can define a rectangle as such: var rectangle = { x: xPosition, y: yPosition, w: width, h: height } Checking for collisions between two rectangles can be broken into two checks: compare the right side against the left and the bottom side against the top; the rectangles only collide when both checks pass. A simple rectangle collision function might look like this: function isColliding(A, B) { var horizontalCollision = (A.x + A.w >= B.x && B.x + B.w >= A.x); var verticalCollision = (A.y + A.h >= B.y && B.y + B.h >= A.y); return horizontalCollision && verticalCollision; } Now that all of our game objects have collision rectangles, how do we decide which ones are colliding? Check All The Things! Why not just check everything against everything? The algorithm is really simple: for (var i = 0; i < rectangles.length; i++) { var A = rectangles[i]; for (var j = 0; j < rectangles.length; j++) { // Don't check a rectangle against itself if (j === i) { continue; } var B = rectangles[j]; if (isColliding(A, B)) { A.isColliding = true; B.isColliding = true; } } } Here's a working sample of this approach. The problem here is that this approach becomes drastically slower as the number of rectangles increases. If you're familiar with Big-O then this is O(n²) which is a big no-no. Try moving around the slider in the example to see how this affects the number of collision checks. Just by looking at the screen you can easily tell that there are rectangles which have no chance of touching each other, such as those on opposite sides of the screen.
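To convince yourself that both axis checks must hold at once, it helps to run the predicate on a couple of hand-picked rectangles: two rectangles that share vertical range but sit far apart horizontally must not count as colliding. A quick standalone check in plain Node, with the two axis tests combined by a logical AND:

```javascript
function isColliding(A, B) {
  var horizontalCollision = (A.x + A.w >= B.x && B.x + B.w >= A.x);
  var verticalCollision = (A.y + A.h >= B.y && B.y + B.h >= A.y);
  return horizontalCollision && verticalCollision; // both axes must overlap
}

var a = { x: 0, y: 0, w: 10, h: 10 };
var b = { x: 5, y: 5, w: 10, h: 10 };  // overlaps a on both axes
var c = { x: 20, y: 0, w: 10, h: 10 }; // same rows as a, but far to the right

console.log(isColliding(a, b)); // true
console.log(isColliding(a, c)); // false: vertical ranges overlap, horizontal don't
```

If the two checks were combined with OR instead, `a` and `c` would wrongly register as a collision, since their vertical ranges line up.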
So to make this more efficient, we can only check against objects close to each other. How do we decide which ones could touch? Enter: Spatial Partitioning Spatial partitioning is the process of breaking up the space into smaller chunks and only using objects residing in similar chunks for comparisons. This gives us a much better chance of only checking objects that could collide. The simplest way of doing this is to break the screen into a grid. Uniform Grid When you use a uniform grid, you divide the screen into equal-sized blocks. Each of the chunks, or buckets as they're commonly called, will contain a list of objects which reside in them. Deciding which bucket to place them in is very simple: function addRectangle(rect) { // cellSize is the width/height of a bucket var startX = Math.floor(rect.x / cellSize); var endX = Math.floor((rect.x + rect.w) / cellSize); var startY = Math.floor(rect.y / cellSize); var endY = Math.floor((rect.y + rect.h) / cellSize); for (var y = startY; y <= endY; y++) { for (var x = startX; x <= endX; x++) { // Make sure this rectangle isn't already in this bucket if (grid[y][x].indexOf(rect) === -1) { grid[y][x].push(rect); } } } } A working example can be found here. Simple grids are actually extremely efficient. Mapping an object to a bucket is very straightforward, which means adding and removing objects from the grid is very fast. There is one downside to using a grid though. It requires you to know the size of the world from the beginning, and if that world is too big, you'll consume too much memory just creating the buckets. What we need for those situations is a more dynamic structure. Quad Tree Imagine your whole screen as one bucket. Well, the problem with the first approach was that we would have to check against too many objects. So what if that bucket automatically broke into smaller buckets whenever we put too many objects into it? That's exactly how a quad tree works.
A quad tree limits how many objects can exist in each bucket and, when that limit is broken, it subdivides into four distinct quadrants. Each of these quadrants is also a quad tree, so the process can continue recursively. We can define a quad tree like this: function QuadTree(x, y, w, h, depth) { this.x = x; this.y = y; this.w = w; this.h = h; this.depth = depth; this.objects = []; this.children = []; } As you can see, a quad tree is also a rectangle. You can think of a quad tree like those little Russian stacking dolls. But rather than finding just one doll inside of each, you'll find four different dolls inside of each one. Since these dolls are rectangular, we just check to see if an object intersects one. If it does, we put the object inside of it. QuadTree.prototype.insert = function(obj) { // Only add object if we're at max depth // or haven't exceeded the max objects count var atMaxDepth = (this.depth > MAX_DEPTH); var noChildren = (this.children.length === 0); var canAddMore = (this.objects.length < MAX_OBJECTS); if (atMaxDepth || (noChildren && canAddMore)) { this.objects.push(obj); } else if (this.children.length > 0) { // If there are children, add to them for (var i = 0; i < 4; i++) { var child = this.children[i]; if (isColliding(child, obj)) { child.insert(obj); } } } else { // Split into quadrants var halfWidth = this.w / 2; var halfHeight = this.h / 2; var top = this.y; var bottom = this.y + halfHeight; var left = this.x; var right = this.x + halfWidth; var newDepth = this.depth + 1; this.children.push(new QuadTree(right, top, halfWidth, halfHeight, newDepth)); this.children.push(new QuadTree(left, top, halfWidth, halfHeight, newDepth)); this.children.push(new QuadTree(left, bottom, halfWidth, halfHeight, newDepth)); this.children.push(new QuadTree(right, bottom, halfWidth, halfHeight, newDepth)); // Add the new object to simplify the next section this.objects.push(obj); // Move each of the objects into the children for (var i = 0; i < 4; i++) { var
child = this.children[i]; for (var j = 0; j < this.objects.length; j++) { var otherObj = this.objects[j]; if (isColliding(child, otherObj)) { child.insert(otherObj); } } } // Clear out the objects from this node this.objects = []; } }; At this point any given object will only reside in nodes in which it is relatively close to other objects. To check for collisions, we only need to check against this subset of objects, like how we did with the grid-based collision. QuadTree.prototype.getCollisions = function(obj) { var collisions = []; // If there are children, get the collisions from there if (this.children.length > 0) { for (var i = 0; i < 4; i++) { var child = this.children[i]; if (isColliding(child, obj)) { // Concatenate together the results collisions = collisions.concat(child.getCollisions(obj)); } } } else { // Otherwise, check against each object for a collision for (var i = 0; i < this.objects.length; i++) { var other = this.objects[i]; // Don't compare an object with itself if (other === obj) { continue; } if (isColliding(other, obj)) { collisions.push(other); } } } return collisions; }; A working example of this is available here. Traversing this tree takes time, so why would we use this instead of a collision grid? Because a quad tree can grow and change as you need it. This means you don't need to know how big your entire world is at the beginning of the game and load it all into memory. Just make sure to tweak your cell size and max objects. Conclusion Now that you've got your feet wet, you can go out and write your own collision detection systems or at least have a better understanding of how it all works under the hood. There are many more ways of checking collisions, such as using octrees and ray casting, and every approach has its benefits and drawbacks, so you'll have to figure out what works best for you and your game. Now go make something amazing! About the Author Mike Cluck is a software developer interested in game development.
He can be found on Github at MCluck90.
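As a final aside on the quad tree: the subdivision step is the only real geometry involved, since each split produces four equal quadrants whose widths and heights halve. Pulled out on its own (the helper name is mine; the quadrant order matches the insert method above):

```javascript
// Compute the four child rectangles of a quad tree node
function subdivide(node) {
  var halfWidth = node.w / 2;
  var halfHeight = node.h / 2;
  return [
    { x: node.x + halfWidth, y: node.y, w: halfWidth, h: halfHeight },             // top-right
    { x: node.x, y: node.y, w: halfWidth, h: halfHeight },                         // top-left
    { x: node.x, y: node.y + halfHeight, w: halfWidth, h: halfHeight },            // bottom-left
    { x: node.x + halfWidth, y: node.y + halfHeight, w: halfWidth, h: halfHeight } // bottom-right
  ];
}

var children = subdivide({ x: 0, y: 0, w: 400, h: 400 });
console.log(children.length); // 4
console.log(children[0]);     // { x: 200, y: 0, w: 200, h: 200 }
```

Since every level halves both dimensions, a node at depth d covers 1/4^d of the root's area, which is why deep trees only form where objects actually cluster.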

Xavier Bruhiere
26 Nov 2015
8 min read

React Dashboard and Visualizing Data

I spent the last six months working on data analytics and machine learning to feed my curiosity and prepare for my new job. It is a challenging mission and I chose to give up for a while on my current web projects to stay focused. Back then, I was coding a dashboard for an automated trading system, powered by an exciting new framework from Facebook: React. In my opinion, Web Components were the way to go and React seemed gentler with my brain than, say, Polymer. One just needed to carefully design component boundaries, properties and states and bam, you got a reusable piece of the web to plug anywhere. Beautiful. This is quite a naive way to put it of course but, for an MVP, it actually kind of worked. Fast forward to last week: I needed a new dashboard to monitor various metrics from my shiny new infrastructure. Specialized requirements kept me away from a full-fledged solution like the InfluxDB and Grafana combo, so I naturally stared at my old code. Well, it turned out I did not reuse a single line of code. Since the last time I spent in web development, new tools, frameworks and methodologies had taken over the world: es6 (and transpilers), isomorphic applications, one-way data flow, hot reloading, module bundlers, ... Even starter kits are remarkably complex (at least for me) and I got overwhelmed. But those new toys are also truly empowering and I persevered. In this post, we will learn to leverage them, build the simplest dashboard possible and pave the way toward modern, real-time metrics monitoring. Tooling & Motivations I think the point of so much tooling is productivity and complexity management. New single-page applications usually involve a significant number of moving parts: front and backend development, data management, scaling, appealing UX, ... Isomorphic webapps with nodejs and es6 try to harmonize this workflow by sharing one readable language across the stack.
Node already sells the "javascript everywhere" argument, but here it goes even further, with code that can be executed both on the server and in the browser, indifferently. Team work and reusability are improved, as well as SEO (Search Engine Optimization) when rendering HTML on the server side. Yet, an application's codebase can turn into a massive mess, and that's where Web Components come in handy. Providing clear contracts between modules, a developer is able to focus on a subpart of the UI with an explicit definition of its parameters and states. This level of abstraction makes the application much easier to navigate, maintain and reuse. Working with React gives a sense of clarity with components as Javascript objects. Lifecycle and behavior are explicitly detailed by pre-defined hooks, while properties and states are distinct attributes. We still need to glue all of those components and their dependencies together. That's where npm, Webpack and Gulp join the party. Npm is the de facto package manager for nodejs, and more and more for frontend development. What's more, it can run scripts for you and spare you from using a task runner like Gulp. Webpack, meanwhile, bundles pretty much anything thanks to its loaders. Feed it an entrypoint which requires your js, jsx, css, whatever ... and it will transform and package them for the browser. Given the steep learning curve of modern full-stack development, I hope you can see the value of those tools. The last pieces I would like to introduce for our little project are metrics-graphics and react-sparklines (which I won't actually describe, but which is worth noting for our purpose). Both are neat frameworks to visualize data and play nicely with React, as we are going to see now. Graph Component When building component-based interfaces, the first things to define are what subparts of the UI those components are. Since we are starting with a spartan implementation, we are only going to define a Graph.
// Graph.jsx // new es6 import syntax import React from 'react'; // graph renderer import MG from 'metrics-graphics'; export default class Graph extends React.Component { // called after the `render` method below componentDidMount () { // use d3 to load data from metrics-graphics samples // (an arrow function keeps `this` bound to the component) d3.json('node_modules/metrics-graphics/examples/data/confidence_band.json', (data) => { data = MG.convert.date(data, 'date'); MG.data_graphic({ title: this.props.title, data: data, format: 'percentage', width: 600, height: 200, right: 40, target: '#confidence', show_secondary_x_label: false, show_confidence_band: ['l', 'u'], x_extended_ticks: true }); }); } render () { // render the element targeted by the graph return <div id="confidence"></div>; } } This code, a trendy combination of es6 and jsx, defines in the DOM a standalone graph from the json data in confidence_band.json, which I stole from the official Mozilla examples. Now let's actually mount and render the DOM in the main entrypoint of the application (the one I mentioned above with Webpack). // main.jsx // tell webpack to bundle style along with the javascript import 'metrics-graphics/dist/metricsgraphics.css'; import 'metrics-graphics/examples/css/metricsgraphics-demo.css'; import 'metrics-graphics/examples/css/highlightjs-default.css'; import React from 'react'; import Graph from './components/Graph'; function main() { // it is recommended to not directly render on body var app = document.createElement('div'); document.body.appendChild(app); // key/value pairs are available under `this.props` hash within the component React.render(<Graph title="Keep calm and build a dashboard" />, app); } main(); Now that we have defined the web page in plain javascript, it's time for our tools to take over and actually build it. Build workflow This is mostly a matter of configuration. First, create the following structure. $ tree .
├── app │ ├── components │ │ ├── Graph.jsx │ ├── main.jsx ├── build └── package.json Where package.json is defined like below. { "name": "react-dashboard", "scripts": { "build": "TARGET=build webpack", "dev": "TARGET=dev webpack-dev-server --host 0.0.0.0 --devtool eval-source-map --progress --colors --hot --inline --history-api-fallback" }, "devDependencies": { "babel-core": "^5.6.18", "babel-loader": "^5.3.2", "css-loader": "^0.15.1", "html-webpack-plugin": "^1.5.2", "node-libs-browser": "^0.5.2", "react-hot-loader": "^1.2.7", "style-loader": "^0.12.3", "webpack": "^1.10.1", "webpack-dev-server": "^1.10.1", "webpack-merge": "^0.1.2" }, "dependencies": { "metrics-graphics": "^2.6.0", "react": "^0.13.3" } } A quick npm install will download every package we need for development and production. Two scripts are even defined to build a static version of the site, or serve a dynamic one that will be updated on file-change detection. This formidable feature becomes essential once tasted. But we have yet to configure Webpack to enjoy it.
var path = require('path'); var HtmlWebpackPlugin = require('html-webpack-plugin'); var webpack = require('webpack'); var merge = require('webpack-merge'); // discern development server from static build var TARGET = process.env.TARGET; // webpack prefers absolute paths var ROOT_PATH = path.resolve(__dirname); // common environments configuration var common = { // input the main.jsx we wrote earlier entry: [path.resolve(ROOT_PATH, 'app/main')], // import requirements with following extensions resolve: { extensions: ['', '.js', '.jsx'] }, // define the single bundle file output by the build output: { path: path.resolve(ROOT_PATH, 'build'), filename: 'bundle.js' }, module: { // also support css loading from main.jsx loaders: [ { test: /\.css$/, loaders: ['style', 'css'] } ] }, plugins: [ // automatically generate a standard index.html to attach on the React app new HtmlWebpackPlugin({ title: 'React Dashboard' }) ] }; // production specific configuration if(TARGET === 'build') { module.exports = merge(common, { module: { // compile es6 jsx to standard es5 loaders: [ { test: /\.jsx?$/, loader: 'babel?stage=1', include: path.resolve(ROOT_PATH, 'app') } ] }, // optimize output size plugins: [ new webpack.DefinePlugin({ 'process.env': { // This has effect on the react lib size 'NODE_ENV': JSON.stringify('production') } }), new webpack.optimize.UglifyJsPlugin({ compress: { warnings: false } }) ] }); } // development specific configuration if(TARGET === 'dev') { module.exports = merge(common, { module: { // also transpile jsx, and use react-hot-loader to automagically update the web page on changes loaders: [ { test: /\.jsx?$/, loaders: ['react-hot', 'babel?stage=1'], include: path.resolve(ROOT_PATH, 'app'), }, ], }, }); } Webpack configuration can be hard to swallow at first but, given the huge amount of transformations to operate, this style scales very well. Plus, once set up, the development environment becomes remarkably productive.
To convince yourself, run webpack-dev-server and reach localhost:8080/assets/bundle.js in your browser. Tweak the title argument in main.jsx, save the file and watch the browser update itself. We are ready to build new components and extend our modular dashboard. Conclusion We condensed in a few paragraphs a lot of what makes the current web ecosystem effervescent. I strongly encourage the reader to deepen their knowledge on those matters and consider this post for what it is: an introduction. Web components, like micro-services, are fun, powerful and bleeding edge. But also complex, fast-moving and unstable. The tooling, especially, is impressive. Spend the time to master them and craft something cool! About the Author Xavier Bruhiere is a Lead Developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.

Xavier Bruhiere
24 Nov 2015
6 min read

Go Extensions: Fetching Data and More

The choice of Go for my last project was driven by its ability to cross-compile code into a static binary. A script pushes stable versions on Github releases or Bintray and anyone can wget the package and use it right away. One of the important distinctions between Influx and some other time series solutions is that it doesn’t require any other software to install and run. This is one of the many wins that Influx gets from choosing Go as its implementation language. - Paul Dix This "static linking" awesomeness has a cost though. There is no evaluation at runtime; every feature is frozen once compiled. However, a developer might need more flexibility. In this post, we will study several use-cases and implementations where Go dynamic extensions unlock great features for your projects. Configuration Gulp is a great example of the benefits of configuration as code (more control, easier to extend). Thanks to gopher-lua, we're going to implement this behavior. As a first step, let's write a skeleton for our investigations. package main import ( "log" "github.com/yuin/gopher-lua" ) // LuaPlayground exposes a bridge to Lua. type LuaPlayground struct { VM *lua.LState } func main() { // initialize lua VM 5.1 and compiler L := lua.NewState() defer L.Close() } Gopher-lua lets us call Lua code from Go and share information between the two environments. The idea is to define the app configuration in a convenient scripting language, like the one below. -- save as conf.lua print("[lua] defining configuration") env = os.getenv("ENV") log = "debug" plugins = { "plugin.lua" } Now we can read those variables from Go.
```go
// DummyConf is a fake configuration we want to fill
type DummyConf struct {
	Env      string
	LogLevel string
	Plugins  *lua.LTable
}

// Config evaluates a Lua script to build a configuration structure
func (self *LuaPlayground) Config(filename string) *DummyConf {
	if err := self.VM.DoFile(filename); err != nil {
		panic(err)
	}
	return &DummyConf{
		Env:      self.VM.GetGlobal("env").String(),
		LogLevel: self.VM.GetGlobal("log").String(),
		Plugins:  self.VM.GetGlobal("plugins").(*lua.LTable),
	}
}

func main() {
	// [...]
	playground := LuaPlayground{VM: L}
	conf := playground.Config("conf.lua")
	log.Printf("loaded configuration: %v\n", conf)
}
```

Using a high-level scripting language gives us great flexibility to initialize an application. While we only exposed simple assignments here, properties could be fetched from services or computed at runtime.

Scripting

Heka's sandbox takes a broader approach to Go plugins. It offers an isolated environment where developers have access to specific methods and data to control Heka's behavior. This strategy exposes a higher-level interface to contributors without recompilation. The following code snippet extends our existing LuaPlayground structure with these skills.

```go
// Log is a Go function that Lua will be able to run
func Log(L *lua.LState) int {
	// look up the first argument
	msg := L.ToString(1)
	log.Println(msg)
	return 0 // number of results pushed back to Lua
}

// Scripting exports Go objects to the Lua sandbox
func (self *LuaPlayground) Scripting(filename string) {
	// expose the log function within the sandbox
	self.VM.SetGlobal("log", self.VM.NewFunction(Log))
	if err := self.VM.DoFile(filename); err != nil {
		panic(err)
	}
}

func main() {
	// [...]
	playground.Scripting("script.lua")
}
```

Lua code is now able to call the exported Go function Log.

```lua
-- save as script.lua
log("Hello from lua !")
```

This is obviously a bare-bones example intended to show the way. Following the same idiom, gopher-lua lets us export complete modules, channels, and Go structures to the Lua runtime.
Therefore we can hide and compile implementation details as a Go library, while business logic and data manipulation are left to a productive and safe scripting environment.

This idea leads us toward another pattern: hooks. As an illustration, Git is able to execute arbitrary scripts when such files are found under a specific directory, on specific events (like running tests before pushing code). In the same spirit, we could program a routine to list and execute files in a pre-defined directory. Moving a script into this folder would therefore activate a new hook. This is also the strategy Dokku leverages.

Extensions

This section turns things upside down. The next piece of code expects a Lua script to define its methods. Those components become plug-and-play extensions one could replace, activate, or deactivate.

```go
// [...]

// Call executes a function defined in the Lua namespace
func (self *LuaPlayground) Call(method string, arg string) string {
	if err := self.VM.CallByParam(lua.P{
		Fn:      self.VM.GetGlobal(method),
		NRet:    1,
		Protect: true,
	}, lua.LString(arg) /* method argument */); err != nil {
		panic(err)
	}
	// returned value
	ret := self.VM.Get(-1)
	// remove last value
	self.VM.Pop(1)
	return ret.String()
}

// Extend plugs new capabilities into this program by loading the given script
func (self *LuaPlayground) Extend(filename string) {
	if err := self.VM.DoFile(filename); err != nil {
		panic(err)
	}
	log.Printf("Identity: %v\n", self.Call("lookupID", "mario"))
}

func main() {
	// [...]
	playground.Extend("extension.lua")
}
```

An interesting use case for such a feature would be swappable backends. A service discovery application, for example, might use a key/value storage. One extension would perform requests against Consul, while another one would fetch data from etcd. This setup would allow easier integration into existing infrastructures.
Alternatives

Executing arbitrary code at runtime brings the flexibility we can expect from languages like Python or Node.js, and popular projects have developed their own frameworks.

Hashicorp reuses the same technique throughout its Go projects. Plugins are standalone binaries that only a master process can run. Once launched, both parties use RPC to communicate data and commands. This approach has proved to be a great fit in the open-source community, enabling experts to contribute drivers for third-party services.

Another take on Go plugins was recently pushed by InfluxDB with Telegraf, a server agent for reporting metrics. Much closer to OOP, plugins must implement an interface provided by the project. While we still need to recompile to register new plugins, it eases development by providing a dedicated API.

Conclusion

The release of Docker 1.7 and previous debates show the potential of Go extensions, especially in open-source projects where the author wants other developers to contribute features in a manageable fashion. This article skimmed several approaches to bypassing static Go binaries and should feed some further ideas. Being able to just drop in an executable and instantly use a new tool is a killer feature of the language, so one should be careful if scripts become dependencies required to make it work. However, dynamic code execution and external plugins keep development modular and ease developer on-boarding. With those trade-offs in mind, the use cases we explored could unlock worthy features for your next Go project.

About the Author

Xavier Bruhiere is a Lead Developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high intensity sports.
Anna Gerber
23 Nov 2015
6 min read

The Internet of Peas? Gardening with JavaScript Part 2

In this two-part article series, we're building an internet-connected garden bot using JavaScript. In part one, we set up a Particle Core board, created a Johnny-Five project, and ran a Node.js program to read raw values from a soil moisture sensor.

Adding a light sensor

Let's connect another sensor. We'll extend our circuit to add a photo resistor to measure the ambient light levels around our plants. Connect one lead of the photo resistor to ground, and the other to analog pin 4, with a 1K pull-down resistor from A4 to the 3.3V pin. The value of the pull-down resistor determines the raw readings from the sensor. We're using a 1K resistor so that the sensor values don't saturate under tropical sun conditions. For plants kept inside a dark room, or in a less sunny climate, a 10K resistor might be a better choice. Read more about how pull-down resistors work with photo resistors at AdaFruit.

Now, in our board's ready callback function, we add another sensor instance, this time on pin A4:

```javascript
var lightSensor = new five.Sensor({
  pin: "A4",
  freq: 1000
});

lightSensor.on("data", function() {
  console.log("Light reading " + this.value);
});
```

For this sensor we are logging the sensor value every second, not just when it changes. We can control how often sensor events are emitted by specifying the number of milliseconds in the freq option when creating the sensor. The threshold config option can be used to control when the change callback occurs.

Calibrating the soil sensor

The soil sensor uses the electrical resistance between two probes to provide a measure of the moisture content of the soil. We're using a commercial sensor, but you could make your own simply using two pieces of wire spaced about an inch apart (using galvanized wire to avoid rust). Water is a good conductor of electricity, so a low reading means that the soil is moist, while a high amount of resistance indicates that the soil is dry.
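As a rough mental model of the threshold option (an illustration only, not Johnny-Five's actual implementation), a change event can be thought of as a stateful filter that only reports a reading when it has moved far enough from the last reported one:

```javascript
// Conceptual sketch of threshold-based change detection: report a value
// only when it differs from the last reported value by at least `threshold`.
function makeChangeDetector(threshold) {
  var last = null;
  return function (value) {
    if (last === null || Math.abs(value - last) >= threshold) {
      last = value; // remember the last value we reported
      return true;  // the "change" callback would fire here
    }
    return false;   // jitter below the threshold is ignored
  };
}

var changed = makeChangeDetector(5);
changed(100); // true: the first reading always reports
changed(102); // false: only moved by 2 since the last report
changed(106); // true: moved by 6 since the last report
```

Filtering like this keeps noisy analog sensors from flooding your program with events that don't represent a real change in conditions.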
Because these aren't very sophisticated sensors, the readings will vary from sensor to sensor. In order to do anything meaningful with the readings within our application, we'll need to calibrate our sensor. Calibrate by making a note of the sensor values for very dry soil, wet soil, and in between, to get a sense of what the optimal range of values should be. For an imprecise sensor like this, it also helps to map the raw readings onto ranges that can be used to display different messages (e.g. very dry, dry, damp, wet) or trigger different actions. The scale method on the Sensor class can come in handy for this. For example, we could convert the raw readings from 0 - 1023 to a 0 - 5 scale:

```javascript
soilSensor.scale(0, 5).on("change", function() {
  console.log(this.value);
});
```

However, the raw readings for this sensor range from about 50 (wet) to 500 (fairly dry soil). If we're only interested in when the soil is dry, i.e. when readings are above 300, we could use a conditional statement within our callback function, or use the within method so that the function is only triggered when the values are inside a range of values we care about.

```javascript
soilSensor.within([ 300, 500 ], function() {
  console.log("Water me!");
});
```

Our raw soil sensor values will vary depending on the temperature of the soil, so this type of sensor is best for indoor plants that aren't exposed to weather extremes. If you are installing a soil moisture sensor outdoors, consider adding a temperature sensor and then calibrating for values at different temperature ranges.

Connecting more sensors

We have seven analog and seven digital IO pins on the Particle Core, so we could attach more sensors, perhaps more of the same type to monitor additional planters, or different types of sensors to monitor additional conditions. There are many kinds of environmental sensors available through online marketplaces like AliExpress and ebay.
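To make the calibration step concrete, here is a small sketch of the mapping described above. The cutoff values and function names are hypothetical placeholders; substitute the readings you noted for your own sensor:

```javascript
// Map a raw soil reading onto a human-readable label. The cutoffs below
// are illustrative guesses -- calibrate against your own sensor (roughly
// 50 = wet and 500 = fairly dry for the one used in this article).
function describeMoisture(raw) {
  if (raw < 100) return "wet";
  if (raw < 200) return "damp";
  if (raw < 300) return "dry";
  return "very dry";
}

// The linear remap that a scale(0, 5) call performs, written out explicitly:
function scaleReading(value, inLow, inHigh, outLow, outHigh) {
  return outLow + ((value - inLow) / (inHigh - inLow)) * (outHigh - outLow);
}

describeMoisture(80);             // "wet"
describeMoisture(450);            // "very dry"
scaleReading(512, 0, 1023, 0, 5); // roughly 2.5
```

Keeping the mapping in a pure function like this makes it easy to adjust the cutoffs after each calibration pass without touching the sensor wiring code.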
These include sensors for temperature, humidity, dust, gas, water depth, particulate detection, etc. Some of these sensors are straightforward analog or digital devices that can be used directly with the Johnny-Five Sensor class, as we have with our soil and light sensors. The Johnny-Five API also includes subclasses like Temperature, with controllers for some widely used sensor components. However, some sensors use protocols like SPI, I2C or OneWire, which are not as well supported by Johnny-Five across all platforms. This is always improving; for example, I2C support was added to the Particle-IO plugin in October 2015. Keep an eye on I2C component backpacks, which are providing support for additional sensors via secondary microcontrollers.

Automation

If you are gardening at scale, or going away on an extended vacation, you might want more than just monitoring. You might want to automate some basic garden maintenance tasks, like turning on grow lights on overcast days, or controlling a pump to water the plants when the soil moisture level gets low. This can be achieved with relays. For example, we can connect a relay with a daylight bulb to a digital pin, and use it to turn lights on in response to the light readings, but only between certain hours:

```javascript
var five = require("johnny-five");
var Particle = require("particle-io");
var moment = require("moment");

var board = new five.Board({
  io: new Particle({
    token: process.env.PARTICLE_TOKEN,
    deviceId: process.env.PARTICLE_DEVICE_ID
  })
});

board.on("ready", function() {
  var lightSensor = new five.Sensor("A4");
  var lampRelay = new five.Relay(2);

  lightSensor.scale(0, 5).on("change", function() {
    console.log("light reading is " + this.value);
    // clone before mutating: moment's endOf/startOf modify the instance in place
    var now = moment();
    var nightCurfew = now.clone().endOf('day').subtract(4, 'h');
    var morningCurfew = now.clone().startOf('day').add(6, 'h');
    if (this.value > 4) {
      if (!lampRelay.isOn && now.isAfter(morningCurfew) && now.isBefore(nightCurfew)) {
        lampRelay.on();
      }
    } else {
      lampRelay.off();
    }
  });
});
```

And beyond...
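The curfew check inside the relay example is pure logic that can be tested on its own, away from the hardware. Here is a dependency-free sketch using plain Date instead of moment; the 06:00 to 20:00 window mirrors the startOf/endOf arithmetic in the example, and the function names are my own:

```javascript
// Is the current hour inside the allowed lamp window?
// 06:00 = start of day plus 6 hours; 20:00 = end of day minus 4 hours.
function lampAllowed(date) {
  var h = date.getHours();
  return h >= 6 && h < 20;
}

// Decide whether the lamp relay should be on, given the 0-5 scaled light
// reading and the current time. A reading above 4 is the condition the
// relay example uses to switch the lamp on.
function shouldLampBeOn(lightLevel, date) {
  return lightLevel > 4 && lampAllowed(date);
}

shouldLampBeOn(4.5, new Date(2015, 10, 23, 12, 0)); // true: reading high, midday
shouldLampBeOn(4.5, new Date(2015, 10, 23, 23, 0)); // false: past the night curfew
shouldLampBeOn(2.0, new Date(2015, 10, 23, 12, 0)); // false: reading too low
```

Separating decisions from device I/O like this means the scheduling rules can be unit tested without a board connected.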
One of the great things about using Node.js with hardware is that we can extend our apps with modules from npm. We could publish an Atom feed of sensor readings over time, push the data to a web UI using socket.io, build an alert system, or create a data visualization layer, or we might build an API to control lights or pumps attached via relays to our board. It's never been easier to program your own internet-connected robot helpers and smart devices using JavaScript.

About the author

Anna Gerber is a full-stack developer with 15 years' experience in the university sector, formerly a Technical Project Manager at The University of Queensland ITEE eResearch specializing in Digital Humanities, and Research Scientist at the Distributed System Technology Centre (DSTC). Anna is a JavaScript robotics enthusiast and maker who enjoys tinkering with soft circuits and 3D printers.
Anna Gerber
23 Nov 2015
6 min read

The Internet of Peas? Gardening with JavaScript Part 1

Who wouldn't want an army of robots to help out around the home and garden? It's not science fiction: robots are devices that sense and respond to the world around us, so with some off-the-shelf hardware, and the power of the Johnny-Five JavaScript Robotics framework, we can build and program simple "robots" to automate everyday tasks. In this two-part article series, we'll build an internet-connected device for monitoring plants.

Bill of materials

You'll need these parts to build this project:

- Particle Core (or Photon): Particle
- 3xAA battery holder: e.g. with micro USB connector, from DF Robot
- Jumper wires: any electronics supplier, e.g. Sparkfun
- Solderless breadboard: any electronics supplier, e.g. Sparkfun
- Photo resistor: any electronics supplier, e.g. Sparkfun
- 1K resistor: any electronics supplier, e.g. Sparkfun
- Soil moisture sensor: e.g. Sparkfun
- Plants

Particle (formerly known as Spark) is a platform for developing devices for the Internet of Things. The Particle Core was their first-generation Wifi development board, and has since been superseded by the Photon. Johnny-Five supports both of these boards, as well as Arduino, BeagleBone Black, Raspberry Pi, Edison, Galileo, Electric Imp, Tessel and many other device platforms, so you can use the framework with your device of choice. The Platform Support page lists the features currently supported on each device. Any device with Analog Read support is suitable for this project.

Setting up the Particle board

Make sure you have a recent version of Node.js installed. We're using npm (Node Package Manager) to install the tools and libraries required for this project. Install the Particle command line tools with npm (via the Terminal on Mac, or Command Prompt on Windows):

```shell
npm install -g particle-cli
```

Particle boards need to be registered with the Particle Cloud service, and you must also configure your device to connect to your wireless network.
So the first thing you'll need to do is connect it to your computer via USB and run the setup program. See the Particle Setup docs. The LED on the Particle Core should be blinking blue when you plug it in for the first time (if not, press and hold the mode button). Sign up for a Particle account and then follow the prompts to set up your device via the Particle website, or if you prefer you can run the setup program from the command line. You'll be prompted to sign in and then to enter your Wifi SSID and password:

```shell
particle setup
```

After setup is complete, the Particle Core can be disconnected from your computer and powered by batteries or a separate USB power supply; we will connect to the board wirelessly from now on.

Flashing the board

We also need to flash the board with the Voodoospark firmware. Use the CLI tool to sign in to the Particle Cloud and list your devices to find out the ID of your board:

```shell
particle cloud login
particle list
```

Download the firmware.cpp file and use the flash command to write the Voodoospark firmware to your device:

```shell
particle cloud flash <Your Device ID> voodoospark.cpp
```

See the Voodoospark Getting Started page for more details. You should see the following message:

Flash device OK: Update started

The LED on the board will flash magenta. This will take about a minute, and will change back to green when the board is ready to use.

Creating a Johnny-Five project

We'll be installing a few dependencies from npm, so to help manage these, we'll set up our project as an npm package. Run the init command, filling in the project details at the prompts:

```shell
npm init
```

After init has completed, you'll have a package.json file with the metadata that you entered about your project. Dependencies for the project can also be saved to this file. We'll use the --save command line argument to npm when installing packages to persist dependencies to our package.json file. We'll need the Johnny-Five npm module as well as the Particle-IO IO Plugin for Johnny-Five.
```shell
npm install johnny-five --save
npm install particle-io --save
```

Johnny-Five uses the Firmata protocol to communicate with Arduino-based devices. IO Plugins provide Firmata-compatible interfaces that allow Johnny-Five to communicate with non-Arduino-based devices. The Particle-IO plugin allows you to run Node.js applications on your computer that communicate with the Particle board over Wifi, so that you can read from sensors or control components that are connected to the board.

When you connect to your board, you'll need to specify your Device ID and your Particle API Access Token. You can look up your access token under Settings in the Particle IDE. It's a good idea to copy these to environment variables rather than hardcoding them into your programs. If you are on Mac or Linux, you can create a file called .particlerc then run source .particlerc:

```shell
export PARTICLE_TOKEN=<Your Token Here>
export PARTICLE_DEVICE_ID=<Your Device ID Here>
```

Reading from a sensor

Now we're ready to get our hands dirty! Let's confirm that we can communicate with our Particle board using Johnny-Five, by taking a reading from our soil moisture sensor. Using jumper wires, connect one pin on the soil sensor to pin A0 (analog pin 0) and the other to GND (ground). The probes go into the soil in your plant pot.

Create a JavaScript file named sensor.js using your preferred text editor or IDE. We use require statements to include the Johnny-Five module and the Particle-IO plugin. We're creating an instance of the Particle-IO plugin (with our token and deviceId read from our environment variables) and providing this as the io config option when creating our Board object.
```javascript
var five = require("johnny-five");
var Particle = require("particle-io");

var board = new five.Board({
  io: new Particle({
    token: process.env.PARTICLE_TOKEN,
    deviceId: process.env.PARTICLE_DEVICE_ID
  })
});

board.on("ready", function() {
  console.log("CONNECTED");
  var soilSensor = new five.Sensor("A0");
  soilSensor.on("change", function() {
    console.log(this.value);
  });
});
```

After the board is ready, we create a Sensor object to monitor changes on pin A0, and then print the value from the sensor to the Node.js console whenever it changes. Run the program using Node.js:

```shell
node sensor.js
```

Try pulling the sensor out of the soil or watering your plant to make the sensor reading change. See the Sensor API for more methods that you can use with Sensors. You can hit control-C to end the program.

In the next installment we'll connect our light sensor and extend our Node.js application to monitor our plant's environment. Continue reading now!

About the author

Anna Gerber is a full-stack developer with 15 years' experience in the university sector, formerly a Technical Project Manager at The University of Queensland ITEE eResearch specializing in Digital Humanities, and Research Scientist at the Distributed System Technology Centre (DSTC). Anna is a JavaScript robotics enthusiast and maker who enjoys tinkering with soft circuits and 3D printers.