How-To Tutorials

9 reasons why Rust programmers love Rust

Richa Tripathi
03 Oct 2018
8 min read
The 2018 RedMonk Programming Language Rankings marked the entry of a new programming language into their Top 25 list. It has been an incredibly successful year for the Rust programming language in terms of popularity: it jumped from the 46th most popular language on GitHub to the 18th position. The Stack Overflow survey of 2018 is another indicator of the rise of the Rust programming language. Almost 78% of the developers working with Rust loved working with it, and it topped the list of the most loved programming languages among survey respondents for the third year in a row. Not only that, it also ranked 8th among the most wanted programming languages, meaning that respondents who have not used it yet would like to learn it. Although Rust was designed as a low-level language, best suited for systems, embedded, and other performance-critical code, it is gaining a lot of traction and presents a great opportunity for web developers and game developers. Rust is also empowering novice developers with the tools to start shipping code fast. So, why is Rust so tempting? Let's explore the high points of this incredible language and understand the variety of features that make it interesting to learn.

Automatic Garbage Collection

Garbage collection and non-memory resources often create problems with some systems languages. But Rust pays no heed to garbage collection and removes the possibility of failures caused by it. In Rust, resource cleanup is completely taken care of by RAII (Resource Acquisition Is Initialization).

Better support for Concurrency

Concurrency and parallelism are important topics in computer science and are also a hot topic in the industry today. Computers are gaining more and more cores, yet many programmers aren't prepared to fully utilize them. Handling concurrent programming safely and efficiently is another major goal of the Rust language. Concurrency is difficult to reason about, but Rust's strong, static type system helps you reason about your code. Rust also gives you two traits, Send and Sync, to help you make sense of code that can possibly be concurrent. Rust's standard library also provides a threading module, which enables you to run Rust code in parallel. You can also use Rust's threads as a simple isolation mechanism.

Error Handling in Rust is beautiful

A programmer is bound to make errors, irrespective of the programming language they use. Making errors while programming is normal, but it's the error handling mechanism of the language that enhances the experience of writing code. In Rust, errors are divided into two types: unrecoverable errors and recoverable errors.

Unrecoverable errors

An error is classified as 'unrecoverable' when there is no option other than to abort the program. The panic! macro in Rust is very helpful in these cases, especially when a bug has been detected in the code but the programmer is not clear on how to handle that error. The panic! macro generates a failure message that helps the user to debug the problem, and it stops execution before more catastrophic events occur.

Recoverable errors

The errors which can be handled easily, or which do not have a serious impact on the execution of the program, are known as recoverable errors. They are represented by Result<T, E>, an enum consisting of two variants, Ok(T) and Err(E), which describes the possible outcome of an operation:

Ok(T): 'T' is the type of value returned in the success case. It is the expected outcome.
Err(E): 'E' is the type of error returned in the failure case. It is the unexpected outcome.
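To make the two categories concrete, here is a minimal, hedged sketch of both styles; the function and variable names (parse_port, config_loaded) are invented for this illustration and are not taken from the article:

use std::num::ParseIntError;

// Recoverable: return a Result and let the caller decide how to react.
fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    input.trim().parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on port {}", port),
        Err(e) => println!("could not parse port: {}", e),
    }

    // Unrecoverable: abort with a clear failure message when continuing makes no sense.
    let config_loaded = false;
    if !config_loaded {
        panic!("no configuration found, aborting");
    }
}

Calling parse_port("not a number") would simply take the Err branch and keep the program running, while the panic! call aborts the whole program with its failure message (and a backtrace if the RUST_BACKTRACE environment variable is set).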
Resource Management

The one attribute that makes Rust stand out (and, for that matter, completely overpowers Google's Go) is its approach to resource management. Rust follows the C++ lead, with concepts like borrowing and mutable borrowing on the plate, and thus resource management becomes an elegant process. Furthermore, Rust recognized from the start that resource management is not just about memory usage; the fact that they got it right the first time makes the language a standout performer on this point. Although the Rust documentation does a good job of explaining the technical details, the article by M. Tim Jones explains the concepts in a much friendlier, easier-to-understand way, so it is worth listing his points here as well. The following excerpt is taken from his article.

Reusable code via modules

Rust allows you to organize code in a way that promotes its reuse. You attain this reusability by using modules, which are nothing but code organized into packages that other programmers can use. These modules contain functions, structures, and even other modules, which you can either make public, so that they can be accessed by the users of the module, or keep private, so that they can be used only within the module and not by the module's users. There are three keywords to create modules, use modules, and modify the visibility of elements in modules:

The mod keyword creates a new module
The use keyword allows you to use the module (it exposes the definitions into the scope so you can use them)
The pub keyword makes elements of the module public (otherwise, they're private)

Cleaner code with better safety checks

In Rust, the compiler enforces memory safety and other checks that make the programming language safe. You will never have to worry about dangling pointers or about using an object after it has been freed. These guarantees are part of the core Rust language and allow you to write clean code. Rust also includes an unsafe keyword with which you can disable checks that would typically result in a compilation error.

Data types and Collections in Rust

Rust is a statically typed programming language, which means that every value in Rust must have a specified data type. The biggest advantage of static typing is that a large class of errors is identified earlier in the development process. These data types can be broadly classified into two kinds: scalar and compound. Scalar data types represent a single value, such as an integer, floating-point number, or character, which are commonly present in other programming languages as well. But Rust also provides compound data types, such as tuples and arrays, which allow programmers to group multiple values into one type. The Rust standard library also provides a number of data structures called collections. Collections contain multiple values, but they are different from the standard compound data types like tuples and arrays discussed above: their size does not have to be specified at compile time, which allows the structure to grow and shrink as the program runs. Vectors, strings, and hash maps are the three most commonly used collections in Rust.
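As a rough sketch of how those three collections grow at runtime (the variable names below are invented for this example, not taken from the article):

use std::collections::HashMap;

fn main() {
    // A vector can grow and shrink as the program runs, unlike a fixed-size array.
    let mut scores: Vec<i32> = Vec::new();
    scores.push(10);
    scores.push(25);

    // A String is a growable, heap-allocated piece of UTF-8 text.
    let mut greeting = String::from("Hello");
    greeting.push_str(", Rust!");

    // A HashMap stores key-value pairs whose count is not known at compile time.
    let mut capitals = HashMap::new();
    capitals.insert("France", "Paris");
    capitals.insert("Japan", "Tokyo");

    println!("{:?} {} {:?}", scores, greeting, capitals);
}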
The friendly Rust community

Rust owes its success to the breadth and depth of engagement of its vibrant community, which supports a highly collaborative process for helping the language evolve in a truly open-source way. Rust is built from the bottom up, rather than any one individual or organization controlling the fate of the technology.

Reliable, robust release cycles of Rust

What is common between Java, Spring, and Angular? They never release their updates when they promise to. The release cycle of the Rust community, by contrast, works with clockwork precision and is very reliable. Here's an overview of the dates and versions: in mid-September 2018, the Rust team released the Rust 2018 RC1 version. Rust 2018 is the first major new edition of Rust (after Rust 1.0, released in 2015). This release marks the culmination of the last three years of Rust's development from the core team and brings the language together in one neat package. It includes plenty of new features such as raw identifiers, better path clarity, new optimizations, and other additions. You can learn more about the Rust language and its evolution on the Rust blog and download it from the Rust language website.

Note: the headline was edited 09.06.2018 to make it clear that Rust was found to be the most loved language among developers using it.

Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes
Rust as a Game Programming Language: Is it any good?
Rust Language Server, RLS 1.0 releases with code intelligence, syntax highlighting and more

Scripting for Animation in Maya

Packt
02 Aug 2016
28 min read
This article, written by Adrian Herbez, author of Maya Programming with Python Cookbook, will cover various recipes related to animating objects with scripting: Querying animation data Working with animation layers Copying animation from one object to another Setting keyframes Creating expressions via script (For more resources related to this topic, see here.) In this article, we'll be looking at how to use scripting to create animation and set keyframes. We'll also see how to work with animation layers and create expressions from code. Querying animation data In this example, we'll be looking at how to retrieve information about animated objects, including which attributes are animated and both the location and value of keyframes. Although this script is unlikely to be useful by itself, knowing the number, time, and values of keyframes is sometimes a prerequisite for more complex animation tasks. Getting ready To make get the most out of this script, you'll need to have an object with some animation curves defined. Either load up a scene with animation or skip ahead to the recipe on setting keyframes. How to do it... Create a new file and add the following code: import maya.cmds as cmds def getAnimationData(): objs = cmds.ls(selection=True) obj = objs[0] animAttributes = cmds.listAnimatable(obj); for attribute in animAttributes: numKeyframes = cmds.keyframe(attribute, query=True, keyframeCount=True) if (numKeyframes > 0): print("---------------------------") print("Found ", numKeyframes, " keyframes on ", attribute) times = cmds.keyframe(attribute, query=True, index=(0,numKeyframes), timeChange=True) values = cmds.keyframe(attribute, query=True, index=(0,numKeyframes), valueChange=True) print('frame#, time, value') for i in range(0, numKeyframes): print(i, times[i], values[i]) print("---------------------------") getAnimationData() If you select an object with animation curves and run the script, you should see a readout of the time and value for each keyframe on each animated attribute. For example, if we had a simple bouncing ball animation with the following curves: We would see something like the following output in the script editor: --------------------------- ('Found ', 2, ' keyframes on ', u'|bouncingBall.translateX') frame#, time, value (0, 0.0, 0.0) (1, 190.0, 38.0) --------------------------- --------------------------- ('Found ', 20, ' keyframes on ', u'|bouncingBall.translateY') frame#, time, value (0, 0.0, 10.0) (1, 10.0, 0.0) (2, 20.0, 8.0) (3, 30.0, 0.0) (4, 40.0, 6.4000000000000004) (5, 50.0, 0.0) (6, 60.0, 5.120000000000001) (7, 70.0, 0.0) (8, 80.0, 4.096000000000001) (9, 90.0, 0.0) (10, 100.0, 3.276800000000001) (11, 110.0, 0.0) (12, 120.0, 2.6214400000000011) (13, 130.0, 0.0) (14, 140.0, 2.0971520000000008) (15, 150.0, 0.0) (16, 160.0, 1.6777216000000008) (17, 170.0, 0.0) (18, 180.0, 1.3421772800000007) (19, 190.0, 0.0) --------------------------- How it works... We start out by grabbing the selected object, as usual. Once we've done that, we'll iterate over all the keyframeable attributes, determine if they have any keyframes and, if they do, run through the times and values. To get the list of keyframeable attributes, we use the listAnimateable command: objs = cmds.ls(selection=True) obj = objs[0] animAttributes = cmds.listAnimatable(obj) This will give us a list of all the attributes on the selected object that can be animated, including any custom attributes that have been added to it. 
If you were to print out the contents of the animAttributes array, you would likely see something like the following: |bouncingBall.rotateX |bouncingBall.rotateY |bouncingBall.rotateZ Although the bouncingBall.rotateX part likely makes sense, you may be wondering about the | symbol. This symbol is used by Maya to indicate hierarchical relationships between nodes in order to provide fully qualified node and attribute names. If the bouncingBall object was a child of a group named ballGroup, we would see this instead: |ballGroup|bouncingBall.rotateX Every such fully qualified name will contain at least one pipe (|) symbol, as we see in the first, nongrouped example, but there can be many more—one for each additional layer of hierarchy. While this can lead to long strings for attribute names, it allows Maya to make use of objects that may have the same name, but under different parts of a larger hierarchy (to have control objects named handControl for each hand of a character, for example). Now that we have a list of all of the possibly animated attributes for the object, we'll next want to determine if there are any keyframes set on it. To do this, we can use the keyframe command in the query mode. for attribute in animAttributes: numKeyframes = cmds.keyframe(attribute, query=True, keyframeCount=True) At this point, we have a variable (numKeyframes) that will be greater than zero for any attribute with at least one keyframe. Getting the total number of keyframes on an attribute is only one of the things that the keyframe command can do; we'll also use it to grab the time and value for each of the keyframes. To do this, we'll call it two more times, both in the query mode—once to get the times and once to get the values: times = cmds.keyframe(attribute, query=True, index=(0,numKeyframes), timeChange=True) values = cmds.keyframe(attribute, query=True, index=(0,numKeyframes), valueChange=True) These two lines are identical in everything except what type of information we're asking for. The important thing to note here is the index flag, which is used to tell Maya which keyframes we're interested in. The command requires a two-element argument representing the first (inclusive) and last (exclusive) index of keyframes to examine. So, if we had total 20 keyframes, we would pass in (0,20), which would examine the keys with indices from 0 to 19. The flags we're using to get the values likely look a bit odd—both valueChange and timeChange might lead you to believe that we would be getting relative values, rather than absolute. However, when used in the previously mentioned manner, the command will give us what we want—the actual time and value for each keyframe, as they appear in the graph editor. If you want to query information on a single keyframe, you still have to pass in a pair of values- just use the index that you're interested in twice- to get the fourth frame, for example, use (3,3). At this point, we have two arrays—the times array, which contains the time value for each keyframe, and the values array that contains the actual attribute value. All that's left is to print out the information that we've found: print('frame#, time, value') for i in range(0, numKeyframes): print(i, times[i], values[i]) There's more... Using the indices to get data on keyframes is an easy way to run through all of the data for a curve, but it's not the only way to specify a range. The keyframe command can also accept time values. 
If we wanted to know how many keyframes existed on a given attribute between frame 1 and frame 100, for example, we could do the following:

numKeyframes = cmds.keyframe(attributeName, query=True, time=(1,100), keyframeCount=True)

Also, if you find yourself with highly nested objects and need to extract just the object and attribute names, you may find Python's built-in split function helpful. You can call split on a string to have Python break it up into a list of parts. By default, Python will break up the input string by spaces, but you can specify a particular string or character to split on. Assume that you have a string like the following:

|group4|group3|group2|group1|ball.rotateZ

Then, you could use split to break it apart based on the | symbol. It would give you a list, and using -1 as an index would give you just ball.rotateZ. Putting that into a function that can be used to extract the object/attribute names from a full string would be easy, and it would look something like the following:

def getObjectAttributeFromFull(fullString):
    parts = fullString.split("|")
    return parts[-1]

Using it would look something like this:

inputString = "|group4|group3|group2|group1|ball.rotateZ"
result = getObjectAttributeFromFull(inputString)
print(result) # outputs "ball.rotateZ"

Working with animation layers

Maya offers the ability to create multiple layers of animation in a scene, which can be a good way to build up complex animation. The layers can then be independently enabled or disabled, or blended together, granting the user a great deal of control over the end result. In this example, we'll be looking at how to examine the layers that exist in a scene and building a script that will ensure we have a layer of a given name. For example, we might want to create a script that would add additional randomized motion to the rotations of selected objects without overriding their existing motion. To do this, we would want to make sure that we had an animation layer named randomMotion, which we could then add keyframes to.

How to do it...

Create a new script and add the following code:

import maya.cmds as cmds

def makeAnimLayer(layerName):
    baseAnimationLayer = cmds.animLayer(query=True, root=True)
    foundLayer = False
    if (baseAnimationLayer != None):
        childLayers = cmds.animLayer(baseAnimationLayer, query=True, children=True)
        if (childLayers != None) and (len(childLayers) > 0):
            if layerName in childLayers:
                foundLayer = True
    if not foundLayer:
        cmds.animLayer(layerName)
    else:
        print('Layer ' + layerName + ' already exists')

makeAnimLayer("myLayer")

Run the script, and you should see an animation layer named myLayer appear in the Anim tab of the channel box.

How it works...

The first thing that we want to do is to find out if there is already an animation layer with the given name present in the scene. To do this, we start by grabbing the name of the root animation layer:

baseAnimationLayer = cmds.animLayer(query=True, root=True)

In almost all cases, this should return one of two possible values—either BaseAnimation or (if there aren't any animation layers yet) Python's built-in None value.
We'll want to create a new layer in either of the following two possible cases: There are no animation layers yet There are animation layers, but none with the target name In order to make the testing for the above a bit easier, we first create a variable to hold whether or not we've found an animation layer and set it to False: foundLayer = False Now we need to check to see whether it's true that both animation layers exist and one of them has the given name. First off, we check that there was, in fact, a base animation layer: if (baseAnimationLayer != None): If this is the case, we want to grab all the children of the base animation layer and check to see whether any of them have the name we're looking for. To grab the children animation layers, we'll use the animLayer command again, again in the query mode: childLayers = cmds.animLayer(baseAnimationLayer, query=True, children=True) Once we've done that, we'll want to see if any of the child layers match the one we're looking for. We'll also need to account for the possibility that there were no child layers (which could happen if animation layers were created then later deleted, leaving only the base layer): if (childLayers != None) and (len(childLayers) > 0): if layerName in childLayers: foundLayer = True If there were child layers and the name we're looking for was found, we set our foundLayer variable to True. If the layer wasn't found, we create it. This's easily done by using the animLayer command one more time, with the name of the layer we're trying to create: if not foundLayer: cmds.animLayer(layerName) Finally, we finish off by printing a message if the layer was found to let the user know. There's more... Having animation layers is great, in that we can make use of them when creating or modifying keyframes. However, we can't actually add animation to layers without first adding the objects in question to the animation layer. Let's say that we had an object named bouncingBall, and we wanted to set some keyframes on its translateY attribute, in the bounceLayer animation layer. The actual command to set the keyframe(s) would look something like this: cmds.setKeyframe("bouncingBall.translateY", value=yVal, time=frame, animLayer="bounceLayer") However, this would only work as expected if we had first added the bouncingBall object to the bounceLayer animation layer. To do it, we could use the animLayer command in the edit mode, with the addSelectedObjects flag. Note that because the flag operates on the currently selected objects, we would need to first select the object we want to add: cmds.select("bouncingBall", replace=True) cmds.animLayer("bounceLayer", edit=True, addSelectedObjects=True) Adding the object will, by default, add all of its animatable attributes. You can also add specific attributes, rather than entire objects. For example, if we only wanted to add the translateY attribute to our animation layer, we could do the following: cmds.animLayer("bounceLayer", edit=True, attribute="bouncingBall.translateY") Copying animation from one object to another In this example, we'll create a script that will copy all of the animation data on one object to one or more additional objects, which could be useful to duplicate motion across a range of objects. Getting ready For the script to work, you'll need an object with some keyframes set. Either create some simple animation or skip ahead to the example on creating keyframes with script, later in this article. How to do it... 
Create a new script and add the following code: import maya.cmds as cmds def getAttName(fullname): parts = fullname.split('.') return parts[-1] def copyKeyframes(): objs = cmds.ls(selection=True) if (len(objs) < 2): cmds.error("Please select at least two objects") sourceObj = objs[0] animAttributes = cmds.listAnimatable(sourceObj); for attribute in animAttributes: numKeyframes = cmds.keyframe(attribute, query=True, keyframeCount=True) if (numKeyframes > 0): cmds.copyKey(attribute) for obj in objs[1:]: cmds.pasteKey(obj, attribute=getAttName(attribute), option="replace") copyKeyframes() Select the animated object, shift-select at least one other object, and run the script. You'll see that all of the objects have the same motion. How it works... The very first part of our script is a helper function that we'll be using to strip the attribute name off a full object name/attribute name string. More on it will be given later. Now on to the bulk of the script. First off, we run a check to make sure that the user has selected at least two objects. If not, we'll display a friendly error message to let the user know what they need to do: objs = cmds.ls(selection=True) if (len(objs) < 2): cmds.error("Please select at least two objects") The error command will also stop the script from running, so if we're still going, we know that we had at least two objects selected. We'll set the first one to be selected to be our source object. We could just as easily use the second-selected object, but that would mean using the first selected object as the destination, limiting us to a single target:     sourceObj = objs[0] Now we're ready to start copying animation, but first, we'll need to determine which attributes are currently animated, through a combination of finding all the attributes that can be animated, and checking each one to see whether there are any keyframes on it: animAttributes = cmds.listAnimatable(sourceObj); for attribute in animAttributes: numKeyframes = cmds.keyframe(attribute, query=True, keyframeCount=True) If we have at least one keyframe for the given attribute, we move forward with the copying: if (numKeyframes > 0): cmds.copyKey(attribute) The copyKey command will cause the keyframes for a given object to be temporarily held in memory. If used without any additional flags, it will grab all of the keyframes for the specified attribute, exactly what we want in this case. If we wanted only a subset of the keyframes, we could use the time flag to specify a range. We're passing in each of the values that were returned by the listAnimatable function. These will be full names (both object name and attribute). That's fine for the copyKey command, but will require a bit of additional work for the paste operation. Since we're copying the keys onto a different object than the one that we copied them from, we'll need to separate out the object and attribute names. For example, our attribute value might be something like this: |group1|bouncingBall.rotateX From this, we'll want to trim off just the attribute name (rotateX) since we're getting the object name from the selection list. To do this, we created a simple helper function that takes a full-length object/attribute name and returns just the attribute name. That's easy enough to do by just breaking the name/attribute string apart on the . 
and returning the last element, which in this case is the attribute: def getAttName(fullname): parts = fullname.split('.') return parts[-1] Python's split function breaks apart the string into an array of strings, and using a negative index will count back from the end, with −1 giving us the last element. Now we can actually paste our keys. We'll run through all the remaining selected objects, starting with the second, and paste our copied keyframes: for obj in objs[1:]: cmds.pasteKey(obj, attribute=getAttName(attribute), option="replace") Note that we're using the nature of Python's for loops to make the code a bit more readable. Rather than using an index, as would be the case in most other languages, we can just use the for x in y construction. In this case, obj will be a temporary variable, scoped to the for loop, that takes on the value of each item in the list. Also note that instead of passing in the entire list, we use objs[1:] to indicate the entire list, starting at index 1 (the second element). The colon allows us to specify a subrange of the objs list, and leaving the right-hand side blank will cause Python to include all the items to the end of the list. We pass in the name of the object (from our original selection), the attribute (stripped from full name/attribute string via our helper function), and we use option="replace" to ensure that the keyframes we're pasting in replace anything that's already there. Original animation (top). Here, we see the result of pasting keys with the default settings (left) and with the replace option (right). Note that the default results still contain the original curves, just pushed to later frames If we didn't include the option flag, Maya would default to inserting the pasted keyframes while moving any keyframes already present forward in the timeline. There's more... There are a lot of other options for the option flag, each of which handles possible conflicts with the keys you're pasting and the ones that may already exist in a slightly different way. Be sure to have a look at the built-in documentation for the pasteKeys command for more information. Another, and perhaps better option to control how pasted keys interact with existing one is to paste the new keys into a separate animation layer. For example, if we wanted to make sure that our pasted keys end up in an animation layer named extraAnimation, we could modify the call to pasteKeys as follows: cmds.pasteKey(objs[i], attribute=getAttName(attribute), option="replace", animLayer="extraAnimation") Note that if there was no animation layer named extraAnimation present, Maya would fail to copy the keys. See the section on working with animation layers for more information on how to query existing layers and create new ones. Setting keyframes While there are certainly a variety of ways to get things to move in Maya, the vast majority of motion is driven by keyframes. In this example, we'll be looking at how to create keyframes with code by making that old animation standby—a bouncing ball. Getting ready The script we'll be creating will animate the currently selected object, so make sure that you have an object—either the traditional sphere or something else you'd like to make bounce. How to do it... 
Create a new file and add the following code: import maya.cmds as cmds def setKeyframes(): objs = cmds.ls(selection=True) obj = objs[0] yVal = 0 xVal = 0 frame = 0 maxVal = 10 for i in range(0, 20): frame = i * 10 xVal = i * 2 if i % 2 == 1: yVal = 0 else: yVal = maxVal maxVal *= 0.8 cmds.setKeyframe(obj + '.translateY', value=yVal, time=frame) cmds.setKeyframe(obj + '.translateX', value=xVal, time=frame) setKeyframes() Run the preceding script with an object selected and trigger playback. You should see the object move up and down. How it works... In order to get our object to bounce, we'll need to set keyframes such that the object alternates between a Y-value of zero and an ever-decreasing maximum so that the animation mimics the way a falling object loses velocity with each bounce. We'll also make it move forward along the x-axis as it bounces. We start by grabbing the currently selected object and setting a few variables to make things easier to read as we run through our loop. Our yVal and xVal variables will hold the current value that we want to set the position of the object to. We also have a frame variable to hold the current frame and a maxVal variable, which will be used to hold the Y-value of the object's current height. This example is sufficiently simple that we don't really need separate variables for frame and the attribute values, but setting things up this way makes it easier to swap in more complex math or logic to control where keyframes get set and to what value. This gives us the following: yVal = 0 xVal = 0 frame = 0 maxVal = 10 The bulk of the script is a single loop, in which we set keyframes on both the X and Y positions. For the xVal variable, we'll just be multiplying a constant value (in this case, 2 units). We'll do the same thing for our frame. For the yVal variable, we'll want to alternate between an ever-decreasing value (for the successive peaks) and zero (for when the ball hits the ground). To alternate between zero and non-zero, we'll check to see whether our loop variable is divisible by two. One easy way to do this is to take the value modulo (%) 2. This will give us the remainder when the value is divided by two, which will be zero in the case of even numbers and one in the case of odd numbers. For odd values, we'll set yVal to zero, and for even ones, we'll set it to maxVal. To make sure that the ball bounces a little less each time, we set maxVal to 80% of its current value each time we make use of it. Putting all of that together gives us the following loop: for i in range(0, 20): frame = i * 10 xVal = i * 2 if (i % 2) == 1: yVal = 0 else: yVal = maxVal maxVal *= 0.8 Now we're finally ready to actually set keyframes on our object. This's easily done with the setKeyframe command. We'll need to specify the following three things: The attribute to keyframe (object name and attribute) The time at which to set the keyframe The actual value to set the attribute to In this case, this ends up looking like the following: cmds.setKeyframe(obj + '.translateY', value=yVal, time=frame) cmds.setKeyframe(obj + '.translateX', value=xVal, time=frame) And that's it! A proper bouncing ball (or other object) animated with pure code. There's more... By default, the setKeyframe command will create keyframes with both in tangent and out tangent being set to spline. That's fine for a lot of things, but will result in overly smooth animation for something that's supposed to be striking a hard surface. 
We can improve our bounce animation by keeping smooth tangents for the keyframes when the object reaches its maximum height, but setting the tangents at its minimum to be linear. This will give us a nice sharp change every time the ball strikes the ground. To do this, all we need to do is to set both the inTangentType and outTangentType flags to linear, as follows:

cmds.setKeyframe(obj + ".translateY", value=yVal, time=frame, inTangentType="linear", outTangentType="linear")

To make sure that we only have linear tangents when the ball hits the ground, we could set up a variable to hold the tangent type, and set it to one of two values in much the same way that we set the yVal variable. This would end up looking like this:

tangentType = "auto"
for i in range(0, 20):
    frame = i * 10
    if i % 2 == 1:
        yVal = 0
        tangentType = "linear"
    else:
        yVal = maxVal
        tangentType = "spline"
        maxVal *= 0.8
    cmds.setKeyframe(obj + '.translateY', value=yVal, time=frame, inTangentType=tangentType, outTangentType=tangentType)

Creating expressions via script

While most animation in Maya is created manually, it can often be useful to drive attributes directly via script, especially for mechanical objects or background items. One way to approach this is through Maya's expression editor. In addition to creating expressions via the expression editor, it is also possible to create expressions with scripting, in a beautiful example of code-driven code. In this example, we'll be creating a script that can be used to create a sine wave-based expression to smoothly alter a given attribute between two values. Note that expressions cannot actually use Python code directly; they require the code to be written in the MEL syntax. But this doesn't mean that we can't use Python to create expressions, which is what we'll do in this example.

Getting ready

Before we dive into the script, we'll first need to have a good handle on the kind of expression we'll be creating. There are a lot of different ways to approach expressions, but in this instance, we'll keep things relatively simple and tie the attribute to a sine wave based on the current time. Why a sine wave? Sine waves are great because they alter smoothly between two values, with a nice easing into and out of both the minimum and maximum. While the minimum and maximum values range from -1 to 1, it's easy enough to alter the output to move between any two numbers we want. We'll also make things a bit more flexible by setting up the expression to rely on a custom speed attribute that can be used to control the rate at which the attribute animates. The end result will be a value that varies smoothly between any two numbers at a user-specified (and keyframeable) rate.

How to do it...

Create a new script and add the following code:

import maya.cmds as cmds

def createExpression(att, minVal, maxVal, speed):
    objs = cmds.ls(selection=True)
    obj = objs[0]
    cmds.addAttr(obj, longName="speed", shortName="speed", min=0, keyable=True)
    amplitude = (maxVal - minVal)/2.0
    offset = minVal + amplitude
    baseString = "{0}.{1} = ".format(obj, att)
    sineClause = '(sin(time * ' + obj + '.speed)'
    valueClause = ' * ' + str(amplitude) + ' + ' + str(offset) + ')'
    expressionString = baseString + sineClause + valueClause
    cmds.expression(string=expressionString)

createExpression('translateY', 5, 10, 1)

How it works...

The first thing that we do is add a speed attribute to our object.
We'll be sure to make it keyable for later animation:

cmds.addAttr(obj, longName="speed", shortName="speed", min=0, keyable=True)

It's generally a good idea to include at least one keyframeable attribute when creating expressions. While math-driven animation is certainly a powerful technique, you'll likely still want to be able to alter the specifics. Giving yourself one or more keyframeable attributes is an easy way to do just that.

Now we're ready to build up our expression. But first, we'll need to understand exactly what we want; in this case, a value that smoothly varies between two extremes, with the ability to control its speed. We can easily build an expression to do that using the sine function, with the current time as the input. Here's what it looks like in a general form:

animatedValue = (sin(time * S) * M) + O;

Where:

S is a value that will either speed up (if greater than 1) or slow down (if less) the rate at which the input to the sine function changes
M is a multiplier to alter the overall range through which the value changes
O is an offset to ensure that the minimum and maximum values are correct

You can also think about it visually—S will cause our wave to stretch or shrink along the horizontal (time) axis, M will expand or contract it vertically, and O will move the entire shape of the curve either up or down.

S is already taken care of; it's our newly created "speed" attribute. M and O will need to be calculated, based on the fact that sine functions always produce values ranging from -1 to 1. The overall range of values should be from our minVal to our maxVal, so you might think that M should be equal to (maxVal - minVal). However, since it gets applied to both -1 and 1, this would leave us with double the desired change. So, the final value we want is instead (maxVal - minVal)/2. We store that into our amplitude variable as follows:

amplitude = (maxVal - minVal)/2.0

Next up is the offset value O. We want to move our graph such that the minimum and maximum values are where they should be. It might seem like that would mean just adding our minVal, but if we left it at that, our output would dip below the minimum for 50% of the time (anytime the sine function is producing negative output). To fix it, we set O to (minVal + M), or in the case of our script:

offset = minVal + amplitude

This way, we move the 0 position of the wave to be midway between our minVal and maxVal, which is exactly what we want.

To make things clearer, let's look at the different parts we're tacking onto sin(), and the way they affect the minimum and maximum values the expression will output. We'll assume that the end result we're looking for is a range from 0 to 4.

Expression | Additional component | Minimum | Maximum
sin(time) | None (raw sin function) | -1 | 1
sin(time * speed) | Multiply input by "speed" | -1 (faster) | 1 (faster)
sin(time * speed) * 2 | Multiply output by 2 | -2 | 2
(sin(time * speed) * 2) + 2 | Add 2 to output | 0 | 4

Note that 2 = (4-0)/2 and 2 = 0 + 2.

Here's what the preceding progression looks like when graphed: Four steps in building up an expression to vary an attribute from 0 to 4 with a sine function.

Okay, now that we have the math locked down, we're ready to translate that into Maya's expression syntax.
If we wanted an object named myBall to animate along Y with the previous values, we would want to end up with:

myBall.translateY = (sin(time * myBall.speed) * 5) + 12;

This would work as expected if entered into Maya's expression editor, but we want to make sure that we have a more general-purpose solution that can be used with any object and any values. That's straightforward enough and just requires building up the preceding string from various literals and variables, which is what we do in the next few lines:

baseString = "{0}.{1} = ".format(obj, att)
sineClause = '(sin(time * ' + obj + '.speed)'
valueClause = ' * ' + str(amplitude) + ' + ' + str(offset) + ')'
expressionString = baseString + sineClause + valueClause

I've broken up the string creation into a few different lines to make things clearer, but it's by no means necessary. The key idea here is that we're switching back and forth between literals (sin(time *, .speed, and so on) and variables (obj, att, amplitude, and offset) to build the overall string. Note that we have to wrap numbers in the str() function to keep Python from complaining when we combine them with strings.

At this point, we have our expression string ready to go. All that's left is to actually add it to the scene as an expression, which is easily done with the expression command:

cmds.expression(string=expressionString)

And that's it! We will now have an attribute that varies smoothly between any two values.

There's more...

There are tons of other ways to use expressions to drive animation, and all sorts of simple mathematical tricks that can be employed. For example, you can easily get a value to move smoothly to a target value with a nice easing-in to the target by running this every frame:

animatedAttribute = animatedAttribute + (targetValue - animatedAttribute) * 0.2;

This will add 20% of the current difference between the target and the current value to the attribute, which will move it towards the target. Since the amount that is added is always a percentage of the current difference, the per-frame effect reduces as the value approaches the target, providing an ease-in effect. If we were to combine this with some code to randomly choose a new target value, we would end up with an easy way to, say, animate the heads of background characters to randomly look in different positions (maybe to provide a stadium crowd). Assuming that we had added custom attributes for targetX, targetY, and targetZ to an object named myCone, that would end up looking something like the following:

if (frame % 20 == 0) {
    myCone.targetX = rand(time) * 360;
    myCone.targetY = rand(time) * 360;
    myCone.targetZ = rand(time) * 360;
}
myCone.rotateX += (myCone.targetX - myCone.rotateX) * 0.2;
myCone.rotateY += (myCone.targetY - myCone.rotateY) * 0.2;
myCone.rotateZ += (myCone.targetZ - myCone.rotateZ) * 0.2;

Note that we're using the modulo (%) operator to do something (setting the target) only when the frame is an exact multiple of 20. We're also using the current time as the seed value for the rand() function to ensure that we get different results as the animation progresses. The previously mentioned example is how the code would look if we entered it directly into Maya's expression editor; note the MEL-style (rather than Python) syntax. Generating this code via Python would be a bit more involved than our sine wave example, but would use all the same principles—building up a string from literals and variables, then passing that string to the expression command.
Summary

In this article, we primarily discussed scripting and animation using Maya.

Resources for Article:

Further resources on this subject:
Introspecting Maya, Python, and PyMEL [article]
Discovering Python's parallel programming tools [article]
Mining Twitter with Python – Influence and Engagement [article]

Your First Swift Program

Packt
20 Feb 2018
4 min read
In this article by Keith Moon, author of the book Swift 4 Programming Cookbook, we will learn how to write your first Swift program. (For more resources related to this topic, see here.)

Your first Swift program

In this first recipe, we will get up and running with Swift using a Swift playground and run our first piece of Swift code.

Getting ready

To run our first Swift program, we first need to download and install our IDE. During the beta of Apple's Xcode 9, it is available as a direct download from Apple's developer website at http://developer.apple.com/download; access to this beta will require a free Apple developer account. Once the beta has ended and Xcode 9 is publicly available, it will also be available from the Mac App Store. By obtaining it from the Mac App Store, you will automatically be informed of updates, so this is the preferred route once Xcode 9 is out of beta.

Xcode from the Mac App Store

Open up the Mac App Store, either from the dock or via Spotlight.
Search for xcode.
Click Install.

Xcode is a large download (over 4 GB), so depending on your internet connection, this could take a while! Progress can be monitored from Launchpad.

Xcode as a direct download

Go to the Apple Developer download page at http://developer.apple.com/download
Click the Download button to download Xcode within a .xip file.
Double click on the downloaded file to unpack the Xcode application.
Drag the Xcode application into your Applications folder.

How to do it...

With Xcode downloaded, let's create our first Swift playground:

Launch Xcode from the icon in your dock.
From the welcome screen, choose Get started with a playground.
From the template chooser, select the blank template from the iOS tab.
Choose a name for your playground and a location to save it.

Xcode playgrounds can be based on one of three different Apple platforms: iOS, tvOS, and macOS (the operating system formerly known as OS X). Playgrounds provide full access to the frameworks available to either iOS, tvOS, or macOS, depending on which you choose. An iOS playground will be assumed for the entirety of this chapter, chiefly because this is the platform of choice of the author. Where recipes do have UI components, the iOS platform will be used until otherwise stated.

You are now presented with a view that looks like this:

Let's replace the word playground with Swift! Press the blue play button in the bottom left-hand corner of the window to execute the code in the playground.

Congratulations! You have just run some Swift code. On the right-hand side of the window, you will see the output of each line of code in the playground. We can see our line of code has output "Hello, Swift!"

There's more...

If you put your cursor over the output on the right-hand side, you will see two buttons, one that looks like an eye and another that is a square:

Click on the eye button and you get a Quick Look box of the output. This isn't that useful for just a string, but it can be useful for more visual output like colors and views.

Click on the square button, and a box will be added in-line, under your code, showing the output of the code. This can be really useful if you want to see how the output changes as you change the code.

Summary

In this article, we learnt how to run your first Swift program.

Resources for Article:

Further resources on this subject:
Your First Swift App [article]
Exploring Swift [article]
Functions in Swift [article]

Minecraft Java team are open sourcing some of Minecraft's code as libraries

Sugandha Lahoti
08 Oct 2018
2 min read
Stockholm's Minecraft Java team are open sourcing some of Minecraft's code as libraries for game developers. Developers can now use them to improve their Minecraft mods, use them for their own projects, or help improve pieces of the Minecraft Java engine. The team will open up different libraries gradually. These libraries are open source and MIT licensed. For now, they have open sourced two libraries: Brigadier and DataFixerUpper.

Brigadier

The first library, Brigadier, takes arbitrary strings of text entered into Minecraft and turns them into an actual function that the game will perform. Basically, if you enter something like /give Dinnerbone sticks in the game, it goes internally into Brigadier, which breaks it down into pieces and then tries to figure out what the developer is trying to do with this piece of text. Nathan Adams, a Java developer, hopes that giving the Minecraft community access to Brigadier can make it "extremely user-friendly one day." Brigadier has been available for a week now. It has already seen improvements in the code and the readme doc.

DataFixerUpper

Another important library of the Minecraft game engine, DataFixerUpper, is also being open sourced. When a developer adds a new feature into Minecraft, they have to change the way level data and save files are stored. DataFixerUpper converts these data formats to what the game should currently be using. Also in consideration for open sourcing is the Blaze3D library, which is a complete rewrite of the render engine for Minecraft 1.14.

You can check out the announcement on the Minecraft website. You can also download Brigadier and DataFixerUpper.

Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics.
Learning with Minecraft Mods
A Brief History of Minecraft Modding

The ethical dilemmas developers working on Artificial Intelligence products must consider

Amey Varangaonkar
29 Sep 2018
10 min read
Facebook has recently come under the scanner for sharing the data of millions of users without their consent. Their use of Artificial Intelligence to predict their customers’ behavior and then to sell this information to advertisers has come under heavy criticism and has raised concerns over the privacy of users’ data. A lot of it inadvertently has to do with the ‘smart use’ of data by companies like Facebook. As Artificial Intelligence continues to revolutionize the industry, and as the applications of AI continue to rapidly grow across a spectrum of real-world domains, the need for a regulated, responsible use of AI has also become more important than ever. Several ethical questions are being asked of the way the technology is being used and how it is impacting our lives, Facebook being just one of the many examples right now. In this article, we look at some of these ethical concerns surrounding the use of AI. Infringement of users’ data privacy Probably the biggest ethical concern in the use of Artificial Intelligence and smart algorithms is the way companies are using them to gain customer insights, without getting the consent of the said customers in the first place. Tracking customers’ online activity, or using the customer information available on various social media and e-commerce websites in order to tailor marketing campaigns or advertisements that are targeted towards the customer is a clear breach of their privacy, and sometimes even amounts to ‘targeted harassment’. In the case of Facebook, for example, there have been many high profile instances of misuse and abuse of user data, such as: The recent Cambridge Analytica scandal where Facebook’s user data was misused Boston-based data analytics firm Crimson Hexagon misusing Facebook user data Facebook’s involvement in the 2016 election meddling Accusations of Facebook along with Twitter and Google having a bias against conservative views Accusation of discrimination with targeted job ads on the basis of gender and age How far will these tech giants such as Facebook go to fix what they have broken - the trust of many of its users? The European Union General Data Protection Regulation (GDPR) is a positive step to curb this malpractice. However, such a regulation needs to be implemented worldwide, which has not been the case yet. There needs to be a universal agreement on the use of public data in the modern connected world. Individual businesses and developers must be accountable and hold themselves ethically responsible when strategizing or designing these AI products, keeping the users’ privacy in mind. Risk of automation in the workplace The most fundamental ethical issue that comes up when we talk about automation, or the introduction of Artificial Intelligence in the workplace, is how it affects the role of human workers. ‘Does the AI replace them completely?’ is a common question asked by many. Also, if human effort is not going to be replaced by AI and automation, in what way will the worker’s role in the organization be affected? The World Economic Forum (WEF) recently released a Future of Jobs report in which they highlight the impact of technological advancements on the current workforce. The report states that machines will be able to do half of the current job tasks within the next 5 years. 
A few important takeaways from this report with regard to automation and its impact on the skilled human workers are: Existing jobs will be augmented through technology to create new tasks and resulting job roles altogether - from piloting drones to remotely monitoring patients. The inclusion of AI and smart algorithms is going to reduce the number of workers required for certain work tasks The layoffs in certain job roles will also involve difficult transitions for many workers and investment for reskilling and training, commonly referred to as collaborative automation. As we enter the age of machine augmented human productivity, employees will be trained to work along with the AI tools and systems, empowering them to work quickly and more efficiently. This will come with an additional cost of training which the organization will have to bear Artificial stupidity - how do we eliminate machine-made mistakes? It goes without saying that learning happens over time, and it is no different for AI. The AI systems are fed lots and lots of training data and real-world scenarios. Once a system is fully trained, it is then made to predict outcomes on real-world test data and the accuracy of the model is then determined and improved. It is only normal, however, that the training model cannot be fed with every possible scenario there is, and there might be cases where the AI is unprepared for or can be fooled by an unusual scenario or test-case. Some images where the deep neural network is unable to identify their pattern is an example of this. Another example would be the presence of random dots in an image that would lead the AI to think there is a pattern in an image, where there really isn’t any. Deceptive perceptions like this may lead to unwanted errors, which isn’t really the AI’s fault, it’s just the way they are trained. These errors, however, can prove costly to a business and can lead to potential losses. What is the way to eliminate these possibilities? How do we identify and weed out such training errors or inadequacies that go a long way in determining whether an AI system can work with near 100% accuracy? These are the questions that need answering. It also leads us to the next problem that is - who takes accountability for the AI’s failure? If the AI fails or misbehaves, who takes the blame? When an AI system designed to do a particular task fails to correctly perform the required task for some reason, who is responsible? This aspect needs careful consideration and planning before any AI system can be adopted, especially on an enterprise-scale. When a business adopts an AI system, it does so assuming the system is fail-safe. However, if for some reason the AI system isn’t designed or trained effectively because either: It was not trained properly using relevant datasets The AI system was not used in a relevant context and as a result, gave inaccurate predictions Any failure like this could lead to potentially millions in losses and could adversely affect the business, not to mention have adverse unintended effects on society. Who is accountable in such cases? Is it the AI developer who designed the algorithm or the model? Or is it the end-user or the data scientist who is using the tool as a customer? Clear expectations and accountabilities need to be defined at the very outset and counter-measures need to be set in place to avoid such failovers, so that the losses are minimal and the business is not impacted severely. 
Bias in Artificial Intelligence - A key problem that needs addressing One of the key questions in adopting Artificial Intelligence systems is whether they can be trusted to be impartial, fair or neutral. In her NIPS 2017 keynote, Kate Crawford - who is a Principal Researcher at Microsoft as well as the Co-Founder & Director of Research at the AI Now institute - argues that bias in AI cannot just be treated as a technical problem; the underlying social implications need to be considered as well. For example, a machine learning software to detect potential criminals, that tends to be biased against a particular race, raises a lot of questions on its ethical credibility. Or when a camera refuses to detect a particular kind of face because it does not fit into the standard template of a human face in its training dataset, it naturally raises the racism debate. Although the AI algorithms are designed by humans themselves, it is important that the learning data used to train these algorithms is as diverse as possible, and factors in possible kinds of variations to avoid these kinds of biases. AI is meant to give out fair, impartial predictions without any preset predispositions or bias, and this is one of the key challenges that is not yet overcome by the researchers and AI developers. The problem of Artificial Intelligence in cybersecurity As AI revolutionizes the security landscape, it is also raising the bar for the attackers. With passing time it is getting more difficult to breach security systems. To tackle this, attackers are resorting to adopting state-of-the-art machine learning and other AI techniques to breach systems, while security professionals adopt their own AI mechanisms to prevent and protect the systems from these attacks. A cybersecurity firm Darktrace reported an attack in 2017 that used machine learning to observe and learn user behavior within a network. This is one of the classic cases of facing disastrous consequences where technology falls into the wrong hands and necessary steps cannot be taken to tackle or prevent the unethical use of AI - in this case, a cyber attack. The threats posed by a vulnerable AI system with no security measures in place - it can be easily hacked into and misused, doesn’t need any new introduction. This is not a desirable situation for any organization to be in, especially when it has invested thousands or even millions of dollars into the technology. When the AI is developed, strict measures should be taken to ensure it is accessible to only a specific set of people and can be altered or changed by only its developers or by authorized personnel. Just because you can build an AI, should you? The more potent the AI becomes, the more potentially devastating its applications can be. Whether it is replacing human soldiers with AI drones, or developing autonomous weapons - the unmitigated use of AI for warfare can have consequences far beyond imagination. Earlier this year, we saw hundreds of Google employees quit the company over its ties with the Pentagon, protesting against the use of AI for military purposes. The employees were strong of the opinion that the technology they developed has no place on a battlefield, and should ideally be used for the benefit of mankind, to make human lives better. Google isn’t an isolated case of a tech giant lost in these murky waters. 
Microsoft employees too protested Microsoft’s collaboration with US Immigration and Customs Enforcement (ICE) over building face recognition systems for them, especially after the revelations that ICE was found to confine illegal immigrant children in cages and inhumanely separated asylum-seeking families at the US Mexican border. Amazon is also one of the key tech vendors of facial recognition software to ICE, but its employees did not openly pressure the company to drop the project. While these companies have assured their employees of no direct involvement, it is quite clear that all the major tech giants are supplying key AI technology to the government for defensive (or offensive, who knows) military measures. The secure and ethical use of Artificial Intelligence for non-destructive purposes currently remains one of the biggest challenges in its adoption today. Today, there are many risks and caveats associated with implementing an AI system. Given the tools and techniques we have at our disposal currently, it is far-fetched to think of implementing a flawless Artificial Intelligence within a given infrastructure. While we consider all the risks involved, it is also important to reiterate one important fact. When we look at the bigger picture, all technological advancements effectively translate to better lives for everyone. While AI has tremendous potential, whether its implementation is responsible is completely down to us, humans. Read more Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms New cybersecurity threats posed by artificial intelligence Google’s prototype Chinese search engine ‘Dragonfly’ reportedly links searches to phone numbers

Introduction to HoloLens

Packt
11 Jul 2017
10 min read
In this article by Abhijit Jana, Manish Sharma, and Mallikarjuna Rao, the authors of the book HoloLens Blueprints, we will be covering the following points to introduce you to using HoloLens for exploratory data analysis:

Digital Reality - Under the Hood
Holograms in reality
Sketching the scenarios
3D Modeling workflow
Adding Air Tap on speaker
Real-time visualization through HoloLens

(For more resources related to this topic, see here.)

Digital Reality - Under the Hood

Welcome to the world of Digital Reality. The purpose of Digital Reality is to bring immersive experiences, such as taking or transporting you to different worlds or places, letting you interact within those immersive experiences, mixing digital experiences with reality, and ultimately opening new horizons to make you more productive. Applications of Digital Reality are advancing day by day; some of them are in the fields of gaming, education, defense, tourism, aerospace, corporate productivity, enterprise applications, and so on. The spectrum and scenarios of Digital Reality are huge. In order to understand them better, they are broken down into three different categories:

Virtual Reality (VR): It is where you are disconnected from the real world and experience the virtual world. Devices available on the market for VR are Oculus Rift, Google VR, and so on. VR is the common abbreviation of Virtual Reality.
Augmented Reality (AR): It is where digital data is overlaid over the real world. Pokémon GO, one of the most famous games, is a global example of this. A device available on the market which falls under this category is Google Glass. Augmented Reality is abbreviated to AR.
Mixed Reality (MR): It spreads across the boundary of the real environment and VR. Using MR, you can have a seamless and immersive integration of the virtual and the real world. Mixed Reality is abbreviated to MR.

This topic is mainly focused on developing MR applications using Microsoft HoloLens devices. Although these technologies look similar in the way they are used, and sometimes the difference is confusing to understand, there is a very clear boundary that distinguishes these technologies from each other. As you can see in the following diagram, there is a very clear distinction between AR and VR. However, MR has a spectrum that overlaps the boundaries of the real world, AR, and VR.

Digital Reality Spectrum

The following table describes the differences between the three:

Virtual Reality: Complete virtual world. The user is completely isolated from the real world. Device examples: Oculus Rift and Google VR.
Augmented Reality: Overlays data over the real world. Often used for mobile devices. Device example: Google Glass. Application example: Pokémon GO.
Mixed Reality: Seamless integration of the real and virtual world. The virtual world interacts with the real world. Natural interactions. Device examples: HoloLens and Meta.

Holograms in reality

Till now, we have mentioned Holograms several times. It is evident that these are crucial for HoloLens and Holographic apps, but what is a Hologram? Holograms are virtual objects made up of light and sound that blend with the real world to give us an immersive MR experience across both the real and virtual worlds. In other words, a Hologram is an object like any other real-world object; the only difference is that it is made up of light rather than matter. The technology behind making holograms is known as Holography. 
The following figure represent two holographic objects placed on the top of a real-size table and gives the experience of placing a real object on a real surface: Holograms objects in real environment Interacting with holograms There are basically five ways that you can interact with holograms and HoloLens. Using your Gaze, Gesture, and Voice and with spatial audio and spatial mapping. Spatial mapping provides a detailed illustration of the real-world surfaces in the environment around HoloLens. This allows developers to understand the digitalized real environments and mix holograms into the world around you. Gaze is the most usual and common one, and we start the interaction with it. At any time, HoloLens would know what you are looking at using Gaze. Based on that, the device can take further decisions on the gesture and voice that should be targeted. Spatial audio is the sound coming out from HoloLens and we use spatial audio to inflate the MR experience beyond the visual. HoloLens Interaction Model Sketching the scenarios The next step after elaborating scenario details is to come up with sketches for this scenario. There is a twofold purpose for sketching; first, it will be input to the next phase of asset development for the 3D Artist, as well as helping to validate requirements from the customer, so there are no surprises at the time of delivery. For sketching, either the designer can take it up on their own and build sketches, or they can take help from the 3D Artist. Let's start with the sketch for the primary view of the scenario, where the user is viewing the HoloLens's hologram: Roam around the hologram to view it from different angles Gaze at different interactive components Sketch for user viewing hologram for the HoloLens Sketching - interaction with speakers While viewing the hologram, a user can gaze at different interactive components. One such component, identified earlier, is the speaker. At the time of gazing at the speaker, it should be highlighted and the user can then Air Tap at it. The Air Tap action should expand the speaker hologram and the user should be able to view the speaker component in detail. Sketch for expanded speakers After the speakers are expanded, the user should be able to visualize the speaker components in detail. Now, if the user Air Taps on the expanded speakers, the application should do the following: Open the textual detail component about the speakers; the user can read the content and learn about the speakers in detail Start voice narration, detailing speaker details The user can also Air Tap on the expanded speaker component, and this action should close the expanded speaker Textual and voice narration for speaker details  As you did sketching for the speakers, apply a similar approach and do sketching for other components, such as lenses, buttons, and so on. 3D Modeling workflow Before jumping to 3D Modeling, let's understand the 3D Modeling workflow across different tools that we are going to use during the course of this topic. The following diagram explains the flow of the 3D Modeling workflow: Flow of 3D Modeling workflow Adding Air Tap on speaker In this project, we will be using the left-side speaker for applying Air Tap on speaker. However, you can apply the same for the right-side speaker as well. Similar to Lenses, we have two objects here which we need to identify from the object explorer. 
Navigate to Left_speaker_geo and left_speaker_details_geo in the Object Hierarchy window
Tag them as LeftSpeaker and speakerDetails respectively

By default, when you are just viewing the Holograms, we will be hiding the speaker details section. This section only becomes visible when we do the Air Tap, and goes back again when we Air Tap again:

Speaker with Box Collider

Add a new script inside the Scripts folder, and name it ShowHideBehaviour. This script will handle the Show and Hide behaviour of the speakerDetails game object. Use the following script inside the ShowHideBehaviour.cs file. We can reuse this script for any other object we want to show or hide.

public class ShowHideBehaviour : MonoBehaviour
{
    public GameObject showHideObject;
    public bool showhide = false;

    private void Start()
    {
        try
        {
            // Enable or disable the target object's renderer based on the initial flag
            MeshRenderer render = showHideObject.GetComponentInChildren<MeshRenderer>();
            if (render != null)
            {
                render.enabled = showhide;
            }
        }
        catch (System.Exception)
        {
        }
    }
}

The script finds the MeshRenderer component from the gameObject and enables or disables it based on the showhide property. In this script, the properties are exposed as public, so that you can provide the reference of the object from the Unity scene itself. Attach ShowHideBehaviour.cs as a component on the object tagged speakerDetails. Then drag and drop the object into the showHideObject property section. This just takes the reference of the current speaker details object and hides it in the first instance.

Attach show-hide script to the object

By default, showhide is unchecked (set to false), so the object will be hidden from view. At this point in time, you must check left_speaker_details_geo back on, as we are now handling visibility using code. Now, in the Air Tap event handler, we can enable the renderer to make the object visible.

Add a new script by navigating from the context menu Create | C# Scripts, and name it SpeakerGestureHandler. Open the script file in Visual Studio. Similar to ShowHideBehaviour, by default, the SpeakerGestureHandler class inherits from MonoBehaviour. In the next step, implement the InputClickHandler interface in the SpeakerGestureHandler class. This interface defines the OnInputClicked() method, which is invoked on click input. So, whenever you do an Air Tap gesture, this method is invoked.

RaycastHit hit;
bool isTapped = false;

public void OnInputClicked(InputEventData eventData)
{
    hit = GazeManager.Instance.HitInfo;
    if (hit.transform.gameObject != null)
    {
        isTapped = !isTapped;
        var lftSpeaker = GameObject.FindWithTag("LeftSpeaker");
        var lftSpeakerDetails = GameObject.FindWithTag("speakerDetails");
        MeshRenderer render = lftSpeakerDetails.GetComponentInChildren<MeshRenderer>();
        if (isTapped)
        {
            // Slide the speaker out and reveal the details
            lftSpeaker.transform.Translate(0.0f, -1.0f * Time.deltaTime, 0.0f);
            render.enabled = true;
        }
        else
        {
            // Slide the speaker back and hide the details
            lftSpeaker.transform.Translate(0.0f, 1.0f * Time.deltaTime, 0.0f);
            render.enabled = false;
        }
    }
}

When the tap is recognized on the gazed object, we find the game objects for both LeftSpeaker and speakerDetails by their tag names. For the LeftSpeaker object, we apply a transformation based on whether it is tapped or not, which works just like what we did for the lenses. In the case of the speaker details object, we also take the reference of its MeshRenderer to toggle its visibility to true or false based on the Air Tap. Attach the SpeakerGestureHandler class to the left speaker game object.

Air Tap in speaker – see it in action

The Air Tap action for the speaker is also done. Save the scene, then build and run the solution in the emulator once again. 
When you can see the cursor on the speaker, perform an Air Tap.

Default View and Air Tapped View

Real-time visualization through HoloLens

We have learned about the data ingress flow, where devices connect to the IoT Hub and Stream Analytics processes the stream of data and pushes it to storage. Now, in this section, let's discuss how this stored data will be consumed for data visualization within the holographic application.

Solution to consume data through services

Summary

In this article, we introduced HoloLens by exploring Digital Reality - Under the Hood, Holograms in reality, Sketching the scenarios, the 3D Modeling workflow, Adding Air Tap on the speaker, and Real-time visualization through HoloLens.

Resources for Article: Further resources on this subject: Creating Controllers with Blueprints [article] Raspberry Pi LED Blueprints [article] Exploring and Interacting with Materials using Blueprints [article]

Configuring Extra Features

Packt
27 Jan 2016
10 min read
In this article by Piotr J Kula, the author of the book Raspberry Pi 2 Server Essentials, you will learn how to keep the Pi up-to-date and use the extra features of the GPU. There are some extra features on the Broadcom chip that can be used out of box or activated using extra licenses that can be purchased. Many of these features are undocumented and found by developers or hobbyists working on various projects for the Pi. (For more resources related to this topic, see here.) Updating the Raspberry Pi The Pi essentially has three software layers: the closed source GPU boot process, the boot loader—also known as the firmware, and the operating system. As of writing this book, we cannot update the GPU code. But maybe one day, Broadcom or hardware hackers will tell us how do to this. This leaves us with the firmware and operating system packages. Broadcom releases regular updates for the firmware as precompiled binaries to the Raspberry Pi Foundation, which then releases it to the public. The Foundation and other community members work on Raspbian and release updates via the aptitude repository; this is where we get all our wonderful applications from. It is essential to keep both the firmware and packages up-to-date so that you can benefit from bug fixes and new or improved functionality from the Broadcom chip. The Raspberry Pi 2 uses ARMv7 as opposed to the Pi 1, which uses ARMv6. It recommended using the latest version of Raspbian release to benefit from the speed increase. Thanks to the ARMv7 upgrade as it now supports standard Debian Hard Float packages and other ARMv7 operating systems, such as Windows IoT Core. Updating firmware Updating the firmware used to be quite an involved process, but thanks to a user on GitHub who goes under by the alias of Hexxeh. He has made some code to automatically do this for us. You don't need to run this as often as apt-update, but if you constantly upgrade the operating system, you may need to run this if advised, or when you are experiencing problems with new features or instability. rpi-update is now included as standard in the Raspbian image, and we can simply run the following: sudo rpi-update After the process is complete, you will need to restart the Pi in order to load the new firmware. Updating packages Keeping Raspbian packages up-to-date is also very important, as many changes might work together with fixes published in the firmware. Firstly, we update the source list, which downloads a list of packages and their versions to the aptitude cache. Then, we run the upgrade command that will compare the packages, which are already installed. It will also compare their dependencies, and then it downloads and updates them accordingly: sudo apt-get update sudo apt-get upgrade If there are major changes in the libraries, updating some packages might break your existing custom code or applications. If you need to change anything in your code before updating, you should always check the release notes. Updating distribution We may find that running the firmware update process and package updates does not always solve a particular problem. If you use a release, such as debian-armhf, you can use the following commands without the need to set everything up again: sudo apt-get dist-upgrade sudo apt-get install raspberrypi-ui-mods Outcomes If you have a long-term or production project that will be running independently, it is not a good idea to login from time to time to update the packages. 
With Linux, it is acceptable to configure your system and let it run for long periods of time without any software maintenance. You should be aware of critical updates and evaluate whether you need to install them. For example, consider the Heartbleed vulnerability in OpenSSL. If you had a Pi directly connected to the public internet, this would require instant action. Windows users are conditioned to update frequently, and it is very rare that something will go wrong. On Linux, though, running updates will update your software and operating system components, which could cause incompatibilities with other custom software. For example, say you used an open source CMS web application to host some of your articles. It was specifically designed for PHP version x, but upgrading to version y also requires the entire CMS system to be upgraded. Sometimes, less popular open source projects may take several months before the code gets refactored to work with the latest PHP version, and consequently, unknowingly upgrading to the latest PHP may completely or partially break your CMS. One way to work around this is to clone your SD card and perform the updates on one card. If any issues are encountered, you can easily go back and use the other SD card.

A distribution called CentOS tries to deal with this problem by releasing updates once a year. This is deliberate, to make sure that everybody has enough time to test their software before doing a full update, with minimal or even no breaking changes. Unfortunately, CentOS has no ARM support, but you can follow this guideline by updating packages only when you need them.

Hardware watchdog

A hardware watchdog is a digital clock that needs to be regularly restarted before it reaches a certain time. Just as in the TV series LOST, there is a dead man's switch hidden on the island that needs to be pressed at regular intervals; otherwise, an unknown event will begin. In terms of the Broadcom GPU, if the switch is not pressed, it means that the system has stopped responding, and the reaction event is used to restart the Raspberry Pi and reload the operating system with the expectation that it will, at least temporarily, resolve the issue.

Raspbian includes a kernel module, disabled by default, that deals with the watchdog hardware. A configurable daemon runs on the software layer and sends regular events (like pressing a button), referred to as a heartbeat, to the watchdog via the kernel module.

Enabling the watchdog and daemon

To get everything up and running, we need to do a few things as follows:

Add the following in the console:
sudo modprobe bcm2708_wdog
sudo vi /etc/modules

Add the line bcm2708_wdog to the file, then save and exit by pressing ESC and typing :wq.

Next, we need to install the daemon that will send the heartbeat signals every 10 seconds. We use chkconfig and add it to the startup process. Then, we enable it as follows:
sudo apt-get install watchdog chkconfig
sudo chkconfig --add watchdog
chkconfig watchdog on

We can now configure the daemon to do simple checks. Edit the following file:
sudo vi /etc/watchdog.conf

Uncomment the max-load-1 = 24 and watchdog-device lines by removing the hash (#) character. A max load of 24 means the watchdog will react when the 1-minute load average reaches a level that would take roughly 24 Pis to work through. In normal usage, this will never happen and would only really occur when the Pi is hung. You can now start the watchdog with that configuration. 
Each time you change something, you need to restart the watchdog:
sudo /etc/init.d/watchdog start

There are some other examples in the configuration file that you may find of interest.

Testing the watchdog

In Linux, you can easily run a command in the background as a separate process by using the & character on the command line. By exploiting this feature together with some anonymous functions, we can issue a very crude but effective system halt. This is a quick way to test whether the watchdog daemon is working correctly, and it should not be used as a way to halt the Pi. It is known as a fork bomb, and many operating systems are susceptible to it. The random-looking series of characters below is actually a function definition that keeps creating new copies of itself. This is an endless and uncontrollable loop. Most likely, it is called a bomb because once it starts, it cannot be stopped. Even if you try to kill the original process, it has already created several new processes that also need to be killed. It is just impossible to stop, and eventually, it bombs the system into a critical state by exhausting the available processes and system resources.

Type these characters into the command line and press Enter:
: (){ :|:& };:

After you press Enter, the Pi will restart after about 30 seconds, but it might take up to a minute.

Enabling extra decoders

The Broadcom chip actually has extra hardware for encoding and decoding a few other well-known formats. The Raspberry Pi Foundation did not include these licenses because they wanted to keep the costs down to a minimum, but they have included the H.264 license. This allows you to watch HD media on your TV, use the webcam module, or transcode media files. If you would like to use these extra encoders/decoders, they did provide a way for users to buy separate licenses. At the time of writing this book, the only project to use these hardware codecs was the OMXPlayer project maintained by XBMC. The latest Raspbian package has the OMX package included.

Buying licenses

You can go to http://www.raspberrypi.com/license-keys/ to buy licenses that can be used once per device. Follow the instructions on the website to get your license key.

MPEG-2

This is also known as H.222/H.262. It is the standard for video and audio encoding that is widely used by digital television, cable, and satellite TV. It is also the format used to store video and audio data on DVDs. This means that watching DVDs from a USB DVD-ROM drive should be possible without any CPU overhead whatsoever. Unfortunately, there is no package that uses this hardware directly, but hopefully, in the near future, it will be as simple as buying this license, which will allow us to watch DVDs or video streams in this format with ease.

VC-1

VC-1 is formally known as SMPTE421M and was developed by Microsoft. Today, it is the official video format used on the Xbox and Silverlight frameworks. The format is supported by HD-DVD and Blu-ray players. The main use for this codec is to watch Silverlight-packaged media; its popularity has grown over the years but it is still not very widespread. This codec may need to be purchased if you would like to stream video using the Windows 10 IoT API.

Hardware monitoring

The Raspberry Pi Foundation provides a tool called vcgencmd, which gives you detailed data about the various hardware used in the Pi. 
This tool is updated from time to time and can be used to log the temperature of the GPU, voltage levels, processor frequencies, and so on.

To see a list of supported commands, we type this in the console:
vcgencmd commands

As newer versions are released, there will be more commands available here.

To check the current GPU temperature, we use the following command:
vcgencmd measure_temp

We can use the following commands to check how RAM is split between the CPU and GPU:
vcgencmd get_mem arm
vcgencmd get_mem gpu

To check the firmware version, we can use the following command:
vcgencmd version

The output of all these commands is simple text that can be parsed and displayed on a website or stored in a database.

Summary

This article's intention was to teach you how hardware relies on good software, but most importantly, its intention was to show you how to leverage the hardware using ready-made software packages. For reference, you can go to the following link: http://www.elinux.org/RPI_vcgencmd_usage

Resources for Article: Further resources on this subject: Creating a Supercomputer [article] Develop a Digital Clock [article] Raspberry Pi and 1-Wire [article]

Implementing Dependency Injection in Swift [Tutorial]

Bhagyashree R
11 Feb 2019
14 min read
In software development, it's always recommended to split the system into loosely coupled modules that can work independently as much as they can. Dependency Injection (DI) is a pattern that helps to reach this goal, creating a maintainable and testable system. It is often confused with complex and over-configurable frameworks that permit us to add DI to our code; in reality, it is a simple pattern that can be added without too much effort. This article is taken from the book Hands-On Design Patterns with Swift by Florent Vilmart, Giordano Scalzo, and Sergio De Simone.  This book demonstrates how to apply design patterns and best practices in real-life situations, whether that's for new or already existing Swift projects. You’ll begin with a quick refresher on Swift, the compiler, the standard library, and the foundation, followed by the Cocoa design patterns to follow up with the creational, structural, and behavioral patterns as defined by the GoF.  To follow along with the examples implemented in this article, you can download the code from the book’s GitHub repository. In this article, we'll see what Dependency Injection is, where it comes, and how it's defined so that we can then discuss various methods to implement it, having a clear understanding of its principles. Dependency Injection, a primer Dependency Injection is one of the most misunderstood concepts in computer programming. This is because the Dependency Injection borders are quite blurry and they could overlap with other object-oriented programming concepts. Let's start with a formal definition given by Wikipedia: "In software engineering, Dependency Injection is a software design pattern that implements inversion of control for resolving dependencies." To be honest, this is not really clear: what is Inversion of Control? Why is it useful for resolving dependencies? In procedural programming, each object interacts with all of its collaborators in a direct way and also instantiates them directly. In Inversion Of Control, this flow is managed by a third party, usually, a framework that calls the objects and receives notifications. An example of this is an implementation of a UI engine. In a UI Engine, there are two parts: the Views and the Models part. The Views part handles all the interaction with the users, such as tapping buttons and rendering labels, whereas the Models part is responsible for business logic. Usually, the application code goes in the Models part, and the connections with the Views are done via callbacks that are called by the engine when the user interacts with a button or a text field. The paradigm changes from an imperative style where the algorithm is a sequence of actions, like in do this then do that, to an event style, when the button is tapped then call the server. The control of the actions is thus inverted. Instead of being the model that does things, the model now receives calls. Inversion of Control is often called Hollywood Principle. The essence of this principle is, "Don't call us, we'll call you," which is a response you might hear after auditioning for a role in Hollywood. In procedural programming, the flow of the program is determined by the modules that are statically connected together: ContactsView talks to ContactsCoreData and  ContactsProductionRemoteService, and each object instantiate its next collaborator. In Inversion of Control, ContactsView talks to a generic ContactsStore and a generic ContactsRemoteService whose concrete implementation could change depending on the context. 
If it is during the tests, an important role is played by the entity that manages how to create and connect all the objects together. After having defined the concept of IoC, let's give a simpler definition of DI by James Shore: "Dependency Injection" is a 25-dollar term for a 5-cent concept. [...] Dependency Injection means giving an object its instance variables. Really. That's it." The first principle of the book Design Patterns by the Gang of Four is "Program to an interface, not an implementation" which means that the objects need to know each other only by their interface and not by their implementation. After having defined how all the classes in software will collaborate with each other, this collaboration can be designed as a graph. The graph could be implemented connecting together the actual implementation of the classes, but following the first principle mentioned previously, we can do it using the interfaces of the same objects: the Dependency Injection is a way of building this graph passing the concrete classes to the objects. Four ways to use Dependency Injection Dependency Injection is used ubiquitously in Cocoa too, and in the following examples, we'll see code snippets both from Cocoa and typical client-side code. Let's take a look at the following four sections to learn how to use Dependency Injection. Constructor Injection The first way to do DI is to pass the collaborators in the constructor, where they are then saved in private properties. Let's have as an example on e-commerce app, whose Basket is handled both locally and remotely. The BasketClient class orchestrates the logic, saves locally in BasketStore, and synchronizes remotely with BasketService: protocol BasketStore { func loadAllProduct() -> [Product] func add(product: Product) func delete(product: Product) } protocol BasketService { func fetchAllProduct(onSuccess: ([Product]) -> Void) func append(product: Product) func remove(product: Product) } struct Product { let id: String let name: String //... } Then in the constructor of BasketClient, the concrete implementations of the protocols are passed: class BasketClient { private let service: BasketService private let store: BasketStore init(service: BasketService, store: BasketStore) { self.service = service self.store = store } func add(product: Product) { store.add(product: product) service.append(product: product) calculateAppliedDiscount() //... } // ... private func calculateAppliedDiscount() { // ... } } In Cocoa and Cocoa Touch, the Apple foundation libraries, there are a few examples of this pattern. A notable example is NSPersistentStore in CoreData: class NSPersistentStore: NSObject { init(persistentStoreCoordinator root: NSPersistentStoreCoordinator?, configurationName name: String?, URL url: NSURL, options: [NSObject: AnyObject]?) var persistentStoreCoordinator: NSPersistentStoreCoordinator? { get } } In the end, Dependency Injection as defined by James Shore is all here: define the collaborators with protocols and then pass them in the constructor. This is the best way to do DI. After the construction, the object is fully formed and it has a consistent state. Also, by just looking at the signature of init, the dependencies of this object are clear. Actually, the Constructor Injection is not only the most effective, but it's also the easiest. The only problem is who has to create the object graph? The parent object? The AppDelegate? We'll discuss that point in the Where to bind the dependencies section. 
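To make this concrete, here is a minimal sketch of what such a composition root could look like, wiring up the BasketClient from the previous snippet behind its protocols. The concrete types CoreDataBasketStore and HTTPBasketService are hypothetical placeholders for whatever real implementations a project provides; they are not taken from the book.

final class CompositionRoot {
    // The only place in the app that knows about concrete implementations;
    // everything else depends on the BasketStore and BasketService protocols.
    static func makeBasketClient() -> BasketClient {
        let store: BasketStore = CoreDataBasketStore()   // hypothetical concrete store
        let service: BasketService = HTTPBasketService() // hypothetical concrete service
        return BasketClient(service: service, store: store)
    }
}

// Typically invoked once, early in the app's life cycle, for example:
// let basketClient = CompositionRoot.makeBasketClient()

Because only this one factory knows the concrete types, swapping the store for an in-memory test double, or the service for a stub, later becomes a one-line change.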
Property Injection We have already agreed that Construction Injection is the best way to do DI, so why bother finding other methods? Well, it is not always possible to define the constructor the way we want. A notable example is doing DI with ViewControllers that are defined in storyboards. Given we have a BasketViewController that orchestrates the service and the store, we must pass them as properties: class BasketViewController: UIViewController { var service: BasketService? var store: BasketStore? // ... } This pattern is less elegant than the previous one: The ViewController isn't in the right state until all the properties are set Properties introduce mutability, and immutable classes are simpler and more efficient The properties must be defined as optional, leading to add question marks everywhere They are set by an external object, so they must be writeable and this could potentially permit something else to overwrite the value set at the beginning after a while There is no way to enforce the validity of the setup at compile-time However, something can be done: The properties can be set as implicitly unwrapped optional and then required in viewDidLoad. This is as a static check, but at least they are checked at the first sensible opportunity, which is when the view controller has been loaded. A function setter of all the properties prevents us from partially defining the collaborator list. The class BasketViewController must then be written as: class BasketViewController: UIViewController { private var service: BasketService! private var store: BasketStore! func set(service: BasketService, store: BasketStore) { self.service = service self.store = store } override func viewDidLoad() { super.viewDidLoad() precondition(service != nil, "BasketService required") precondition(store != nil, "BasketStore required") // ... } } The Properties Injection permits us to have overridable properties with a default value. This can be useful in the case of testing. Let's consider a dependency to a wrapper around the time: class CheckoutViewController: UIViewController { var time: Time = DefaultTime() } protocol Time { func now() -> Date } struct DefaultTime: Time { func now() -> Date { return Date() } } In the production code, we don't need to do anything, while in the testing code we can now inject a particular date instead of always return the current time. This would permit us of testing how the software will behave in the future, or in the past. A dependency defined in the same module or framework is Local. When it comes from another module or framework, it's Foreign. A Local dependency can be used as a default value, but a Foreign cannot, otherwise it would introduce a strong dependency between the modules. Method Injection This pattern just passes a collaborator in the method: class BasketClient { func add(product: Product, to store: BasketStore) { store.add(product: product) calculateAppliedDiscount() //... } // ... private func calculateAppliedDiscount() { // ... } } This is useful when the object has several collaborators, but most of them are just temporary and it isn't worth having the relationship set up for the whole life cycle of the object. Ambient Context The final pattern, Ambient Context, is similar to the Singleton. 
We still have a single instance as a static variable, but the class has multiple subclasses with different behaviors, and each static variable is writeable with a static function: class Analytics { static private(set) var instance: Analytics = NoAnalytics() static func setAnaylics(analitics: Analytics) { self.instance = analitics } func track(event: Event) { fatalError("Implement in a subclass") } } class NoAnalytics: Analytics { override func track(event: Event) {} } class GoogleAnalytics: Analytics { override func track(event: Event) { //... } } class AdobeAnalytics: Analytics { override func track(event: Event) { //... } } struct Event { //... } This pattern should be used only for universal dependencies, representing some cross-cutting concerns, such as analytics, logging, and times and dates. This pattern has some advantages. The dependencies are always accessible and don't need to change the API. It works well for cross-cutting concerns, but it doesn't fit in other cases when the object isn't unique. Also, it makes the dependency implicit and it represents a global mutable state that sometimes can lead to issues that are difficult to debug. DI anti-patterns When we try to implement a new technique, it is quite easy to lose control and implement it in the wrong way. Let's see then the most common anti-patterns in Dependency Injection. Control Freak The first one is pretty easy to spot: we are not using the Injection at all. Instead of being Injected, the dependency is instantiated inside the object that depends on it: class FeaturedProductsController { private let restProductsService: ProductsService init() { self.restProductsService = RestProductsService(configuration: Configuration.loadFromBundleId()) } } In this example, ProductsService could have been injected in the constructor but it is instantiated there instead. Mark Seeman, in his book Dependency Injection in .NET, Chapter 5.1 - DI anti-patterns, calls it Control Freak because it describes a class that will not relinquish its dependencies. The Control Freak is the dominant DI anti-pattern and it happens every time a class directly instantiates its dependencies, instead of relying on the Inversion of Control for that. In the case of the example, even though the rest of the class is programmed against an interface, there is no way of changing the actual implementation of ProductsService and the type of concrete class that it is, it will always be RestProductsService. The only way to change it is to modify the code and compile it again, but with DI it should be possible to change the behavior at runtime. Sometimes, someone tries to fix the Control Freak anti-pattern using the factory pattern, but the reality is that the only way to fix it is to apply the Inversion of Control for the dependency and inject it in the constructor: class FeaturedProductsController { private let productsService: ProductsService init(service: ProductsService) { self.productsService = service } } As already mentioned, Control Freak is the most common DI anti-pattern; pay particular attention so you don't slip into its trap. Bastard Injection Constructor overloads are fairly common in Swift codebases, but these could lead to the Bastard Injection anti-pattern. 
A common scenario is when we have a constructor that lets us inject a Test Double, but it also has a default parameter in the constructor:

class TodosService {
    let repository: TodosRepository
    init(repository: TodosRepository = SqlLiteTodosRepository()) {
        self.repository = repository
    }
}

The biggest problem here is when the default implementation is a Foreign dependency, which is a class defined in another module; this creates a strong relationship between the two modules, making it impossible to reuse the class without including the dependent module too. The reason someone is tempted to write a default implementation is pretty obvious: it is an easy way to instantiate the class with just TodosService(), without the need for a Composition Root or something similar. However, this nullifies the benefits of DI, and it should be avoided by removing the default implementation and injecting the dependency.

Service Locator

The final anti-pattern that we will explore is the most dangerous one: the Service Locator. It's funny because this is often considered a good pattern and is widely used, even in the famous Spring framework. Originally, the Service Locator pattern was defined in Microsoft patterns & practices' Enterprise Library, as Mark Seeman writes in his book Dependency Injection in .NET, Chapter 5.4 - Service Locator, but now he is advocating strongly against it. Service Locator is a common name for a service that we can query for different objects that were previously registered in it. As mentioned, it is a tricky one because it makes everything seem OK, but in fact, it nullifies all the advantages of Dependency Injection:

let locator = ServiceLocator.instance
locator.register(SqlLiteTodosRepository(), forType: TodosRepository.self)

class TodosService {
    private let repository: TodosRepository
    init() {
        let locator = ServiceLocator.instance
        self.repository = locator.resolve(TodosRepository.self)
    }
}

Here we have a service locator as a singleton, with which we register the classes we want to resolve. Instead of injecting the class into the constructor, we just query the service for it. It looks like the Service Locator has all the advantages of Dependency Injection: it provides testability and extensibility, since we can use different implementations without changing the client. It also enables parallel development and separates configuration from usage. But it has some major disadvantages. With DI, the dependencies are explicit; it's enough to look at the signature of the constructor or the exposed properties to understand what the dependencies of a class are. With a Service Locator, these dependencies are implicit, and the only way to find them is to inspect the implementation, which breaks encapsulation. Also, all the classes depend on the Service Locator, and this makes the code tightly coupled with it. If we want to reuse a class, besides that class we also need to add the Service Locator to our project; it could live in a different module, forcing us to add the whole module as a dependency when we only wanted to use one class. The Service Locator can also give us the impression that we are not using DI at all, because all the dependencies are hidden inside the classes. 
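By contrast, plain constructor injection keeps the dependencies explicit and makes them trivial to replace in a test. As a rough sketch, reusing the BasketStore and BasketService protocols from the constructor injection example (the test double types below, and the two-property Product initializer, are assumptions made for illustration, not code from the book):

// In-memory test doubles conforming to the protocols defined earlier.
final class InMemoryBasketStore: BasketStore {
    private(set) var products: [Product] = []
    func loadAllProduct() -> [Product] { return products }
    func add(product: Product) { products.append(product) }
    func delete(product: Product) { products.removeAll { $0.id == product.id } }
}

final class SpyBasketService: BasketService {
    private(set) var appendedProducts: [Product] = []
    func fetchAllProduct(onSuccess: ([Product]) -> Void) { onSuccess([]) }
    func append(product: Product) { appendedProducts.append(product) }
    func remove(product: Product) { }
}

// A test can now exercise BasketClient without a database or the network:
let client = BasketClient(service: SpyBasketService(), store: InMemoryBasketStore())
client.add(product: Product(id: "42", name: "Book"))

No registration step and no hidden lookups: everything the client needs is visible in its initializer, which is exactly what the Service Locator takes away.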
From learning about the most sought-after design patterns to comprehensive coverage of architectural patterns and code testing, this book is all you need to write clean, reusable code in Swift. Implementing Dependency Injection in Google Guice [Tutorial] Implementing Dependency Injection in Spring [Tutorial] Dagger 2.17, a dependency injection framework for Java and Android, is now out!

App and web development in 2019: What we loved and what mattered

Richard Gall
17 Dec 2019
10 min read
For app and web developers, the world at the end of the decade is very different to the one that began it. Sure, change is inevitable, but the way the discipline(s) have evolved in just a matter of years (arguably the most significant changes came in the latter half of the decade) is a mark of how technologies, business needs, customer expectations, and harsh economic realities have conspired to shape and remold our notion of what software development actually looks like. Full-stack, cloud-native, DevOps (and maybe even ‘NoOps’): all these things have been shaping the way app and web developers work over the last ten years. And in 2019 it feels like that new world is beginning to settle into a specific pattern. Many of the trends and technologies that really defined 2019 are, in truth, trends that have been nascent and emerging for a number of years. Cloud and microservices When cloud first emerged - at some point much earlier this decade - it was largely just about resource efficiency. The idea was to ditch your on premises servers and move instead to a model whereby you rent server space from big vendors. Okay, perhaps that’s a somewhat crude summation; but it’s nevertheless the case that cloud was primarily a field dealt with by administrators and IT professionals, rather than developers. Today, of course, cloud is having a very real impact on the way developers work, giving a degree of agility and flexibility in how software is deployed and managed. With cloud partnering nicely with microservices - which allow developers to break down an application into constituent parts - it’s easy to see how these two trends are getting many app and web developers excited. They shorten the development lifecycle and allow developers to get closer to their code as it runs in production. Learn cloud development - explore Packt's range of cloud bundles. Pick up 5 for $25 throughout our $5 campaign. An essential resource for microservices development: Microservices Development Cookbook. $5 for the rest of December and into January. Go and Rust The growth of Go and Rust throughout 2019 (okay, and a bit before that too) is directly related to the increasing importance of cloud and microservices in software development. Although JavaScript has been taken beyond the browser, it isn’t the best programming language for building high performance applications; that’s where the likes of Go and Rust have been taking over a not insignificant slice of the collective developer imagination. Both languages share a similar history (as this article nicely details); at a fundamental level, moreover, both also aim to build on C++, but with accessibility and safety in mind (C++ has long had a reputation for being both complicated and sometimes vulnerable to bugs and security issues). Go is likely to continue to grow at a faster rate than Rust: it’s a lot easier to use, so for web and app developers with experience in Java or JavaScript, it’s a much gentler learning curve. But this isn’t to say that Rust won’t remain a fixture for developers. Consistently ranked the ‘most loved’ language in Stack Overflow surveys, as developers seek relentless improvements to performance alongside watertight reliability and security, Rust will remain an important language in a fast-changing development world. Search Packt's extensive selection of Go eBooks and videos - $5 throughout December and into the new year. Visit the Packt store. Learn Rust with Rust Programming Cookbook. 
WebAssembly It’s impossible to talk about web and application development without mentioning WebAssembly. Arguably the full implications of WebAssembly are yet to be realised (indeed, at ReactConf 2019, Richard Feldman suggested that it was unlikely to initiate a wholesale transformation of the web - that, he believes, will take a few more years), but 2019 has been a year when it has properly started to make many developers sit up and take notice. But why is WebAssembly so exciting? Essentially, it allows you to run code on the web using multiple languages at a speed that’s almost akin to native applications. Indeed, WebAssembly is making languages like Rust more attractive to web developers. If WebAssembly is a bridge between Rust and JavaScript, Rust immediately becomes more attractive to developers who previously would have paid very little attention to it. If 2019 was the year more developers decided to take note of WebAssembly, 2020 will be the year when we start to see increased adoption. Learn WebAssembly is $5 throughout this year's $5 campaign. Get it here. State management: Redux, Flux, Vuex… For many years, MVC (Model-View-Controller) was the dominant model for managing application state. However, as applications have grown in complexity, it has become more and more difficult for us to establish a ‘single source of truth’ inside our apps.That can impact performance and can also make them harder to maintain on the development side. To tackle this, we’ve started to see a number of different patterns and frameworks emerging to help us manage application state. The growth of React has been instrumental here - as a very lightweight library it gives developers the freedom to manage application state however they choose - and it’s worth noting that Flux architecture was developed by Facebook to complement the library. Watch: Why do React developers love Redux for state management? https://www.youtube.com/watch?v=7YzgZA_hA48&feature=emb_title Following Flux we’ve also had Redux and Vuex - all of them, each with subtly different approaches, have become an essential aspect of modern web and app development. And while they might not have first emerged in 2019, it feels as though the state management discourse has hit the heights that it previously has not. If you haven’t yet had time to dive into this topic, it's well worth making sure you commit to it in 2020. Learning React with Redux and Flux [Video] is $5 - purchase it here on the Packt store. Learn Vuex with Vuex Quick Start Guide. Functional programming Functional programming is on the rise. This doesn’t however mean that purely functional languages like Haskell and Lisp are dominating the programming language landscape - in fact, it’s been said that JavaScript is now the language used for functional programming (even though it isn’t a functional language). Functional programming is popular because it can help minimize complexity and make it easier to test and reuse code. When you’re dealing with a dense codebase that grows and grows as your application scales, this is immensely valuable. It’s also worth placing functional programming in the context of managing application state. Insofar as functional programming allows you to be specific in determining how different parts of a component should interact with one another - the function is a theoretical abstraction that makes it easier to get to grips with managing the state of a complex and dynamic application. 
Get to grips with functional programming and discover how to leverage its power. Read Mastering Functional Programming. The new JavaScript framework boom I’m not sure whether JavaScript fatigue is over. On the one hand the space has coalesced around a handful of core tools and frameworks - React, GraphQL, Node.js, among a couple of others - but on the other hand, the last year (and a bit) have been characterized by many other small projects developed to support these core tools. So, while it’s maybe a little bit easier to parse the JavaScript ecosystem at pretty high level of abstraction than it was in the past, at a deeper level you have a range of tools that are designed for very specific purposes or to be used alongside some of those frameworks and tools just mentioned. Tools ranging from Koa.js (for Node), to Polymer, Nuxt, Next, Gatsby, Hugo, Vuelidate (to name just a random assortment) are all vying for developer mindshare. You could say that many of these tools are ‘second-order’ frameworks and libraries - they don’t fundamentally change the way you think about development but instead make it easier to do specific things. It’s for this reason that I’m reluctant to suggest that JavaScript fatigue will return to its former glory - this new JavaScript framework boom is very much geared towards productivity and immediate gains rather than overhauling the way you build applications because of some principled belief in the ‘right’ or ‘best’ way to do things. Learn Nuxt: pick up Build a News Feed with Nuxt 2 and Firestore [Video] for $5 before the end of the year. Get to grips with Next.js with Next.js Quick Start Guide. Learn Koa with Hands-on Server-Side Development with Koa.js [Video] Learn Gatsby with GatsbyJS: Build a PWA Blog with GraphQL, React, and WordPress [Video] GraphQL Much of this decade has been dominated by REST when it comes to APIs. But just as the so called ‘API economy’ has gone into overdrive, GraphQL has come on the scene. Adoption has been rapid, with many developers turning to it because it allows them to handle more complex and sophisticated requests at scale without writing long and confusing lines of code. This isn’t to say, of course, that GraphQL has all but killed REST. Instead, it’s more the case that GraphQL has been found to be a better tool for managing APIs in specific domains than REST. If you’re dealing with APIs that are complex in terms of the number of entities and their relationships between one another, then GraphQL can prove immensely useful. Find out how to put GraphQL to use. Pick up GraphQL Projects for $5 for the rest of December and into January. React Hooks (and Vue Hooks) Launched with React 16.8, React Hooks “let you use state and other React features without writing a class” (that’s from the project’s site). That’s a good thing because building components with a class can sometimes be somewhat inelegant. For a better explanation of the ‘point’ of React Hooks you could do a lot worse than this article. Vue Hooks is part of Vue 3.0 - this won’t be officially released until early next year. But the fact that both leading front end frameworks are taking similar approaches to improve the developer experience demonstrates that they’re responding to a need for more flexibility and control over large projects. That means 2019 has been the year that both tools have hit maturity in the web development space. Learn how React Hooks work with Packt's new React Hooks video. Conclusion The web and app development world is becoming difficult to parse. 
A few years ago discussion and debate really centered on frameworks; today it feels like there are many other elements to consider. Part of this is symptomatic of a slow DevOps revolution - the gap between build and production is smaller than it has ever been, and developers now have a significant degree of accountability and responsibility for things that were the preserve of different breeds of engineers and IT professionals. Perhaps that story is a bit of a simplification - however, it’s hard to dispute that the web and app developer skill set is incredibly diverse. That means there are an array of options and opportunities out there for those developers looking to push their careers forward, but it also means that they’ll need to do some serious decision making about what they want to do and how they want to do it.

Apple’s March Event: Apple changes gears to services, is now your bank, news source, gaming zone, and TV

Sugandha Lahoti
26 Mar 2019
7 min read
Apple’s main business model has always been hardware-centric - to sell phones and computers. However, in light of the recent news of Apple’s iPhone sales dwindling, the company is now shifting its focus to other means of revenue growth to keep its consumers occupied in the world of Apple. That is exactly what happened yesterday when Apple unveiled a set of new services at the Apple March Event. Gadi Schwartz, NBC News correspondent rightly sums up Apple’s latest plan. https://twitter.com/GadiNBC/status/1110270953001410560 Here’s the detailed report. Video subscription service: Apple TV+ The Apple TV+ is a new television subscription service (yes in the likes of Netflix and Amazon Prime) which will give subscribers access to the many shows the company has been developing. Apple TV+, the company says, “will become the new home for the world’s most creative storytellers featuring exclusive original shows, movies, and documentaries.”  Apple plans to launch Apple TV+ in over 100 countries sometime this fall, though it did not disclose the pricing. The subscription service will be ad-free, available on demand, and viewable both online and offline. https://youtu.be/Bt5k5Ix_wS8 Apple also announced Apple TV Channels as a part of the Apple TV app, which will let customers pay and watch HBO, Showtime, Starz, CBS All Access, and other services directly through the TV app. Apple TV + puts the company in direct competition to Netflix, Hulu, and Disney who also offer their own video subscription services. The company is trying to find new ways to market the Apple experience to consumers. With iPhone’s sales slowly receding, Apple’s foray into video subscription services is a new initiative to bring everyone into the walled orchids of Apple. https://twitter.com/DylanByers/status/1110144534132908037 Media subscription service: Apple news+ Next in line, is the Apple News Plus service, which adds newspapers and magazines to the Apple News app. The service costing $9.99 per month will feature almost 300 magazines and newspapers including People, Vogue, National Geographic Magazine, ELLE, Glamour, The Wall Street Journal, Los Angeles Times, and more. Surprisingly, The New York Times and The Washington Post, have opted out of joining the subscription service. Although the publishers were not authorized to speak publicly about the plans, it is speculated that opting out from this subscription service is because of two major reasons. First, Apple is asking for a cut of roughly half of the subscription revenue involved in the service. Second, Apple has also asked publishers to give unlimited access to all their content which is concerning. Combined, the subscriptions provided through Apple News+ would cost more than $8,000 per year. Apple News Plus will also come with “Live Covers,” which shows animated images instead of static photos for a magazine’s cover. https://youtu.be/Im5c5WR9vMQ Apple has been quite vocal about maintaining privacy. A striking feature of Apple news+ is the heavy emphasis on private recommendations inside the news app, including magazines. The app downloads a set of articles and manages recommendations on-device. It also does not give any data to advertisers. The company noted in the live stream, "Apple doesn't know what you read." Apple News+ is available in the U.S. and Canada. The first month is free. In Canada, the service will be offered at $12.99 per month. Later this year, Apple News+ will arrive in Europe and Australia. 
Game subscription service: Apple Arcade

Apple is now your new gaming zone with a new game subscription service, Apple Arcade. Presented as the “world’s first game subscription service for mobile, desktop, and living room”, it will feature over 100 new and exclusive games. These games will come from acclaimed indie developers and major studios, as well as renowned creators, and Apple will also be contributing to the development costs for such games. With the subscription service, players can try any game in the service without risk. Every game includes access to the full experience, including all game features, content, and future updates, with no additional purchases required. Apple says Arcade games don’t track usage, gameplay, or the titles a user plays the most. Apple Arcade will launch in fall 2019 in more than 150 countries.

As a single subscription package, Arcade may also bring premium games the traction they have been lacking otherwise. People also pointed out that Apple’s primary target for Arcade may be parents. A comment on Hacker News reads, “I think a lot of folks looking at this from the point of view of an adult gamer are missing the point: the audience for this is parents. For 10 bucks (or whatever) a month you can load the iPad up with games and not worry about microtransactions or scummy ads targeting your kids. "Curated" is a signal that can trust age recommendations and not worry about inappropriate content.”

Netizens also believe that gaming subscription services will pay out more than traditional models. “The difference between this and music is that most people do not want to hear the same songs over and over again. The $10 is spread across so many artists. Video games will capture the attention of a person for hours in a month. I can see that a big chunk of the monthly fee going to a couple of titles.”, reads a comment on Hacker News.

Payment subscription service: Apple Card

Probably the most important service: Apple is now venturing into the banking sector with a new digital credit card with simpler applications, no fees, lower interest rates, and daily rewards. The Apple Card is created in partnership with Goldman Sachs and Mastercard. It is available in two forms: first, as a digital card that users will be able to access by signing up on their iPhone in the Apple Wallet app; and second, as a physical titanium card with no credit card number, CVV, expiration date, or signature. All of the authorization information is stored directly in the Apple Wallet app.

The card makes use of machine learning and Apple Maps to label stores and categorize them based on color. Users can easily track purchases across categories like “food and drink” or “shopping.” It also has a rewards program, “Daily Cash,” which adds 2 percent of the daily purchase amount in cash to your Apple Cash account, also within the Wallet app. Purchases made through the physical card, though, will get just 1 percent cash back.

Again, privacy is the most important feature here. Apple will store spending, tracking, and other information directly on the device. Jennifer Bailey, VP of Apple Pay, said, “Apple doesn’t know what you bought, where you bought it, and how much you paid for it. Goldman Sachs will never sell your data to third parties for marketing and advertising.”

This is probably the service that has got people the most excited.

https://twitter.com/byjacobward/status/1110237925889851393

https://twitter.com/DylanByers/status/1110248441152561158

Apple Card will be available in the US this summer.
Why is Apple changing focus to services?

With iPhone sales growth slowing, Apple needs new measures to bring a large number of users into its world. What better way than to foray into subscription streaming services and premium original content? With the new announcements, Apple is indeed playing the perfect middleman between its users and TV, gaming, news, and other services, bolstering privacy as its major selling point while also earning huge revenues. As perfectly summed up by a report from NBC News, “The better Apple's suite of services — movies and shows, but also music, news, fitness tracking, mobile payments, etc. — the more revenue Apple will see from subscribers.”

Spotify files an EU antitrust complaint against Apple; Apple says Spotify’s aim is to make more money off other’s work.
Donald Trump called Apple CEO Tim Cook ‘Tim Apple’
Apple to merge the iPhone, iPad, and Mac apps by 2021

Dealing with Interrupts

Packt
02 Mar 2015
19 min read
This article is written by Francis Perea, the author of the book Arduino Essentials. In all our previous projects, we have been constantly looking for events to occur. We have been polling, but looking for events to occur takes a relatively big effort and wastes CPU cycles only to notice that nothing happened. In this article, we will learn about interrupts as a totally new way to deal with events, being notified about them instead of looking for them constantly. Interrupts may be really helpful when developing projects in which fast or unexpected events may occur, and thus we will see a very interesting project which will lead us to develop a digital tachograph for a computer-controlled motor. Are you ready? Here we go!

The concept of an interruption

As you may have intuited, an interrupt is a special mechanism the CPU incorporates to have a direct channel through which to be notified when some event occurs. Most Arduino microcontrollers have two of these:

Interrupt 0 on digital pin 2
Interrupt 1 on digital pin 3

But some models, such as the Mega2560, come with up to five interrupt pins. Once an interrupt has been notified, the CPU completely stops what it was doing and goes on to attend to it, by running a special dedicated function in our code called an Interrupt Service Routine (ISR). When I say that the CPU completely stops, I mean that even functions such as delay() or millis() won't be updated while the ISR is being executed.

Interrupts can be programmed to respond to different changes of the signal connected to the corresponding pin, and thus the Arduino language has four predefined constants to represent each of these four modes:

LOW: It will trigger the interrupt whenever the pin gets a LOW value
CHANGE: The interrupt will be triggered when the pin changes its value from HIGH to LOW or vice versa
RISING: It will trigger the interrupt when the signal goes from LOW to HIGH
FALLING: It is just the opposite of RISING; the interrupt will be triggered when the signal goes from HIGH to LOW

The ISR

The function that the CPU will call whenever an interrupt occurs is so important to the microcontroller that it has to comply with a few rules:

It can't have any parameters
It can't return anything
Only one interrupt can be executed at a time

The first two points mean that we can neither pass data to nor receive data from the ISR directly, but we have other means to achieve this communication with the function: we will use global variables for it. We can set and read a global variable inside an ISR, but even so, these variables have to be declared in a special way. We have to declare them as volatile, as we will see later on in the code.

The third point, which specifies that only one ISR can be attended to at a time, is what prevents the millis() function from being updated. The millis() function relies on an interrupt to be updated, and this doesn't happen if another interrupt is already being served. As you may understand, the ISR is critical to correct code execution in a microcontroller. As a rule of thumb, we will try to keep our ISRs as simple as possible and leave all heavyweight processing to the main loop of our code.
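Before moving on to the project, here is a minimal sketch that ties these rules together: a short ISR, a volatile counter shared with the main loop, and one of the four trigger modes. It is only an illustration under assumed wiring (an event source on digital pin 2), not part of the book's project; digitalPinToInterrupt() is the helper that recent Arduino cores recommend for mapping a pin number to its interrupt number.

// Illustrative sketch only: count FALLING edges seen on digital pin 2
volatile unsigned long eventCount = 0;   // shared with the ISR, so declared volatile

void onEvent() {
  // Keep the ISR as simple as possible: just count the event
  eventCount++;
}

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT_PULLUP);
  // Map pin 2 to its interrupt number (interrupt 0 on an Uno) and
  // trigger on the HIGH-to-LOW transition
  attachInterrupt(digitalPinToInterrupt(2), onEvent, FALLING);
}

void loop() {
  // Heavyweight work stays in the main loop. Interrupts are paused very
  // briefly so the multi-byte counter cannot change halfway through the read.
  noInterrupts();
  unsigned long snapshot = eventCount;
  interrupts();
  Serial.println(snapshot);
  delay(1000);
}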
The tachograph project

To understand and manage interrupts in our projects, I would like to offer you a very particular one, a tachograph, a device that is present in all our cars and whose mission is to account for revolutions, normally the engine revolutions, but also in brake systems such as Anti-lock Brake System (ABS) and others.

Mechanical considerations

Well, calling it mechanical perhaps is too much, but let's make some considerations regarding how we are going to make our project account for revolutions. For this example project, I have used a small DC motor driven through a small transistor and, like in lots of industrial applications, an encoded wheel is a perfect mechanism to read the number of revolutions. By simply attaching a small disc of cardboard perpendicularly to your motor shaft, it is very easy to achieve it. By using our old friend, the optocoupler, we can sense something between its two parts, even with just a piece of cardboard with a small slot in just one side of its surface.

Here, you can see the template I elaborated for such a disc; the cross in the middle will help you position the disc as perfectly as possible, that is, the cross may be as close as possible to the motor shaft. The slot has to be cut off of the black rectangle as shown in the following image:

The template for the motor encoder

Once I printed it, I glued it to another piece of cardboard to make it more resistant and glued it all to the crown already attached to my motor shaft. If yours doesn't have a surface big enough to glue the encoder disc to its shaft, then perhaps you can find a solution by using just a small piece of dough or similar to it.

Once the encoder disc is fixed to the motor and spins attached to the motor shaft, we have to find a way to place the optocoupler in a way that makes it able to read through the encoder disc slot. In my case, just a pair of drops of glue did the trick, but if your optocoupler or motor doesn't allow you to apply this solution, I'm sure that a pair of zip ties or a small piece of dough can give you another way to fix it to the motor too. In the following image, you can see my final assembled motor with its encoder disc and optocoupler ready to be connected to the breadboard through alligator clips:

The complete assembly for the motor encoder

Once we have prepared our motor encoder, let's perform some tests to see it working and begin to write code to deal with interruptions.

A simple interrupt tester

Before going deep inside the whole code project, let's perform some tests to confirm that our encoder assembly is working fine and that we can correctly trigger an interrupt whenever the motor spins and the cardboard slot passes just through the optocoupler. The only thing you have to connect to your Arduino at the moment is the optocoupler; we will now operate our motor by hand and in a later section, we will control its speed from the computer. The test's circuit schematic is as follows:

A simple circuit to test the encoder

Nothing new in this circuit, it is almost the same as the one used in the optical coin detector, with the only important and necessary difference of connecting the wire coming from the detector side of the optocoupler to pin 2 of our Arduino board, because, as said in the preceding text, interrupt 0 is available only through that pin. For this first test, we will make the encoder disc spin by hand, which allows us to clearly perceive when the interrupt triggers.
For the rest of this example, we will use the LED included with the Arduino board connected to pin 13 as a way to visually indicate that the interrupts have been triggered.

Our first interrupt and its ISR

Once we have connected the optocoupler to the Arduino and prepared things to trigger some interrupts, let's see the code that we will use to test our assembly. The objective of this simple sketch is to toggle the status of an LED every time an interrupt occurs. In the proposed tester circuit, the LED status variable will be changed every time the slot passes through the optocoupler:

/*
 Chapter 09 - Dealing with interrupts
 A simple tester
 By Francis Perea for Packt Publishing
*/

// A LED will be used to notify the change
#define ledPin 13

// Global variables we will use
// A variable to be used inside ISR
volatile int status = LOW;

// A function to be called when the interrupt occurs
void revolution(){
  // Invert LED status
  status = !status;
}

// Configuration of the board: just one output
void setup() {
  pinMode(ledPin, OUTPUT);
  // Assign the revolution() function as an ISR of interrupt 0
  // Interrupt will be triggered when the signal goes from
  // LOW to HIGH
  attachInterrupt(0, revolution, RISING);
}

// Sketch execution loop
void loop(){
  // Set LED status
  digitalWrite(ledPin, status);
}

Let's take a look at its most important aspects. The LED pin apart, we declare a variable to account for the changes occurring. It will be updated in the ISR of our interrupt; so, as I told you earlier, we declare it as follows:

volatile int status = LOW;

Following this, we declare the ISR function, revolution(), which as we already know doesn't receive any parameters nor return any value. And as we said earlier, it must be as simple as possible. In our test case, the ISR simply inverts the value of the global volatile variable to its opposite value, that is, from LOW to HIGH and from HIGH to LOW.

To allow our ISR to be called whenever an interrupt 0 occurs, in the setup() function we make a call to the attachInterrupt() function by passing three parameters to it:

Interrupt: The interrupt number to assign the ISR to
ISR: The name, without the parentheses, of the function that will act as the ISR for this interrupt
Mode: One of the already explained modes that define when exactly the interrupt will be triggered

In our case, the concrete sentence is as follows:

attachInterrupt(0, revolution, RISING);

This makes the function revolution() the ISR of interrupt 0, which will be triggered when the signal goes from LOW to HIGH.

Finally, in our main loop there is little to do: simply update the LED based on the current value of the status variable that is going to be updated inside the ISR. If everything went right, you should see the LED toggle every time the slot passes through the optocoupler as a consequence of the interrupt being triggered and the revolution() function inverting the value of the status variable that is used in the main loop to set the LED accordingly.
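A practical aside from my side rather than from the book: when the disc is turned slowly by hand, the slot edge can hover around the optocoupler's switching threshold and fire several interrupts for a single pass, which shows up as the LED flickering. If that happens with your assembly, a small time-based guard inside the ISR is one way to filter it; the 5 ms window used here is an arbitrary illustrative value.

// Hedged variation of the tester's ISR: ignore edges that arrive
// closer together than 5 ms (5000 microseconds)
volatile unsigned long lastTrigger = 0;
volatile int status = LOW;

void revolution() {
  unsigned long now = micros();     // micros() can still be read inside an ISR
  if (now - lastTrigger > 5000UL) {
    status = !status;               // same job as the original ISR
    lastTrigger = now;
  }
}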
A dial tachograph

For a more complete example in this section, we will build a tachograph, a device that will present the current revolutions per minute of the motor in a visual manner by using a dial. The motor speed will be commanded serially from our computer by reusing some of the code from our previous projects. It is not going to be very complicated if we include some way to warn about an excessive number of revolutions and even cut the engine in an extreme case to protect it, is it?

The complete schematic of such a big circuit is shown in the following image. Don't get scared by the number of components, as we have already seen them all in action before:

The tachograph circuit

As you may see, we will use a total of five pins of our Arduino board to sense and command such a set of peripherals:

Pin 2: This is the interrupt 0 pin and thus it will be used to connect the output of the optocoupler.
Pin 3: It will be used to deal with the servo to move the dial.
Pin 4: We will use this pin to activate the sound alarm once the engine current has been cut off to prevent overcharge.
Pin 6: This pin will be used to deal with the motor transistor that allows us to vary the motor speed based on the commands we receive serially. Remember to use a PWM pin if you choose to use another one.
Pin 13: Used to indicate with an LED an excessive number of revolutions per minute prior to cutting the engine off.

There are also two more pins, 0 and 1, which will be used although not physically connected, given that we are going to talk to the device serially from the computer.

Breadboard connections diagram

There are some wires crossed in the previous schematic, and perhaps you can see the connections better in the following breadboard connection image:

Breadboard connection diagram for the tachograph

The complete tachograph code

This is going to be a project full of features, and that is why it has such a number of devices to interact with. Let's summarize the features of the dial tachograph:

The motor speed is commanded from the computer via serial communication with up to five commands:
  Increase motor speed (+)
  Decrease motor speed (-)
  Totally stop the motor (0)
  Put the motor at full throttle (*)
  Reset the motor after a stall (R)
Motor revolutions will be detected and counted by using an encoder and an optocoupler
Current revolutions per minute will be visually presented with a dial operated by a servomotor
It gives a visual indication via an LED of a high number of revolutions
In case a maximum number of revolutions is reached, the motor current will be cut off and an acoustic alarm will sound

With such a number of features, it is normal that the code for this project is going to be a bit longer than our previous sketches.
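One small detail worth a look before the full listing: the dial position is produced with Arduino's map() function, which rescales the measured rpm from the 0-10000 range onto a 180-0 degree servo angle (the mapping is inverted, presumably to match how the dial face is mounted). A quick worked example of that call:

// map(value, fromLow, fromHigh, toLow, toHigh)
int angle = map(5000, 0, 10000, 180, 0);   // gives 90, the middle of the dial
myServo.write(angle);                      // assumes a Servo object named myServo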
Here is the code:

/*
 Chapter 09 - Dealing with interrupts
 Complete tachograph system
 By Francis Perea for Packt Publishing
*/

#include <Servo.h>

// The pins that will be used
#define ledPin 13
#define motorPin 6
#define buzzerPin 4
#define servoPin 3

#define NOTE_A4 440
// Milliseconds between every sample
#define sampleTime 500
// Motor speed increment
#define motorIncrement 10
// Range of valid RPMs, alarm and stop
#define minRPM 0
#define maxRPM 10000
#define alarmRPM 8000
#define stopRPM 9000

// Global variables we will use
// A variable to be used inside ISR
volatile unsigned long revolutions = 0;
// Total number of revolutions in every sample
long lastSampleRevolutions = 0;
// A variable to convert revolutions per sample to RPM
int rpm = 0;
// LED status
int ledStatus = LOW;
// An instance of the Servo class
Servo myServo;
// A flag to know if the motor has been stalled
boolean motorStalled = false;
// The current dial angle
int dialAngle = 0;
// A variable to store serial data
int dataReceived;
// The current motor speed
int speed = 0;
// A time variable to compare in every sample
unsigned long lastCheckTime;

// A function to be called when the interrupt occurs
void revolution(){
  // Increment the total number of
  // revolutions in the current sample
  revolutions++;
}

// Configuration of the board
void setup() {
  // Set output pins
  pinMode(motorPin, OUTPUT);
  pinMode(ledPin, OUTPUT);
  pinMode(buzzerPin, OUTPUT);
  // Set revolution() as ISR of interrupt 0
  attachInterrupt(0, revolution, CHANGE);
  // Init serial communication
  Serial.begin(9600);
  // Initialize the servo
  myServo.attach(servoPin);
  // Set the dial
  myServo.write(dialAngle);
  // Initialize the counter for sample time
  lastCheckTime = millis();
}

// Sketch execution loop
void loop(){
  // If we have received serial data
  if (Serial.available()) {
    // Read the next char
    dataReceived = Serial.read();
    // Act depending on it
    switch (dataReceived){
      // Increment speed
      case '+':
        if (speed<250) {
          speed += motorIncrement;
        }
        break;
      // Decrement speed
      case '-':
        if (speed>5) {
          speed -= motorIncrement;
        }
        break;
      // Stop motor
      case '0':
        speed = 0;
        break;
      // Full throttle
      case '*':
        speed = 255;
        break;
      // Reactivate motor after stall
      case 'R':
        speed = 0;
        motorStalled = false;
        break;
    }
    // Only if the motor is active set the new motor speed
    if (motorStalled == false){
      // Set the motor speed
      analogWrite(motorPin, speed);
    }
  }
  // If a sample time has passed
  // we have to take another sample
  if (millis() - lastCheckTime > sampleTime){
    // Store current revolutions
    lastSampleRevolutions = revolutions;
    // Reset the global variable
    // so the ISR can begin to count again
    revolutions = 0;
    // Calculate revolutions per minute
    rpm = lastSampleRevolutions * (1000 / sampleTime) * 60;
    // Update last sample time
    lastCheckTime = millis();
    // Set the dial according to the new reading
    dialAngle = map(rpm,minRPM,maxRPM,180,0);
    myServo.write(dialAngle);
  }
  // If the motor is running in the red zone
  if (rpm > alarmRPM){
    // Turn on LED
    digitalWrite(ledPin, HIGH);
  }
  else{
    // Otherwise turn it off
    digitalWrite(ledPin, LOW);
  }
  // If the motor has exceeded the maximum RPM
  if (rpm > stopRPM){
    // Stop the motor
    speed = 0;
    analogWrite(motorPin, speed);
    // Disable it until a 'R' command is received
    motorStalled = true;
    // Make the alarm sound
    tone(buzzerPin, NOTE_A4, 1000);
  }
  // Send data back to the computer
  Serial.print("RPM: ");
  Serial.print(rpm);
  Serial.print(" SPEED: ");
  Serial.print(speed);
  Serial.print(" STALL: ");
  Serial.println(motorStalled);
}

It is the first time in this article that I think I have nothing to explain regarding the code that hasn't already been explained before. I have commented everything so that the code can be easily read and understood.

In general terms, the code declares both the constants and global variables that will be used and the ISR for the interrupt. In the setup section, all initializations of the different subsystems that need to be set up before use are made: pins, interrupts, serial, and servo.

The main loop begins by looking for serial commands and basically updates the speed value and the stall flag if command R is received. The final motor speed setting only occurs if the stall flag is not set, which happens when the motor reaches the stopRPM value.

Continuing with the main loop, the code checks whether a sample time has passed, in which case the revolutions are stored to compute the real revolutions per minute (rpm), and the global revolutions counter incremented inside the ISR is reset to 0 to begin again. For example, with sampleTime set to 500 milliseconds, a sample in which the ISR counted 6 revolutions gives rpm = 6 * (1000 / 500) * 60 = 720. The current rpm value is mapped to an angle to be presented by the dial, and thus the servo is set accordingly.

Next, a pair of controls is made:

One to see if the motor is getting into the red zone by exceeding the alarmRPM value, in which case the alarm LED is turned on
And another to check if the stopRPM value has been reached, in which case the motor will be automatically cut off, the motorStalled flag is set to true, and the acoustic alarm is triggered

When the motor has been stalled, it won't accept changes in its speed until it has been reset by issuing an R command via serial communication.

As the last action, the code sends some info back to the Serial Monitor as another way of giving feedback to the operator at the computer, and this should look something like the following screenshot:

Serial Monitor showing the tachograph in action

Modular development

It has been quite a complex project in that it incorporates up to six different subsystems: optocoupler, motor, LED, buzzer, servo, and serial, but it has also helped us to understand that projects need to be developed using a modular approach. We have worked with and tested every one of these subsystems before, and that is the way it should usually be done. By developing your projects in such a modular way, it will be easy to assemble and program the whole system. As you may see in the following screenshot, only by working in such a modular way will you be able to connect and understand such a mess of wires:

A working desktop may get a bit messy

Summary

I'm sure you have got the point regarding interrupts with all the things we have seen in this article. We have met and understood what an interrupt is and how the CPU attends to it by running an ISR, and we have even learned about their special characteristics and restrictions, and that we should keep them as short as possible. On the programming side, the only thing necessary to work with interrupts is to correctly attach the ISR with a call to the attachInterrupt() function.
From the point of view of hardware, we have assembled an encoder attached to a spinning motor to account for its revolutions. Finally, there is the code. We have seen a relatively long sketch, which is a sign that we are beginning to master the platform: we are able to deal with a bigger number of peripherals, and our projects require more complex software to handle those peripherals and to accomplish all the other tasks needed to meet the project specifications.

Resources for Article:

Further resources on this subject:

The Arduino Mobile Robot? [article]
Using the Leap Motion Controller with Arduino [article]
Android and Udoo Home Automation [article]

IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever

Sugandha Lahoti
29 Oct 2018
4 min read
In probably the biggest open source acquisition ever, IBM has acquired all of the issued and outstanding common shares of Red Hat for $190.00 per share in cash, representing a total enterprise value of approximately $34 billion. However, whether this deal is more of a business proposition than a contribution to the community remains a question.

Red Hat has been struggling in the market recently. It missed its most recent revenue estimates and its guidance fell below Wall Street targets. Prior to this deal, it had a market capitalization of about $20.5 billion. With this deal, Red Hat may soon take control of its sinking ship. It will also remain a distinct unit within IBM. The company will continue to be led by Jim Whitehurst, Red Hat's CEO, and Red Hat's current management team. Jim Whitehurst will also join IBM's senior management team and report to Ginni Rometty, IBM Chairman, President, and Chief Executive Officer.

Why is Red Hat joining forces with IBM?

In the announcement, Jim assured that IBM's acquisition of Red Hat will help them accelerate without compromising their culture and policies. He said, "Open source is the default choice for modern IT solutions, and I'm incredibly proud of the role Red Hat has played in making that a reality in the enterprise.” He also added, “Joining forces with IBM will provide us with a greater level of scale, resources, and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience--all while preserving our unique culture and unwavering commitment to open source innovation."

What is IBM gaining from this acquisition?

IBM believes this acquisition to be a game changer. "It changes everything about the cloud market," said Ginni, "IBM will become the world's #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses." IBM and Red Hat will accelerate hybrid multi-cloud adoption across all companies. Together, they plan to “help clients create cloud-native business applications faster, drive greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management.”

"IBM is committed to being an authentic multi-cloud provider, and we will prioritize the use of Red Hat technology across multiple clouds," said Arvind Krishna, Senior Vice President, IBM Hybrid Cloud. "In doing so, IBM will support open source technology wherever it runs, allowing it to scale significantly within commercial settings around the world."

IBM assures that it will continue to build and enhance Red Hat's partnerships with major cloud providers. It will also remain committed to Red Hat's open governance, open source contributions, participation in the open source community, and development model. The company is keen on preserving the independence and neutrality of Red Hat's open source development culture and go-to-market strategy.

The news was well received by top Red Hat decision makers, who embraced it with open arms. However, ZDNet reported that many Red Hat employees were skeptical:

"I can't imagine a bigger culture clash."
"I'll be looking for a job with an open-source company."
"As a Red Hat employee, almost everyone here would prefer it if we were bought out by Microsoft."
People’s reactions to this acquisition on Twitter are also varied:

https://twitter.com/samerkamal/status/1056611186584604672
https://twitter.com/pnuojua/status/1056787520845955074
https://twitter.com/CloudStrategies/status/1056666824434020352
https://twitter.com/svenpet/status/1056646295002247169

Read more about the news on IBM’s newsroom.

Red Hat infrastructure migration solution for proprietary and siloed infrastructure
IBM launches Industry’s first ‘Cybersecurity Operations Center on Wheels’ for on-demand cybersecurity support
IBM Watson announces pre-trained AI tools to accelerate IoT operations

Working with Closures

Packt
03 Jan 2017
31 min read
In this article, Jon Hoffman, the author of the book Mastering Swift 3 - Linux, talks about closures. Most major programming languages have functionality similar to what closures offer. Some of these implementations are really hard to use (Objective-C blocks), while others are easy (Java lambdas and C# delegates). I found that the functionality that closures provide is especially useful when developing frameworks. I have also used them extensively when communicating with remote services over a network connection. While blocks in Objective-C are incredibly useful (and I have used them quite a bit), the syntax used to declare a block was absolutely horrible. Luckily, when Apple was developing the Swift language, they made the syntax of closures much easier to use and understand.

In this article, we will cover the following topics:

An introduction to closures
Defining a closure
Using a closure
Several useful examples of closures
How to avoid strong reference cycles with closures

An introduction to closures

Closures are self-contained blocks of code that can be passed around and used throughout our application. We can think of an Int type as a type that stores an integer, and a String type as a type that stores a string. In this context, a closure can be thought of as a type that contains a block of code. What this means is that we can assign closures to a variable, pass them as arguments to functions, and also return them from functions. Closures have the ability to capture and store references to any variable or constant from the context in which they were defined. This is known as closing over the variables or constants, and the best thing is, for the most part, Swift will handle the memory management for us. The only exception is when we create a strong reference cycle, and we will look at how to resolve this in the Creating strong reference cycles with closures section of this article.

Closures in Swift are similar to blocks in Objective-C; however, closures in Swift are a lot easier to use and understand. Let's look at the syntax used to define a closure in Swift:

{ (parameters) -> return-type in
    statements
}

As we can see, the syntax used to create a closure looks very similar to the syntax we use to create functions in Swift, and actually, in Swift, global and nested functions are closures. The biggest difference in format between closures and functions is the in keyword. The in keyword is used in place of curly brackets to separate the definition of the closure's parameter and return types from the body of the closure. There are many uses for closures, and we will go over a number of them later in this article, but first we need to understand the basics of closures. Let's start by looking at some very basic uses for closures so that we can get a better understanding of what they are, how to define them, and how to use them.

Simple closures

We will begin by creating a very simple closure that does not accept any arguments and does not return a value. All it does is print Hello World to the console. Let's take a look at the following code:

let clos1 = { () -> Void in
    print("Hello World")
}

In this example, we create a closure and assign it to the constant clos1. Since there are no parameters defined between the parentheses, this closure will not accept any parameters. Also, the return type is defined as Void; therefore, this closure will not return any value.
The body of the closure contains one line that prints Hello World to the console. There are many ways to use closures; in this example, all we want to do is execute it. We execute this closure such as this: clos1() When we execute the closure, we will see that Hello World is printed to the console. At this point, closures may not seem that useful, but as we get further along in this article, we will see how useful and powerful they can be. Let's look at another simple closure example. This closure will accept one string parameter named name, but will still not return a value. Within the body of the closure, we will print out a greeting to the name passed into the closure through the name parameter. Here is the code for this second closure: let clos2 = { (name: String) -> Void in print("Hello (name)") } The big difference between clos2 defined in this example and the previous clos1 closure is that we define a single string parameter between the parentheses. As we can see, we define parameters for closures just like we define parameters for functions. We can execute this closure in the same way in which we executed clos1. The following code shows how this is done: clos2(name: "Jon") This example, when executed, will print the message Hello Jon to the console. Let's look at another way we can use the clos2 closure. Our original definition of closures stated, closures are self-contained blocks of code that can be passed around and used throughout our application code. What this tells us is that we can pass our closure from the context that it was created in to other parts of our code. Let's look at how to pass our clos2 closure into a function. We will define a function that accepts our clos2 closure such as this: func testClosure(handler:(String)->Void) { handler("Dasher") } We define the function just like we would any other function; however, in our parameter list, we define a parameter named handler, and the type defined for the handler parameter is (String)->Void. If we look closely, we can see that the (String)->Void definition of the handler parameter matches the parameter and return types that we defined for clos2 closure. This means that we can pass the clos2 closure into the function. Let's look at how to do this: testClosure(handler: clos2) We call the testClosure() function just like any other function and the closure that is being passed in looks like any other variable. Since the clos2 closure executed in the testClosure() function, we will see the message, Hello Dasher, printed to the console when this code is executed. As we will see a little later in this article, the ability to pass closures to functions is what makes closures so exciting and powerful. As the final piece to the closure puzzle, let's look at how to return a value from a closure. The following example shows this: let clos3 = { (name: String) -> String in return "Hello (name)" } The definition of the clos3 closure looks very similar to how we defined the clos2 closure. The difference is that we changed the Void return type to a String type. Then, in the body of the closure, instead of printing the message to the console, we used the return statement to return the message. We can now execute the clos3 closure just like the previous two closures, or pass the closure to a function like we did with the clos2 closure. The following example shows how to execute clos3 closure: var message = clos3("Buddy") After this line of code is executed, the message variable will contain the Hello Buddy string. 
The previous three examples of closures demonstrate the format and how to define a typical closure. Those who are familiar with Objective-C can see that the format of closures in Swift is a lot cleaner and easier to use. The syntax for creating closures that we have shown so far in this article is pretty short; however, we can shorten it even more. In this next section, we will look at how to do this. Shorthand syntax for closures In this section, we will look at a couple of ways to shorten the definition of closures. Using the shorthand syntax for closures is really a matter of personal preference. There are a lot of developers that like to make their code as small and compact as possible and they take great pride in doing so. However, at times, this can make code hard to read and understand by other developers. The first shorthand syntax for closures that we are going to look at is one of the most popular and is the syntax we saw when we were using algorithms with arrays. This format is mainly used when we want to send a really small (usually one line) closure to a function, like we did with the algorithms for arrays. Before we look at this shorthand syntax, we need to write a function that will accept a closure as a parameter: func testFunction(num: Int, handler:()->Void) { for _ in 0..< num { handler() } } This function accepts two parameters—the first parameter is an integer named num, and the second parameter is a closure named handler that does not have any parameters and does not return any value. Within the function, we create a for loop that will use the num integer to define how many times it loops. Within the for loop, we call the handler closure that was passed into the function. Now lets create a closure and pass it to the testFunction()such as this: let clos = { () -> Void in print("Hello from standard syntax") } testFunction(num: 5,handler: clos) This code is very easy to read and understand; however, it does take five lines of code. Now, let's look at how to shorten this code by writing the closure inline within the function call: testFunction(num: 5,handler: {print("Hello from Shorthand closure")}) In this example, we created the closure inline within the function call using the same syntax that we used with the algorithms for arrays. The closure is placed in between two curly brackets ({}), which means the code to create our closure is {print("Hello from Shorthand closure")}. When this code is executed, it will print out the message, Hello from Shorthand closure, five times on the screen. Let's look at how to use parameters with this shorthand syntax. We will begin by creating a new function that will accept a closure with a single parameter. We will name this function testFunction2. The following example shows what the new testFunction2 function does: func testFunction2(num: Int, handler:(name: String)->Void) { for _ in 0..< num { handler(name: "Me") } } In testFunction2, we define our closure such as this: (name: String)->Void. This definition means that the closure accepts one parameter and does not return any value. Now, let's see how to use the same shorthand syntax to call this function: testFunction2(num: 5,handler: {print("Hello from ($0)")}) The difference between this closure definition and the previous one is $0. The $0 parameter is shorthand for the first parameter passed into the function. If we execute this code, it prints out the message, Hello from Me, five times. 
Using the dollar sign ($) followed by a number with inline closures allows us to define the closure without having to create a parameter list in the definition. The number after the dollar sign defines the position of the parameter in the parameter list. Let's examine this format a bit more because we are not limited to only using the dollar sign ($) and number shorthand format with inline closures. This shorthand syntax can also be used to shorten the closure definition by allowing us to leave the parameter names off. The following example demonstrates this: let clos5: (String, String) ->Void = { print("($0) ($1)") } In this example, our closure has two string parameters defined; however, we do not give them names. The parameters are defined such as this: (String, String). We can then access the parameters within the body of the closure using $0 and $1. Also, note that closure definition is after the colon (:), using the same syntax that we use to define a variable type, rather than inside the curly brackets. When we use anonymous arguments, this is how we would define the closure. It will not be valid to define the closure such as this: let clos5b = { (String, String) -> Void in print("($0) ($1)") } In this example, we will receive the Anonymous closure arguments cannot be used inside a closure that has explicit arguments error. We will use the clos5 closure such as this: clos5("Hello","Kara") Since Hello is the first string in the parameter list, it is accessed with $0, and as Kara is the second string in the parameter list, it is accessed with $1. When we execute this code, we will see the message Hello Kara printed to the console. This next example is used when the closure doesn't return any value. Rather than defining the return type as Void, we can use parentheses, as the following example shows: let clos6: () -> () = { print("Howdy") } In this example, we define the closure as () -> (). This tells Swift that the closure does not accept any parameters and also does not return a value. We will execute this closure such as this: clos6() As a personal preference, I am not very fond of this shorthand syntax. I think the code is much easier to ready when the void keyword is used rather than the parentheses. We have one more shorthand closure example to demonstrate before we begin showing some really useful examples of closures. In this last example, we will demonstrate how we can return a value from the closure without the need to include the return keyword. If the entire closure body consists of only a single statement, then we can omit the return keyword, and the results of the statement will be returned. Let's take a look at an example of this: let clos7 = { (first: Int, second: Int) -> Int in first + second } In this example, the closure accepts two parameters of the Int type and will return an Int type. The only statement within the body of the closure adds the first parameter to the second parameter. However, if you notice, we do not include the return keyword before the addition statement. Swift will see that this is a single statement closure and will automatically return the results, just as if we put the return keyword before the addition statement. We do need to make sure the result type of our statement matches the return type of the closure. All of the examples that were shown in the previous two sections were designed to show how to define and use closures. 
On their own, these examples did not really show off the power of closures and they did not show how incredibly useful closures are. The remainder of this article is written to demonstrate the power and usefulness of closures in Swift. Using closures with Swift's array algorithms Now that we have a better understanding of closures, let's see how we can expand on these algorithms using more advanced closures. In this section, we will primarily be using the map algorithm for consistency purposes; however, we can use the basic ideas demonstrated with any of the algorithms. We will start by defining an array to use: let guests = ["Jon", "Kim", "Kailey", "Kara"] This array contains a list of names and the array is named guests. This array will be used for all the examples in this section, except for the very last ones. Now that we have our guests array, let's add a closure that will print a greeting to each of the names in the guests array: guests.map({ (name: String) -> Void in print("Hello (name)") }) Since the map algorithm applies the closure to each item of the array, this example will print out a greeting for each name within the guests array. After the first section in this article, we should have a pretty good understanding of how this closure works. Using the shorthand syntax that we saw in the last section, we could reduce the preceding example down to the following single line of code: guests.map({print("Hello ($0)")}) This is one of the few times, in my opinion, where the shorthand syntax may be easier to read than the standard syntax. Now, let's say that rather than printing the greeting to the console, we wanted to return a new array that contained the greetings. For this, we would have returned a string type from our closure, as shown in the following example: var messages = guests.map({ (name:String) -> String in return "Welcome (name)" }) When this code is executed, the messages array will contain a greeting to each of the names in the guests array while the guests array will remain unchanged. The preceding examples in this section showed how to add a closure to the map algorithm inline. This is good if we only had one closure that we wanted to use with the map algorithm, but what if we had more than one closure that we wanted to use, or if we wanted to use the closure multiple times or reuse them with different arrays. For this, we could assign the closure to a constant or variable and then pass in the closure, using its constant or variable name, as needed. Let's see how to do this. We will begin by defining two closures. One of the closures will print a greeting for each name in the guests array, and the other closure will print a goodbye message for each name in the guests array: let greetGuest = { (name:String) -> Void in print("Hello guest named (name)") } let sayGoodbye = { (name:String) -> Void in print("Goodbye (name)") } Now that we have two closures, we can use them with the map algorithm as needed. The following code shows how to use these closures interchangeably with the guests array: guests.map(greetGuest) guests.map(sayGoodbye) Whenever we use the greetGuest closure with the guests array, the greetings message is printed to the console, and whenever we use the sayGoodbye closure with the guests array, the goodbye message is printed to the console. 
If we had another array named guests2, we could use the same closures for that array, as shown in the following example: guests.map(greetGuest) guests2.map(greetGuest) guests.map(sayGoodbye) guests2.map(sayGoodbye) All of the examples in this section so far have either printed a message to the console or returned a new array from the closure. We are not limited to such basic functionality in our closures. For example, we can filter the array within our closure, as shown in the following example: let greetGuest2 = { (name:String) -> Void in if (name.hasPrefix("K")) { print("(name) is on the guest list") } else { print("(name) was not invited") } } In this example, we print out a different message depending on whether the name starts with the letter K or not. As we mentioned earlier in the article, closures have the ability to capture and store references to any variable or constant from the context in which they were defined. Let's look at an example of this. Let's say that we have a function that contains the highest temperature for the last seven days at a given location, and this function accepts a closure as a parameter. This function will execute the closure on the array of temperature. The function can be written such as this: func temperatures(calculate:(Int)->Void) { var tempArray = [72,74,76,68,70,72,66] tempArray.map(calculate) } This function accepts a closure defined as (Int)->Void. We then use the map algorithm to execute this closure for each item of the tempArray array. The key to using a closure correctly in this situation is to understand that the temperatures function does not know or care what goes on inside the calculate closure. Also, be aware that the closure is also unable to update or change the items within the function's context, which means that the closure cannot change any other variable within the temperature's function; however, it can update variables in the context that it was created in. Let's look at the function that we will create the closure in. We will name this function testFunction. Let's take a look at the following code: func testFunction() { var total = 0 var count = 0 let addTemps = { (num: Int) -> Void in total += num count += 1 } temperatures(calculate: addTemps) print("Total: (total)") print("Count: (count)") print("Average: (total/count)") } In this function, we begin by defining two variables named total and count, where both variables are of the Int type. We then create a closure named addTemps that will be used to add all of the temperatures from the temperatures function together. The addTemps closure will also count how many temperatures there are in the array. To do this, the addTemps closure calculates the sum of each item in the array and keeps the total in the total variable that was defined at the beginning of the function. The addTemps closure also keeps track of the number of items in the array by incrementing the count variable for each item. Notice that neither the total nor count variables are defined within the closure; however, we are able to use them within the closure because they were defined in the same context as the closure. We then call the temperatures function and pass it the addTemps closure. Finally, we print the total, count, and average temperature to the console. 
When the testFunction is executed, we see the following output to the console: Total: 498 Count: 7 Average: 71 As we can see from the output, the addTemps closure is able to update and use items that are defined within the context that it was created in, even when the closure is used in a different context. Now that we have looked at using closures with the array map algorithm, let's look at using closures by themselves. We will also look at the ways we can clean up our code to make it easier to read and use. Changing functionality Closures also give us the ability to change the functionality of classes on the fly. With closures, we are able to write functions and classes whose functionality can change, based on the closure that is passed into it as a parameter. In this section, we will show how to write a function whose functionality can be changed with a closure. Let's begin by defining a class that will be used to demonstrate how to swap out functionality. We will name this class TestClass: class TestClass { typealias getNumClosure = ((Int, Int) -> Int) var numOne = 5 var numTwo = 8 var results = 0 func getNum(handler: getNumClosure) -> Int { results = handler(numOne,numTwo) return results } } We begin this class by defining a type alias for our closure that is namedgetNumClosure. Any closure that is defined as a getNumClosure closure will take two integers and return an integer. Within this closure, we assume that it does something with the integers that we pass in to get the value to return, but it really doesn't have to. To be honest, this class doesn't really care what the closure does as long as it conforms to the getNumClosure type. Next, we define three integers that are named numOne, NumTwo, and results. We also define a method named getNum(). This method accepts a closure that confirms the getNumClosure type as its only parameter. Within the getNum() method, we execute the closure by passing in the numOne and numTwo class variables, and the integer that is returned is put into the results class variable. Now, let's look at several closures that conform to the getNumClosure type that we can use with the getNum() method: var max: TestClass.getNumClosure = { if $0 > $1 { return $0 } else { return $1 } } var min: TestClass.getNumClosure = { if $0 < $1 { return $0 } else { return $1 } } var multiply: TestClass.getNumClosure = { return $0 * $1 } var second: TestClass.getNumClosure = { return $1 } var answer: TestClass.getNumClosure = { var tmp = $0 + $1 return 42 } In this code, we define five closures that conform to the getNumClosure type: max: This returns the maximum value of the two integers that are passed in min: This returns the minimum value of the two integers that are passed in multiply: This multiplies both the values that are passed in and returns the product second: This returns the second parameter that was passed in answer: This returns the answer to life, the universe, and everything In the answer closure, we have an extra line that looks like it does not have a purpose: var tmp = $0 + $1. We do this purposely because the following code is not valid: var answer: TestClass.getNumClosure = { return 42 } This class gives us the error: contextual type for closure argument list expects 2 arguments, which cannot be implicitly ignored error. As we can see by the error, Swift does not think that our closure accepts any parameters unless we use $0 and $1 within the body of the closure. 
In the closure named second, Swifts assumes that there are two parameters because $1 specifies the second parameter. We can now pass each one of these closures to the getNum method of our TestClass to change the functionality of the function to suit our needs. The following code illustrates this: var myClass = TestClass() myClass.getNum(handler: max) myClass.getNum(handler: min) myClass.getNum(handler: multiply) myClass.getNum(handler: second) myClass.getNum(handler: answer) When this code is run, we will receive the following results for each of the closures: max: results = 8 min: results = 5 multiply: results = 40 second: results = 8 answer: results = 42 The last example we are going to show you in this article is one that is used a lot in frameworks, especially the ones that have a functionality that is designed to be run asynchronously. Selecting a closure based on results In the final example, we will pass two closures to a method, and then depending on some logic, one, or possibly both, of the closures will be executed. Generally, one of the closures is called if the method was successfully executed and the other closure is called if the method failed. Let's start off by creating a class that will contain a method that will accept two closures and then execute one of the closures based on the defined logic. We will name this class TestClass. Here is the code for the TestClass class: class TestClass { typealias ResultsClosure = ((String) -> Void) func isGreater(numOne: Int, numTwo:Int, successHandler: ResultsClosure, failureHandler: ResultsClosure) { if numOne > numTwo { successHandler("(numOne) is greater than (numTwo)") } else { failureHandler("(numOne) is not greater than (numTwo)") } } } We begin this class by creating a type alias that defines the closure that we will use for both the successful and failure closures. We will name this type alias ResultsClosure. This example also illustrates why to use a type alias, rather than retyping the closure definition. It saves us a lot of typing and also prevents us from making mistakes. In this example, if we did not use a type alias, we would need to retype the closure definition four times, and if we needed to change the closure definition, we would need to change it in four spots. With the type alias, we only need to type the closure definition once and then use the alias throughout the remaining code. We then create a method named isGreater that takes two integers as the first two parameters and then two closures as the next two parameters. The first closure is named successHandler, and the second closure is named failureHandler. Within the isGreater method, we check whether the first integer parameter is greater than the second one. If the first integer is greater, the successHandler closure is executed; otherwise, the failureHandler closure is executed. Now, let's create two of our closures. The code for these two closures is as follows: var success: TestClass. ResultsClosure = { print("Success: ($0)") } var failure: TestClass. ResultsClosure = { print("Failure: ($0)") } Note that both closures are defined as the TestClass.ResultsClosure type. In each closure, we simply print a message to the console to let us know which closure was executed. Normally, we would put some functionality in the closure. 
We will then call the method with both the closures such as this: var test = TestClass() test.isGreater(numOne: 8, numTwo: 6, successHandler:success, failureHandler:failure) Note that in the method call, we are sending both the success closure and the failure closure. In this example, we will see the message, Success: 8 is greater than 6. If we reversed the numbers, we would see the message, Failure: 6 is not greater than 8. This use case is really good when we call asynchronous methods, such as loading data from a web service. If the web service call was successful, the success closure is called; otherwise, the failure closure is called. One big advantage of using closures such as this is that the UI does not freeze while we wait for the web service call to complete. As an example, try to retrieve data from a web service such as this: var data = myWebClass.myWebServiceCall(someParameter) Our UI would freeze while we wait for the response to come back, or we would have to make the call in a separate thread so that the UI would not hang. With closures, we pass the closures to the networking framework and rely on the framework to execute the appropriate closure when it is done. This does rely on the framework to implement concurrency correctly to make the calls asynchronously, but a decent framework should handle that for us. Creating strong reference cycles with closures Earlier in this article, we said, the best thing is, for the most part, Swift will handle the memory management for us. The for the most part section of the quote means that if everything is written in a standard way, Swift will handle the memory management of the closures for us. However, there are times where memory management fails us. Memory management will work correctly for all of the examples that we have seen in this article so far. It is possible to create a strong reference cycle that prevents Swift's memory management from working correctly. Let's look at what happens if we create a strong reference cycle with closures. A strong reference cycle may happen if we assign a closure to a property of a class instance and within that closure, we capture the instance of the class. This capture occurs because we access a property of that particular instance using self, such as self.someProperty, or we assign self to a variable or constant, such as let c = self. By capturing a property of the instance, we are actually capturing the instance itself, thereby creating a strong reference cycle where the memory manager will not know when to release the instance. As a result, the memory will not be freed correctly. Let's begin by creating a class that has a closure and an instance of the String type as its two properties. We will also create a type alias for the closure type in this class and define a deinit() method that prints a message to the console. The deinit() method is called when the class gets released and the memory is freed. We will know when the class gets released when the message from the deinit() method is printed to the console. This class will be named TestClassOne. Let's take a look at the following code: class TestClassOne { typealias nameClosure = (() -> String) var name = "Jon" lazy var myClosure: nameClosure = { return self.name } deinit { print("TestClassOne deinitialized") } } Now, let's create a second class that will contain a method that accepts a closure that is of the nameClosure type that was defined in the TestClassOne class. 
Creating strong reference cycles with closures

Earlier in this article, we said that the best thing is, for the most part, Swift will handle the memory management for us. The "for the most part" section of that quote means that if everything is written in a standard way, Swift will handle the memory management of the closures for us. However, there are times when memory management fails us. Memory management will work correctly for all of the examples that we have seen in this article so far, but it is possible to create a strong reference cycle that prevents Swift's memory management from working correctly. Let's look at what happens if we create a strong reference cycle with closures.

A strong reference cycle may happen if we assign a closure to a property of a class instance and, within that closure, we capture the instance of the class. This capture occurs because we access a property of that particular instance using self, such as self.someProperty, or we assign self to a variable or constant, such as let c = self. By capturing a property of the instance, we are actually capturing the instance itself, thereby creating a strong reference cycle in which the memory manager will not know when to release the instance. As a result, the memory will not be freed correctly.

Let's begin by creating a class that has a closure and an instance of the String type as its two properties. We will also create a type alias for the closure type in this class and define a deinit() method that prints a message to the console. The deinit() method is called when the class instance is released and its memory is freed, so we will know when the instance is released because the message from the deinit() method is printed to the console. This class will be named TestClassOne. Let's take a look at the following code:

class TestClassOne {
    typealias nameClosure = (() -> String)

    var name = "Jon"

    lazy var myClosure: nameClosure = {
        return self.name
    }

    deinit {
        print("TestClassOne deinitialized")
    }
}

Now, let's create a second class that contains a method that accepts a closure of the nameClosure type defined in the TestClassOne class. This class will also have a deinit() method, so we can see when it gets released. We will name this class TestClassTwo. Let's take a look at the following code:

class TestClassTwo {
    func closureExample(handler: TestClassOne.nameClosure) {
        print(handler())
    }

    deinit {
        print("TestClassTwo deinitialized")
    }
}

Now, let's see this code in action by creating instances of each class and then trying to manually release the instances by setting them to nil:

var testClassOne: TestClassOne? = TestClassOne()
var testClassTwo: TestClassTwo? = TestClassTwo()

testClassTwo?.closureExample(handler: testClassOne!.myClosure)

testClassOne = nil
print("testClassOne is gone")

testClassTwo = nil
print("testClassTwo is gone")

What we do in this code is create two optionals that may contain an instance of one of our two test classes or nil. We need to create these variables as optionals because we will be setting them to nil later in the code so that we can see whether the instances are released properly.

We then call the closureExample() method of the TestClassTwo instance and pass it the myClosure property from the TestClassOne instance. We now try to release the TestClassOne and TestClassTwo instances by setting them to nil. Keep in mind that when an instance of a class is released, its deinit() method is called, if it exists. In our case, both classes have a deinit() method that prints a message to the console, so we know when the instances are actually released. If we run this project, we will see the following messages printed to the console:

testClassOne is gone
TestClassTwo deinitialized
testClassTwo is gone

As we can see, we do attempt to release the TestClassOne instance, but the deinit() method of the class is never called, indicating that it was not actually released; however, the TestClassTwo instance was properly released because the deinit() method of that class was called.

To see how this is supposed to work without the strong reference cycle, change the myClosure closure to return a string that is defined within the closure itself, as shown in the following code:

lazy var myClosure: nameClosure = {
    return "Just Me"
}

Now, if we run the project, we should see the following output:

TestClassOne deinitialized
testClassOne is gone
TestClassTwo deinitialized
testClassTwo is gone

This shows that the deinit() methods of both the TestClassOne and TestClassTwo instances were properly called, indicating that they were both released correctly.

In the first example, we captured an instance of the TestClassOne class within the closure because we accessed a property of the TestClassOne class using self.name. This created a strong reference from the closure to the instance of the TestClassOne class, preventing memory management from releasing the instance.

Swift does provide a very easy and elegant way to resolve strong reference cycles in closures. We simply need to tell Swift not to create a strong reference by creating a capture list. A capture list defines the rules to use when capturing reference types within a closure. We can declare each reference to be a weak or unowned reference rather than a strong reference. The weak keyword is used when there is a possibility that the reference will become nil during its lifetime; therefore, the type must be an optional. The unowned keyword is used when there is no possibility of the reference becoming nil.

We define the capture list by pairing the weak or unowned keyword with a reference to a class instance. These pairings are written within square brackets ([ ]). Therefore, if we update the myClosure closure and define an unowned reference to self, we should eliminate the strong reference cycle. The following code shows what the new myClosure closure looks like:

lazy var myClosure: nameClosure = { [unowned self] in
    return self.name
}

Notice the new line, [unowned self] in. This line says that we do not want to create a strong reference to the instance of self. If we run the project now, we should see the following output:

TestClassOne deinitialized
testClassOne is gone
TestClassTwo deinitialized
testClassTwo is gone

This shows that both the TestClassOne and TestClassTwo instances were properly released.
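For comparison, the same closure could be written with a weak capture instead. Because a weak reference can become nil, self is an optional inside the closure, so we need to unwrap it or provide a fallback value. This variant is a hedged sketch and is not part of the original example:

lazy var myClosure: nameClosure = { [weak self] in
    // self is optional here; fall back to a placeholder if the instance is gone.
    return self?.name ?? "unknown"
}

For this particular example, unowned is the better fit because the closure cannot outlive the instance it belongs to; weak is the safer choice when that guarantee does not hold.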
Summary

In this article, we saw that we can define a closure just as we can define an Int or String type. We can assign closures to variables, pass them as arguments to functions, and also return them from functions. Closures capture and store references to any constants or variables from the context in which the closure was defined. We have to be careful with this functionality to make sure that we do not create a strong reference cycle, which would lead to memory leaks in our applications.

Swift closures are very similar to blocks in Objective-C, but they have a much cleaner and more elegant syntax. This makes them a lot easier to use and understand. Having a good understanding of closures is vital to mastering the Swift programming language and will make it easier to develop great applications that are easy to maintain. They are also essential for creating first-class frameworks that are both easy to use and maintain.

The three use cases that we saw in this article are by no means the only useful applications of closures. I can promise you that the more you use closures in Swift, the more uses you will find for them. Closures are definitely one of the most powerful and useful features of the Swift language, and Apple did a great job of implementing them in the language.
article-image-making-game-console-unity-part-1
Eliot Lash
14 Dec 2015
6 min read

Making an In-Game Console in Unity Part 1

This article is intended for intermediate-level Unity developers or above. One of my favorite tools that I learned about while working in the game industry is the in-game console. It’s essentially a bare-bones command-line environment where you can issue text commands to your game and get text output. Unlike Unity’s built-in console (which is really just a log), it can take input and display output on whatever device the game is running on.

I’m going to walk you through making a console using uGUI, Unity’s built-in GUI framework available in Unity 4.6 and later. I’m assuming you have some familiarity with it already, so if not I’d recommend reading the UI Overview before continuing. I’ll also show how to implement a simple input parser. I’ve included a unitypackage with an example scene showing the console all set up, as well as a prefab console that you can drop into an existing scene if you’d like. However, I will walk you through setting everything up from scratch below.

If you don’t have a UI Canvas in your scene yet, make one by selecting GameObject > UI > Canvas from the menu bar.

Now, let’s get started by making a parent object for our console view. Right click on the Canvas in the hierarchy and select “Create Empty”. There will now be an empty GameObject in the Canvas hierarchy that we can use as the parent for our console. Rename this object to “ConsoleView”.

I personally like to organize my GUI hierarchy a bit to make it easier to do flexible layouts and turn elements of the GUI off and on, so I also made some additional parent objects for “HUD” (GUI elements that are part of a layer that draws over the game, usually while the game is running) and a child of that for “DevHUD”, those HUD elements that are only needed during development. This makes it easier to disable or delete the DevHUD when making a production build of my game. However, this is optional.

Enter 2D selection mode and scale the ConsoleView so it fills the width of the Canvas and most of its height. Then set its anchor mode to “stretch top”.

Now right click on ConsoleView in the hierarchy and select “Create Empty”. Rename this new child “ConsoleViewContainer”. Drag it to the same size as its parent, and set its anchor mode to “stretch stretch”. We need this additional container because the console needs the ability to show and hide during gameplay, so we will be enabling and disabling ConsoleViewContainer. But we still need the ConsoleView object to stay enabled so that it can listen for the special gesture or keypress that the user will input to bring up the console.

Next, we’ll create our text input field. Right click on ConsoleViewContainer in the hierarchy and select UI > Input Field. Align the InputField with the upper left corner of ConsoleViewContainer and drag it out to about 80% of the screen width. Then set the anchor mode to “stretch top”. I prefer a dark console, so I changed the Image Color to dark grey. Open up the children of the InputField and you can edit the placeholder text; I set mine to “Console input”. You may also change the Placeholder and Text color to white if you want to use a dark background.

On some platforms at the time of writing, Unity won’t handle the native enter/submit button correctly, so we’re going to add a fallback enter button next. (If you’re sure this won’t be an issue on your platforms, you can skip this paragraph and resize the console input to fill the width of the container.) Right click on ConsoleViewContainer in the hierarchy and select UI > Button.
Align the button to the right of the InputField and set the anchor to “stretch top”. Rename the Button to EnterBtn. Select its text child in the hierarchy and edit the text to say “Ent”.

Next, we’re going to make the view for the console log output. Right click on ConsoleViewContainer in the hierarchy and select UI > Image. Drag the image to fill the area below the InputField and set the anchor to “stretch stretch”. Rename the Image to LogView. If you want a dark console (you know you do!), change the image color to black. Now, at the bottom of the inspector, click “Add Component” and select UI > Mask. Again, click “Add Component” and select UI > Scroll Rect.

Right click on LogView in the hierarchy and select UI > Text. Scale it to fill the LogView and set the anchor to “stretch bottom”. Rename it to LogText. Set the text to bottom align. If you’re doing a dark console, set the text color to white. To make sure we’ve got everything set up properly, add a few paragraphs of placeholder text (my favorite source for this is the hipster ipsum generator). Now drag the top way up past the top of the canvas to give room for the log scrollback. If it’s too short, the log rotation code we’ll write later might not work properly.

Now, we’ll make the scrollbar. Right click on ConsoleViewContainer in the hierarchy and select UI > Scrollbar. In the Scrollbar component, set the Direction to “Bottom To Top”, and set the Value to 0. Size it to fit between the LogView and the edge of the container and set the anchor to “stretch stretch”.

Finally, we’ll hook up our complete scroll view. Select LogView and, in the Scroll Rect component, drag LogText into the “Content” property and Scrollbar into the “Vertical Scrollbar” property. Then, uncheck the “Horizontal” box.

Go ahead and run the game to make sure we’ve set everything up correctly. You should be able to drag the scroll bar and watch the text scroll down. If not, go back through the previous steps and try to figure out if you missed anything.

This concludes part one. Stay tuned for part two of this series, where you will learn how to program the behavior of the console. Find more Unity game development tutorials and content on our Unity page.

About the Author

Eliot Lash is an independent game developer and consultant with several mobile titles in the works. In the past, he has worked on Tiny Death Star and the Tap Tap Revenge series. You can find him at eliotlash.com.

article-image-string-management-in-swift
Jorge Izquierdo
21 Sep 2016
7 min read

String management in Swift

One of the most common tasks when building a production app is translating the user interface into multiple languages. I won't go into much detail explaining this or how to set it up, because there are lots of good articles and tutorials on the topic. As a summary, the default system is pretty straightforward. You have a file named Localizable.strings with a set of keys and then different values depending on the file's language. To use these strings from within your app, there is a simple macro in Foundation, NSLocalizedString(key, comment: comment), that will take care of looking up that key in your localizable strings and returning the value for the user's device language.

Magic numbers, magic strings

The problem with this handy macro is that, as you can add a new string inline, you will presumably end up with dozens of NSLocalizedStrings in the middle of the code of your app, resulting in something like this:

mainLabel.text = NSLocalizedString("Hello world", comment: "")

Or maybe you will write a simple String extension so you don't have to write it every time. That extension would be something like:

extension String {
    var localized: String {
        return NSLocalizedString(self, comment: "")
    }
}

mainLabel.text = "Hello world".localized

This is an improvement, but you still have the problem that the strings are all over the place in the app, and it is difficult to maintain a scalable format for the strings as there is not a central repository of strings that follows the same structure. The other problem with this approach is that you have plain strings inside your code, where you could change a character and not notice it until seeing a weird string in the user interface. For that not to happen, you can take advantage of Swift's awesome strongly typed nature and make the compiler catch these errors with your strings, so that nothing unexpected happens at runtime.

Writing a Swift strings file

So that is what we are going to do. The goal is to be able to have a data structure that will hold all the strings in your app. The idea is to have something like this:

enum Strings {
    case Title

    enum Menu {
        case Feed
        case Profile
        case Settings
    }
}

And then whenever you want to display a string from the app, you just do:

Strings.Menu.Feed           // "Feed"
Strings.Menu.Feed.localized // "Feed" or the value for "Feed" in Localizable.strings

This system is not likely to scale when you have dozens of strings in your app, so you need to add some sort of organization for the keys. The basic approach would be to just set the value of the enum to the key:

enum Strings: String {
    case Title = "app.title"

    enum Menu: String {
        case Feed = "app.menu.feed"
        case Profile = "app.menu.profile"
        case Settings = "app.menu.settings"
    }
}

But you can see that this is very repetitive and verbose. Also, whenever you add a new string, you need to write its key in the file and then add it to the Localizable.strings file. We can do better than this.

Autogenerating the keys

Let's look into how you can automate this process so that you will have something similar to the first example, where you didn't write the key, but you want an outcome like the second example, where you get a reasonable key organization that will be scalable as you add more and more strings during development. We will take advantage of protocol extensions to do this.
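If you have not worked with protocol extensions before, the key idea is that an extension of a protocol can supply default implementations that every conforming type picks up for free. The following is a minimal, hedged sketch; the Greeter protocol and the User struct are made up purely to illustrate the mechanism and have nothing to do with the strings problem:

protocol Greeter {
    var name: String { get }
}

extension Greeter {
    // Default implementation shared by every conforming type.
    var greeting: String {
        return "Hello, \(name)"
    }
}

struct User: Greeter {
    let name: String
}

print(User(name: "Jorge").greeting) // prints "Hello, Jorge"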
For starters, you will define a Localizable protocol to make the string enums conform to:

protocol Localizable {
    var rawValue: String { get }
}

enum Strings: String, Localizable {
    case Title
    case Description
}

And now, with the help of a protocol extension, you can get a better key organization:

extension Localizable {
    var localizableKey: String {
        return self.dynamicType.entityName + "." + rawValue
    }

    static var entityName: String {
        return String(self)
    }
}

With that key, you can fetch the localized string in a similar way as we did with the String extension:

extension Localizable {
    var localized: String {
        return NSLocalizedString(localizableKey, comment: "")
    }
}

What you have done so far allows you to do Strings.Title.localized, which will look in the localizable strings file for the key Strings.Title and return the value for that language.

Polishing the solution

This works great when you only have one level of strings, but if you want to group a bit more, say Strings.Menu.Home.Title, you need to make some changes. The first one is that each child needs to know who its parent is in order to generate a full key. That is impossible to do in Swift in an elegant way today, so what I propose is to explicitly have a variable that holds the type of the parent. This way you can recurse back up the strings tree until the parent is nil, which we assume marks the root node. For this to happen, you need to change your Localizable protocol a bit:

public protocol Localizable {
    static var parent: LocalizeParent { get }
    var rawValue: String { get }
}

public typealias LocalizeParent = Localizable.Type?

Now that you have the parent idea in place, the key generation needs to recurse up the tree in order to find the full path for the key:

private let stringSeparator: String = "."

private extension Localizable {
    static func concatComponent(parent parent: String?, child: String) -> String {
        guard let p = parent else {
            return child.snakeCaseString
        }
        return p + stringSeparator + child.snakeCaseString
    }

    static var entityName: String {
        return String(self)
    }

    static var entityPath: String {
        return concatComponent(parent: parent?.entityName, child: entityName)
    }

    var localizableKey: String {
        return self.dynamicType.concatComponent(parent: self.dynamicType.entityPath, child: rawValue)
    }
}

And to finish, you have to make the enums conform to the updated protocol:

enum Strings: String, Localizable {
    case Title

    enum Menu: String, Localizable {
        case Feed
        case Profile
        case Settings

        static let parent: LocalizeParent = Strings.self
    }

    static let parent: LocalizeParent = nil
}

With all this in place, you can do the following in your app:

label.text = Strings.Menu.Settings.localized

And the label will have the value for the "strings.menu.settings" key in Localizable.strings.
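For reference, the corresponding entries in the Localizable.strings file would use these generated keys. The translated values below are made up purely for illustration:

"strings.title" = "My App";
"strings.menu.feed" = "Feed";
"strings.menu.profile" = "Profile";
"strings.menu.settings" = "Settings";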
Source code

The final code for this article is available on GitHub, where you can also find the instructions for using it within your project. Alternatively, you can just add Localize.swift to your project and modify it according to your project's needs. You can also check out a simple example project to see the whole solution together.

Next time

The next step we would need to take in order to have a full solution is a way for the Localizable.strings file to be autogenerated. The solution for this, at the current state of Swift, wouldn't be very elegant, because it would require either inspecting the objects using the Objective-C runtime (which would be difficult to do, since we are dealing with pure Swift types here) or defining all the children of a given object explicitly, in the same way that the open source XCTest does, where each test case defines all of its tests in a static property.

About the author

Jorge Izquierdo has been developing iOS apps for 5 years. The day Swift was released, he started hacking around with the language and built the first Swift HTTP server, Taylor. He has worked on several projects and right now works as an iOS development contractor.