
How-To Tutorials - Mobile

213 Articles

Digging Deeper

Packt
03 Jan 2017
6 min read
In this article by Craig Clayton, author of the book iOS 10 Programming for Beginners, we went over the basics of Swift to get you warmed up. Now, we will dig deeper and learn some more programming concepts that build on what you have already learned. In this article, we will cover the following topics:

- Ranges
- Control flow

(For more resources related to this topic, see here.)

Creating a Playground project

Launch Xcode and click on Get started with a playground. The options screen for creating a new playground will appear. Name your new playground SwiftDiggingDeeper and make sure that your Platform is set to iOS. Now, delete everything inside the file and toggle on the Debug panel using either the toggle button or Cmd + Shift + Y.

Ranges

Ranges are generic data types that represent a sequence of numbers. Consider the numbers from 10 to 20. Rather than having to write out each value, we can use a range to represent all of these numbers in shorthand form: we keep only the endpoints, 10 and 20, and tell Swift that we want to include all of the numbers in between. This is where the range operator (...) comes into play. In Playgrounds, let's create a constant called range and set it equal to 10...20:

    let range = 10...20

The range that we just entered says that we want the numbers between 10 and 20 as well as both 10 and 20 themselves. This type of range is known as a closed range.

Half-closed range

We also have what is called a half-closed range (the Swift documentation calls this a half-open range). Let's make another constant called halfClosedRange and set it equal to 10..<20:

    let halfClosedRange = 10..<20

A half-closed range is the same as a closed range except that the end value is not included. In this example, that means 10 through 19 will be included and 20 will be excluded. At this point, you will notice that your results panel just shows CountableClosedRange(10...20) and CountableRange(10..<20). We cannot see all the numbers within the range. In order to see all the numbers, we need to use a loop.

Control flow

In Swift, we use a variety of control statements.

For-in loop

One of the most common control statements is a for-in loop, which allows you to iterate over each element in a sequence. Here is what a for-in loop looks like:

    for <value> in <sequence> {
        // Code here
    }

We start our for-in loop with for, which is followed by <value>. This is actually a local constant (only the for-in loop can access it) and can have any name you like; typically, you will want to give it an expressive name. Next, we have in, which is followed by <sequence>; this is where we supply our sequence of numbers. Let's write the following into Playgrounds:

    for value in range {
        print("closed range - \(value)")
    }

Notice that, in our debug panel, we see all of the numbers we wanted in our range. Let's do the same for halfClosedRange by adding the following:

    for index in halfClosedRange {
        print("half closed range - \(index)")
    }

In our debug panel, we see that we get the numbers 10 through 19. One thing to note is that these two for-in loops use different loop constants: in the first loop, we used value, and in the second one, we used index. You can name these whatever you choose.
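As a small aside not covered in the original excerpt: if you never need the value inside the loop body, Swift also lets you replace the loop constant with an underscore. A minimal sketch:

    for _ in 1...3 {
        print("Hello!") // runs three times; the counter itself is ignored
    }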
In addition, in the two examples, we used constants, but we could actually use the ranges directly within the loop. Please add the following:

    for index in 0...3 {
        print("range inside - \(index)")
    }

Now, you will see 0 to 3 printed in the debug panel. What if you wanted the numbers to go in reverse order? Let's input the following for-in loop:

    for index in (10...20).reversed() {
        print("reversed range - \(index)")
    }

We now have the numbers in descending order in our debug panel. When we use a range literal in a for-in loop like this, we have to wrap the range in parentheses so that Swift recognizes that the period before reversed() is not a decimal point.

The while loop

A while loop evaluates a Boolean expression at the start of the loop, and its statements run until the condition becomes false. It is important to note that a while loop can execute zero or more times. Here is the basic syntax of a while loop:

    while <condition> {
        // statement
    }

Let's write a while loop in Playgrounds and see how it works. Please add the following:

    var y = 0
    while y < 50 {
        y += 5
        print("y: \(y)")
    }

This loop starts with a variable set to zero. Before the while loop executes, it checks whether y is less than 50; if so, it continues into the loop. Using the += operator, we increment y by 5 each time. Our while loop will continue to do this until y is no longer less than 50. Now, let's add the same while loop again after the one we just created and see what happens:

    while y < 50 {
        y += 5
        print("y: \(y)")
    }

You will notice that the second while loop never runs. This may not seem important until we look at our next type of loop.

The repeat-while loop

A repeat-while loop is pretty similar to a while loop in that it continues to execute its statements until a condition becomes false. The main difference is that the repeat-while loop does not evaluate its Boolean condition until the end of the loop. Here is the basic syntax of a repeat-while loop:

    repeat {
        // statement
    } while <condition>

Let's write a repeat-while loop in Playgrounds and see how it works. Type the following:

    var x = 0
    repeat {
        x += 5
        print("x: \(x)")
    } while x < 100
    print("repeat completed x: \(x)")

Our repeat-while loop executes first and increments x by 5, and afterwards (as opposed to checking the condition beforehand, as a while loop does), it checks whether x is less than 100. This means that our repeat-while loop will continue until the condition reaches 100. But here is where it gets interesting. Let's add another repeat-while loop after the one we just created:

    repeat {
        x += 5
        print("x: \(x)")
    } while x < 100
    print("repeat completed again x: \(x)")

This time, x is incremented to 105. This happens because the loop body runs once before the Boolean expression is evaluated, so x is incremented by 5 even though the condition is already false. Knowing this behavior will help you pick the right loop for your situation. So far, we have looked at three loops: the for-in loop, the while loop, and the repeat-while loop. We will use the for-in loop again, but first we need to talk about collections.

Summary

This article covered ranges and control flow using Xcode 8.

Resources for Article:

Further resources on this subject:
- Tools in TypeScript [article]
- Design with Spring AOP [article]
- Thinking Functionally [article]


Introduction to HoloLens

Packt
11 Jul 2017
10 min read
In this article by Abhijit Jana, Manish Sharma, and Mallikarjuna Rao, the authors of the book HoloLens Blueprints, we will cover the following points to introduce you to using HoloLens for exploratory data analysis:

- Digital Reality - Under the Hood
- Holograms in reality
- Sketching the scenarios
- 3D Modeling workflow
- Adding Air Tap on speaker
- Real-time visualization through HoloLens

(For more resources related to this topic, see here.)

Digital Reality - Under the Hood

Welcome to the world of Digital Reality. The purpose of Digital Reality is to bring immersive experiences, such as taking or transporting you to a different world or place, letting you interact within those immersive experiences, mixing digital experiences with reality, and ultimately opening new horizons to make you more productive. Applications of Digital Reality are advancing day by day; some of them are in the fields of gaming, education, defense, tourism, aerospace, corporate productivity, enterprise applications, and so on. The spectrum of Digital Reality scenarios is huge. In order to understand them better, they are broken down into three different categories:

- Virtual Reality (VR): You are disconnected from the real world and experience a virtual world. Devices available on the market for VR include Oculus Rift, Google VR, and so on.
- Augmented Reality (AR): Digital data is overlaid on the real world. Pokémon GO, one of the most famous games globally, is an example of this. A device available on the market that falls under this category is Google Glass.
- Mixed Reality (MR): It spans the boundary between the real environment and VR. Using MR, you can have a seamless and immersive integration of the virtual and the real world.

This topic is mainly focused on developing MR applications using Microsoft HoloLens devices. Although these technologies look similar in the way they are used, and the differences can be confusing at first, there is a very clear boundary that distinguishes them from each other. There is a clear distinction between AR and VR; MR, however, has a spectrum that overlaps the boundaries of the real world, AR, and VR (figure: the Digital Reality spectrum).

The following table describes the differences between the three:

Virtual Reality: a complete virtual world; the user is completely isolated from the real world; device examples: Oculus Rift and Google VR.
Augmented Reality: overlays data on the real world; often used on mobile devices; device example: Google Glass; application example: Pokémon GO.
Mixed Reality: seamless integration of the real and virtual worlds; the virtual world interacts with the real world; natural interactions; device examples: HoloLens and Meta.

Holograms in reality

We have mentioned holograms several times. It is evident that they are crucial for HoloLens and holographic apps, but what is a hologram? Holograms are virtual objects made up of light and sound that blend with the real world to give us an immersive MR experience spanning both the real and virtual worlds. In other words, a hologram is an object like any other real-world object; the only difference is that it is made up of light rather than matter. The technology behind making holograms is known as holography.
The following figure represents two holographic objects placed on top of a real-size table, giving the experience of a real object placed on a real surface (figure: holographic objects in a real environment).

Interacting with holograms

There are basically five ways you can interact with holograms and HoloLens: using your Gaze, Gestures, and Voice, and with spatial audio and spatial mapping. Spatial mapping provides a detailed illustration of the real-world surfaces in the environment around HoloLens. This allows developers to understand the digitalized real environment and mix holograms into the world around you. Gaze is the most usual and common interaction, and we start the interaction with it. At any time, HoloLens knows what you are looking at using Gaze. Based on that, the device can take further decisions on which gesture and voice commands should be targeted. Spatial audio is the sound coming out of HoloLens, and we use it to extend the MR experience beyond the visual (figure: HoloLens interaction model).

Sketching the scenarios

The next step after elaborating the scenario details is to come up with sketches for the scenario. Sketching serves a twofold purpose: it is input to the next phase of asset development for the 3D artist, and it helps validate requirements with the customer, so there are no surprises at the time of delivery. The designer can either build the sketches on their own or take help from the 3D artist. Let's start with the sketch for the primary view of the scenario, where the user is viewing the hologram through HoloLens (figure: sketch of the user viewing the hologram):

- Roam around the hologram to view it from different angles
- Gaze at different interactive components

Sketching - interaction with speakers

While viewing the hologram, a user can gaze at different interactive components. One such component, identified earlier, is the speaker. When the user gazes at the speaker, it should be highlighted, and the user can then Air Tap on it. The Air Tap action should expand the speaker hologram, and the user should be able to view the speaker component in detail (figure: sketch of the expanded speakers). After the speakers are expanded, the user should be able to visualize the speaker components in detail. Now, if the user Air Taps on the expanded speakers, the application should do the following:

- Open the textual detail component about the speakers; the user can read the content and learn about the speakers in detail
- Start voice narration, detailing the speakers

The user can also Air Tap on the expanded speaker component again, and this action should close the expanded speaker (figure: textual and voice narration for speaker details). As you did for the speakers, apply a similar approach and sketch the other components, such as lenses, buttons, and so on.

3D Modeling workflow

Before jumping into 3D modeling, let's understand the 3D modeling workflow across the different tools that we are going to use during the course of this topic (figure: flow of the 3D modeling workflow).

Adding Air Tap on speaker

In this project, we will be using the left-side speaker for applying Air Tap. However, you can apply the same to the right-side speaker as well. As with the lenses, we have two objects here that we need to identify in the object explorer.
1. Navigate to Left_speaker_geo and left_speaker_details_geo in the Object Hierarchy window.
2. Tag them as LeftSpeaker and speakerDetails respectively (the tag names must match the ones looked up in the code below).

By default, when the user is just viewing the holograms, we hide the speaker details section. This section only becomes visible when we Air Tap, and it is hidden again when we Air Tap again (figure: speaker with Box Collider).

3. Add a new script inside the Scripts folder, and name it ShowHideBehaviour. This script will handle the show and hide behaviour of the speakerDetails game object. Use the following script inside the ShowHideBehaviour.cs file; it can be reused for any other object that needs to be shown or hidden:

    public class ShowHideBehaviour : MonoBehaviour
    {
        public GameObject showHideObject;
        public bool showhide = false;

        private void Start()
        {
            try
            {
                MeshRenderer render = showHideObject.GetComponentInChildren<MeshRenderer>();
                if (render != null)
                {
                    render.enabled = showhide;
                }
            }
            catch (System.Exception)
            {
            }
        }
    }

The script finds the MeshRenderer component of the game object and enables or disables it based on the showhide property. In this script, showhide is exposed as a public property so that you can provide the reference to the object from the Unity scene itself.

4. Attach ShowHideBehaviour.cs as a component of the object tagged speakerDetails. Then drag and drop the object into the showHideObject property section. This takes the reference to the current speaker details object, which will be hidden in the first instance (figure: attaching the show-hide script to the object). By default, the checkbox is unchecked, so showhide is set to false and the object is hidden from view. At this point, you must check left_speaker_details_geo on, as we are now handling visibility in code. Then, in the Air Tap event handler, we can enable the renderer to make it visible.

5. Add a new script by navigating from the context menu to Create | C# Script, and name it SpeakerGestureHandler. Open the script file in Visual Studio. By default, the SpeakerGestureHandler class inherits from MonoBehaviour.

6. Next, implement the IInputClickHandler interface in the SpeakerGestureHandler class. This interface declares the OnInputClicked() method, which is invoked on click input. So, whenever you perform an Air Tap gesture, this method is invoked:

    RaycastHit hit;
    bool isTapped = false;

    public void OnInputClicked(InputEventData eventData)
    {
        hit = GazeManager.Instance.HitInfo;
        if (hit.transform.gameObject != null)
        {
            isTapped = !isTapped;
            var lftSpeaker = GameObject.FindWithTag("LeftSpeaker");
            var lftSpeakerDetails = GameObject.FindWithTag("speakerDetails");
            MeshRenderer render = lftSpeakerDetails.GetComponentInChildren<MeshRenderer>();
            if (isTapped)
            {
                lftSpeaker.transform.Translate(0.0f, -1.0f * Time.deltaTime, 0.0f);
                render.enabled = true;
            }
            else
            {
                lftSpeaker.transform.Translate(0.0f, 1.0f * Time.deltaTime, 0.0f);
                render.enabled = false;
            }
        }
    }

When the speaker is gazed at and tapped, we find the game objects for both LeftSpeaker and speakerDetails by their tag names. For the LeftSpeaker object, we apply a translation based on whether or not it is tapped, which works like what we did for the lenses. For the speaker details object, we also take a reference to its MeshRenderer to set its visibility to true or false based on the Air Tap.

7. Attach the SpeakerGestureHandler class to the LeftSpeaker game object (figure: Air Tap on the speaker in action).

The Air Tap action for the speaker is now done. Save the scene, then build and run the solution in the emulator once again.
When you can see the cursor on the speaker, perform an Air Tap (figure: default view and Air-Tapped view).

Real-time visualization through HoloLens

We have learned about the data ingress flow, where devices connect to the IoT Hub, and Stream Analytics processes the stream of data and pushes it to storage. Now, in this section, let's discuss how this stored data will be consumed for data visualization within the holographic application (figure: solution for consuming data through services).

Summary

In this article, we introduced HoloLens by exploring Digital Reality under the hood, holograms in reality, sketching the scenarios, the 3D modeling workflow, adding Air Tap on the speaker, and real-time visualization through HoloLens.

Resources for Article:

Further resources on this subject:
- Creating Controllers with Blueprints [article]
- Raspberry Pi LED Blueprints [article]
- Exploring and Interacting with Materials using Blueprints [article]


App and web development in 2019: What we loved and what mattered

Richard Gall
17 Dec 2019
10 min read
For app and web developers, the world at the end of the decade is very different to the one that began it. Sure, change is inevitable, but the way the discipline(s) have evolved in just a matter of years (arguably the most significant changes came in the latter half of the decade) is a mark of how technologies, business needs, customer expectations, and harsh economic realities have conspired to shape and remold our notion of what software development actually looks like.

Full-stack, cloud-native, DevOps (and maybe even 'NoOps'): all these things have been shaping the way app and web developers work over the last ten years. And in 2019 it feels like that new world is beginning to settle into a specific pattern. Many of the trends and technologies that really defined 2019 are, in truth, trends that have been nascent and emerging for a number of years.

Cloud and microservices

When cloud first emerged - at some point much earlier this decade - it was largely just about resource efficiency. The idea was to ditch your on-premises servers and move instead to a model whereby you rent server space from big vendors. Okay, perhaps that's a somewhat crude summation; but it's nevertheless the case that cloud was primarily a field dealt with by administrators and IT professionals, rather than developers.

Today, of course, cloud is having a very real impact on the way developers work, giving a degree of agility and flexibility in how software is deployed and managed. With cloud partnering nicely with microservices - which allow developers to break down an application into constituent parts - it's easy to see how these two trends are getting many app and web developers excited. They shorten the development lifecycle and allow developers to get closer to their code as it runs in production.

Learn cloud development - explore Packt's range of cloud bundles. Pick up 5 for $25 throughout our $5 campaign.

An essential resource for microservices development: Microservices Development Cookbook. $5 for the rest of December and into January.

Go and Rust

The growth of Go and Rust throughout 2019 (okay, and a bit before that too) is directly related to the increasing importance of cloud and microservices in software development. Although JavaScript has been taken beyond the browser, it isn't the best programming language for building high-performance applications; that's where the likes of Go and Rust have been taking over a not insignificant slice of the collective developer imagination.

Both languages share a similar history (as this article nicely details); at a fundamental level, moreover, both also aim to build on C++, but with accessibility and safety in mind (C++ has long had a reputation for being both complicated and sometimes vulnerable to bugs and security issues).

Go is likely to continue to grow at a faster rate than Rust: it's a lot easier to use, so for web and app developers with experience in Java or JavaScript, it's a much gentler learning curve. But this isn't to say that Rust won't remain a fixture for developers. Consistently ranked the 'most loved' language in Stack Overflow surveys, as developers seek relentless improvements to performance alongside watertight reliability and security, Rust will remain an important language in a fast-changing development world.

Search Packt's extensive selection of Go eBooks and videos - $5 throughout December and into the new year. Visit the Packt store.

Learn Rust with Rust Programming Cookbook.
WebAssembly

It's impossible to talk about web and application development without mentioning WebAssembly. Arguably the full implications of WebAssembly are yet to be realised (indeed, at ReactConf 2019, Richard Feldman suggested that it was unlikely to initiate a wholesale transformation of the web - that, he believes, will take a few more years), but 2019 has been a year when it has properly started to make many developers sit up and take notice.

But why is WebAssembly so exciting? Essentially, it allows you to run code on the web using multiple languages at a speed that's almost akin to native applications. Indeed, WebAssembly is making languages like Rust more attractive to web developers. If WebAssembly is a bridge between Rust and JavaScript, Rust immediately becomes more attractive to developers who previously would have paid very little attention to it. If 2019 was the year more developers decided to take note of WebAssembly, 2020 will be the year when we start to see increased adoption.

Learn WebAssembly is $5 throughout this year's $5 campaign. Get it here.

State management: Redux, Flux, Vuex...

For many years, MVC (Model-View-Controller) was the dominant model for managing application state. However, as applications have grown in complexity, it has become more and more difficult for us to establish a 'single source of truth' inside our apps. That can impact performance and can also make them harder to maintain on the development side.

To tackle this, we've started to see a number of different patterns and frameworks emerging to help us manage application state. The growth of React has been instrumental here - as a very lightweight library it gives developers the freedom to manage application state however they choose - and it's worth noting that Flux architecture was developed by Facebook to complement the library.

Watch: Why do React developers love Redux for state management? https://www.youtube.com/watch?v=7YzgZA_hA48&feature=emb_title

Following Flux we've also had Redux and Vuex - all of them, each with subtly different approaches, have become an essential aspect of modern web and app development. And while they might not have first emerged in 2019, it feels as though the state management discourse has hit heights that it previously had not. If you haven't yet had time to dive into this topic, it's well worth making sure you commit to it in 2020.

Learning React with Redux and Flux [Video] is $5 - purchase it here on the Packt store.

Learn Vuex with Vuex Quick Start Guide.

Functional programming

Functional programming is on the rise. This doesn't, however, mean that purely functional languages like Haskell and Lisp are dominating the programming language landscape - in fact, it's been said that JavaScript is now the language used for functional programming (even though it isn't a functional language).

Functional programming is popular because it can help minimize complexity and make it easier to test and reuse code. When you're dealing with a dense codebase that grows and grows as your application scales, this is immensely valuable.

It's also worth placing functional programming in the context of managing application state. Insofar as functional programming allows you to be specific in determining how different parts of a component should interact with one another, the function is a theoretical abstraction that makes it easier to get to grips with managing the state of a complex and dynamic application.
Get to grips with functional programming and discover how to leverage its power. Read Mastering Functional Programming.

The new JavaScript framework boom

I'm not sure whether JavaScript fatigue is over. On the one hand, the space has coalesced around a handful of core tools and frameworks - React, GraphQL, Node.js, among a couple of others - but on the other hand, the last year (and a bit) has been characterized by many other small projects developed to support these core tools. So, while it's maybe a little bit easier to parse the JavaScript ecosystem at a pretty high level of abstraction than it was in the past, at a deeper level you have a range of tools that are designed for very specific purposes or to be used alongside some of those frameworks and tools just mentioned.

Tools ranging from Koa.js (for Node), to Polymer, Nuxt, Next, Gatsby, Hugo, Vuelidate (to name just a random assortment) are all vying for developer mindshare. You could say that many of these tools are 'second-order' frameworks and libraries - they don't fundamentally change the way you think about development but instead make it easier to do specific things. It's for this reason that I'm reluctant to suggest that JavaScript fatigue will return to its former glory - this new JavaScript framework boom is very much geared towards productivity and immediate gains rather than overhauling the way you build applications because of some principled belief in the 'right' or 'best' way to do things.

Learn Nuxt: pick up Build a News Feed with Nuxt 2 and Firestore [Video] for $5 before the end of the year.

Get to grips with Next.js with Next.js Quick Start Guide.

Learn Koa with Hands-on Server-Side Development with Koa.js [Video]

Learn Gatsby with GatsbyJS: Build a PWA Blog with GraphQL, React, and WordPress [Video]

GraphQL

Much of this decade has been dominated by REST when it comes to APIs. But just as the so-called 'API economy' has gone into overdrive, GraphQL has come on the scene. Adoption has been rapid, with many developers turning to it because it allows them to handle more complex and sophisticated requests at scale without writing long and confusing lines of code.

This isn't to say, of course, that GraphQL has all but killed REST. Instead, it's more the case that GraphQL has been found to be a better tool for managing APIs in specific domains than REST. If you're dealing with APIs that are complex in terms of the number of entities and the relationships between them, then GraphQL can prove immensely useful.

Find out how to put GraphQL to use. Pick up GraphQL Projects for $5 for the rest of December and into January.

React Hooks (and Vue Hooks)

Launched with React 16.8, React Hooks "let you use state and other React features without writing a class" (that's from the project's site). That's a good thing because building components with a class can sometimes be somewhat inelegant. For a better explanation of the 'point' of React Hooks, you could do a lot worse than this article.

Vue Hooks is part of Vue 3.0, which won't be officially released until early next year. But the fact that both leading front-end frameworks are taking similar approaches to improve the developer experience demonstrates that they're responding to a need for more flexibility and control over large projects. That means 2019 has been the year that both tools have hit maturity in the web development space.

Learn how React Hooks work with Packt's new React Hooks video.

Conclusion

The web and app development world is becoming difficult to parse.
A few years ago, discussion and debate really centered on frameworks; today it feels like there are many other elements to consider. Part of this is symptomatic of a slow DevOps revolution - the gap between build and production is smaller than it has ever been, and developers now have a significant degree of accountability and responsibility for things that were once the preserve of different breeds of engineers and IT professionals.

Perhaps that story is a bit of a simplification - however, it's hard to dispute that the web and app developer skill set is incredibly diverse. That means there is an array of options and opportunities out there for developers looking to push their careers forward, but it also means that they'll need to do some serious decision making about what they want to do and how they want to do it.


String management in Swift

Jorge Izquierdo
21 Sep 2016
7 min read
One of the most common tasks when building a production app is translating the user interface into multiple languages. I won't go into much detail explaining this or how to set it up, because there are lots of good articles and tutorials on the topic. As a summary, the default system is pretty straightforward: you have a file named Localizable.strings with a set of keys and different values depending on the file's language. To use these strings from within your app, there is a simple macro in Foundation, NSLocalizedString(key, comment: comment), that will take care of looking up that key in your localizable strings and returning the value for the user's device language.

Magic numbers, magic strings

The problem with this handy macro is that, as you can add a new string inline, you will presumably end up with dozens of NSLocalizedStrings in the middle of your app's code, resulting in something like this:

    mainLabel.text = NSLocalizedString("Hello world", comment: "")

Or maybe you will write a simple String extension to avoid having to write it every time. That extension would be something like:

    extension String {
        var localized: String {
            return NSLocalizedString(self, comment: "")
        }
    }

    mainLabel.text = "Hello world".localized

This is an improvement, but you still have the problem that the strings are all over the place in the app, and it is difficult to maintain a scalable format for the strings, as there is no central repository of strings that follows the same structure. The other problem with this approach is that you have plain strings inside your code, where you could change a character and not notice it until you see a weird string in the user interface. To prevent that, you can take advantage of Swift's awesome strongly typed nature and make the compiler catch these errors for you, so that nothing unexpected happens at runtime.

Writing a Swift strings file

So that is what we are going to do. The goal is to have a data structure that will hold all the strings in your app. The idea is to have something like this:

    enum Strings {
        case Title
        enum Menu {
            case Feed
            case Profile
            case Settings
        }
    }

And then, whenever you want to display a string from the app, you just do:

    Strings.Menu.Feed // "Feed"
    Strings.Menu.Feed.localized // "Feed" or the value for "Feed" in Localizable.strings

This system is not likely to scale when you have dozens of strings in your app, so you need to add some sort of organization for the keys. The basic approach would be to just set the value of each enum case to its key:

    enum Strings: String {
        case Title = "app.title"
        enum Menu: String {
            case Feed = "app.menu.feed"
            case Profile = "app.menu.profile"
            case Settings = "app.menu.settings"
        }
    }

But you can see that this is very repetitive and verbose. Also, whenever you add a new string, you need to write its key in this file and then add it to the Localizable.strings file. We can do better than this.

Autogenerating the keys

Let's look into how you can automate this process so that you end up with something similar to the first example, where you didn't write the key, but with an outcome like the second example, where you get a reasonable key organization that will scale as you add more and more strings during development. We will take advantage of protocol extensions to do this.
For starters, you will define a Localizable protocol for the string enums to conform to:

    protocol Localizable {
        var rawValue: String { get }
    }

    enum Strings: String, Localizable {
        case Title
        case Description
    }

And now, with the help of a protocol extension, you can get a better key organization:

    extension Localizable {
        var localizableKey: String {
            return self.dynamicType.entityName + "." + rawValue
        }

        static var entityName: String {
            return String(self)
        }
    }

With that key, you can fetch the localized string in a similar way as we did with the String extension:

    extension Localizable {
        var localized: String {
            return NSLocalizedString(localizableKey, comment: "")
        }
    }

What you have done so far allows you to write Strings.Title.localized, which will look in the localizable strings file for the key Strings.Title and return the value for that language.

Polishing the solution

This works great when you only have one level of strings, but if you want to group a bit more, say Strings.Menu.Home.Title, you need to make some changes. The first one is that each child needs to know who its parent is in order to generate a full key. That is impossible to do in Swift in an elegant way today, so what I propose is to explicitly have a variable that holds the type of the parent. This way, you can recurse back up the strings tree until the parent is nil, where we assume it is the root node. For this to happen, you need to change your Localizable protocol a bit:

    public protocol Localizable {
        static var parent: LocalizeParent { get }
        var rawValue: String { get }
    }

    public typealias LocalizeParent = Localizable.Type?

Now that you have the parent idea in place, the key generation needs to recurse up the tree in order to find the full path for the key:

    private let stringSeparator: String = "."

    private extension Localizable {
        static func concatComponent(parent parent: String?, child: String) -> String {
            guard let p = parent else { return child.snakeCaseString }
            return p + stringSeparator + child.snakeCaseString
        }

        static var entityName: String {
            return String(self)
        }

        static var entityPath: String {
            return concatComponent(parent: parent?.entityName, child: entityName)
        }

        var localizableKey: String {
            return self.dynamicType.concatComponent(parent: self.dynamicType.entityPath, child: rawValue)
        }
    }

And to finish, you have to make the enums conform to the updated protocol:

    enum Strings: String, Localizable {
        case Title

        enum Menu: String, Localizable {
            case Feed
            case Profile
            case Settings

            static let parent: LocalizeParent = Strings.self
        }

        static let parent: LocalizeParent = nil
    }

With all this in place, you can do the following in your app:

    label.text = Strings.Menu.Settings.localized

And the label will have the value for the "strings.menu.settings" key in Localizable.strings.
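For illustration only (the original article does not show this file), the Localizable.strings entries matching the keys generated above might look as follows; the values here are hypothetical:

    /* Localizable.strings (en) - hypothetical entries matching the generated keys */
    "strings.title" = "My App";
    "strings.menu.feed" = "Feed";
    "strings.menu.profile" = "Profile";
    "strings.menu.settings" = "Settings";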
Source code

The final code for this article is available on GitHub, along with the instructions for using it within your project. You can also just add Localize.swift and modify it according to your project's needs, and there is a simple example project that shows the whole solution put together.

Next time

The next step we would need to take in order to have a full solution is a way to autogenerate the Localizable.strings file. The solution for this at the current state of Swift wouldn't be very elegant, because it would require either inspecting the objects using the Objective-C runtime (which would be difficult to do, since we are dealing with pure Swift types here) or defining all the children of a given object explicitly, in the same way as the open source XCTest does, where each test case defines all of its tests in a static property.

About the author

Jorge Izquierdo has been developing iOS apps for 5 years. The day Swift was released, he started hacking around with the language and built the first Swift HTTP server, Taylor. He has worked on several projects and right now works as an iOS development contractor.


Getting Started with VR Programming

Jake Rheude
04 Jul 2016
8 min read
This guide will go through some simple programming for VR apps using the Google VR SDK (software development kit) and the Unity3D game engine. It assumes that you already have a mobile device capable of running Google VR apps with a Google Cardboard, as well as a computer able to run Unity3D.

Getting Started

First and foremost, download the latest version of Unity3D from their website. Of the four options, select "Personal", since it costs nothing to the user. Then download and run the installer. The installation process is straightforward; however, you must make sure that you select the "Android Build Support" component if you are planning on using an Android device, or "iOS Build Support" for an iOS device. If you are unsure at this point, just select both, as neither of them requires a lot of space.

Now that you have Unity3D installed, the next step is to set it up for the Google VR SDK, which can be found here. After agreeing to the terms and conditions, you will be given a link to download the repository directly. After downloading and extracting the ZIP file, you will notice that it contains a Unity Package file. Double-click on the file, and Unity will automatically load up.

Click the "NEW" button on the top right corner of the window that appears to begin your first Google VR project. Give it any project name other than the default "New Unity Project" name; for this guide, I have chosen "VR Programming Tutorial" as the project name.

As soon as your new project loads up, so will the Google VR SDK Unity Package. The relevant files should all be selected by default, so simply click the "Import" button on the bottom right corner to include the SDK in your project.

In your project's "Assets" folder, there should be a folder named "GoogleVR". This is where all the components necessary to begin working with the SDK are located. From the "Assets" folder, go into "GoogleVR" -> "DemoScenes" -> "HeadSetDemo". Double-click on the Unity icon named "DemoScene". This is where you can preview the scene before playing it, to get an idea of how the game objects will be laid out in the environment. Let's try that by clicking on the "Play" button. The scene will start out from the user's perspective, which is the main camera.

There is a slight difference in how the left-eye and right-eye cameras display the environment. This is called distortion correction, which is intentionally designed that way in order to match the display to the Google Cardboard eye lenses. You may be wondering why you are unable to look around with your mouse. This design is also intentional, to allow the developer to hover the mouse pointer in and out of the game window without disrupting the scene while it is playing. In order to look around in the environment, hold down the Ctrl key, and then the Alt key to enable head movement. Make sure to press the keys in this order; otherwise, you will only be rotating the display along the Z axis.

You might also be wondering where the interactive menu on the floor canvas has gone. The menu is still there; it just does not appear in VR mode. Notice that the dot in the center of the display turns into a halo when you move it over the hovering cube. This happens whenever the dot is placed over a game object in the environment that is interactive.
So even if the menu is not visible, you are still able to select the menu items. If you happen to click on the "VR Mode" button, the left-eye and right-eye cameras will simply go away, and the main camera will be the only camera that displays the world space. VR Mode can be enabled or disabled by clicking on the "VR Mode Enabled" checkbox in the project's inspector. Simply select "GvrMain" in the DemoScene hierarchy to have the inspector display its information.

Note that, as of the current implementation of Google VR, it is impossible to add UI components into the world space. This is due to the stereoscopic functionality of Google VR and the mathematics involved in calculating the distance of the game objects from the left-eye and right-eye cameras relative to the world environment. However, it is possible to add non-interactive UI elements (such as a player HUD) as a child 3D element with the main camera as its parent. If you wish to create interactive UI components, they must be done strictly as game objects in the world space. This also implies that the interactive UI components must be selected by the user from a fixed position in the world space, as they would find it difficult to make their selections otherwise.

Now that we have gone over the basics of the Google VR SDK, let's move on to some programming.

Applying Attributes to Game Objects

When creating an interactive medium of any kind (in this case, a VR app), some of the most basic functions can end up being more complicated than they initially seem. We will demonstrate that by incorporating what seems to be simple user movement. In the same DemoScene scene, we will add four more cubes to the environment. For the sake of cleanliness, we will first remove the existing cube, as it would obstruct our new cubes:

1. To delete a game object from a scene, simply right-click it in the hierarchy and select "Delete".
2. Now that we have removed the existing cube, add a new one by clicking "Create" in the hierarchy, then selecting "3D Object" and "Cube".
3. Move the cube about 4-5 units along the X or Z axis away from the origin. You can do so by clicking and dragging the red or blue arrow.

Now that we have added our cube, the next step is to add a script to the player's perspective object. For this project, we can use the "GvrMain" game object to incorporate the player's movement:

1. In the Inspector tab, click on the "Add Component" button, select "New Script", and create a new script titled "MoveToCube".
2. Once the script has been created, click on the cogwheel icon and select "Edit Script".
3. Copy and paste the code below into MoveToCube.cs.
4. Next, add an Event Trigger component to your cube.
5. Create a new script titled "CubeSelect". Then select the cogwheel icon and select "Edit Script" to open the script in the script editor.
6. Copy and paste the code below into your CubeSelect.cs script.
7. Click on the "Add New Event Type" button and select "PointerClick".
8. Click the + icon to add a method to refer to. In the left box, select the "Cube" game object.
9. For the method, select "CubeSelect" and then click on "GetCubePosition". Finally, select "GvrMain" as the target game object for the method.

When you are finished adding the necessary components, copy and paste the cube in the project hierarchy tab three times in order to get four cubes. They will seem as if they did not appear in the scene, but only because they are overlapping each other.
Change the positions of the cubes so that they are separated from each other along the X and Z axes. Once completed, you can run the VR app and see for yourself that we have now incorporated player movement in this basic implementation.

Tips and General Advice

Many developers recommend that you do not apply any acceleration or deceleration to the main camera. Doing so will cause nausea in users and thus give them a negative experience with your VR application.

Keep your VR app relatively simple! The user only has two modes of input: head tracking and the Cardboard trigger button. Trying to force functionality with multiple gestures (such as looking straight down and/or up) will not be intuitive to the user and will more than likely cause frustration.

About the Author

Jake Rheude is the Director of Business Development for Red Stag Fulfillment, a US-based e-commerce fulfillment provider focused primarily on serving e-commerce businesses shipping heavy, large, or valuable products to customers all around the world. Red Stag is so confident in its fulfillment software combined with its warehouse operations that, for any error, inaccuracy, or late shipment, not only will they reimburse you for that order, but they'll write you a check for $50.


Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Packt Editorial Staff
04 Jul 2019
6 min read
What does the phrase "a manager" really mean? It means different things to different people and is often overused for positions that nearly match an analyst-level profile. The term, although common, is worth defining, especially in the context of software development.

This article is an excerpt from the book The Successful Software Manager, written by Herman Fung, an internationally experienced IT manager. The book is a comprehensive and practical guide to managing software developers and software customers, and it explores the process of deciding what software needs to be built, not how to build it. In this article, we'll look into aspects you must be aware of before making the move to become a manager in the software industry.

A simple distinction I once used to illustrate the difference between an analyst and a manager is that while an analyst identifies, collects, and analyzes information, a manager uses this analysis to make decisions, or more accurately, is responsible and accountable for the decisions they make.

The structure of software companies is now enormously diverse and varies a lot from one to another, which has an obvious impact on how the manager's role and responsibilities are defined; these will be unique to each company. Even within the same company, they are subject to change from time to time, as the company itself changes. Broadly speaking, a manager within software development can be classified into three categories, as we will now discuss.

Team Leader/Manager

This role is often a lead developer who also doubles as the team spokesperson and single point of contact. They'll typically be the most senior and knowledgeable member of a small group of developers who work on the same project, product, and technology. There is often a direct link between each developer in the team and their code, which means the team manager has a direct responsibility to ensure the product as a whole works. Usually, the team manager is also asked to fulfill people-management duties, such as performance reviews and appraisals, and day-to-day HR responsibilities.

Development/Delivery Manager

This person could be either a techie or a non-techie. They will have a good understanding of the requirements, design, code, and end product. They will manage the running of workshops and huddles to facilitate better overall teamwork and delivery. This role may include setting up visual aids, such as team/project charts or boards. In a matrix management model, where developers and other experts are temporarily asked to work in project teams, the development manager will not be responsible for HR and people-management duties.

Project Manager

This person is most probably a non-techie, but there are exceptions, and technical knowledge could be a distinct advantage on certain projects. Most importantly, a project manager will be process-focused and output-driven and will focus on distributing tasks to individuals. They are not expected to jump in to solve technical problems, but they are responsible for ensuring that the proper resources are available while managing expectations. Specifically, they take part in managing the project budget, timeline, and risks. They should also be aware of the political landscape and management agenda within the organization, and be able to navigate through them. The project manager ensures the project follows the required methodology or process framework mandated by the Project Management Office (PMO).
They will not have people-management responsibilities for project team members.

Agile practitioner

As with all roles in today's world of tech, these categories will vary and overlap. They can even be held by the same person, which is becoming an increasingly common trait. They are also constantly evolving, which exemplifies the need to learn and grow continually, regardless of your role or position. If you are a true Agile practitioner, you may take issue with these generalized categories (Team Leader, Development Manager, and Project Manager), and you'd be right to do so! These categories are most applicable to an organization that practises the traditional Waterfall model. Without diving into the everlasting Waterfall vs Agile debate, let's just say that these are categories that transcend any methodology. Even if they're not referred to by these names, they are the roles that need to be performed, to varying degrees, at various times. For completeness, it is worth noting one role specific to Agile: the scrum master.

Scrum master

A scrum master is a role often compared - rightly or wrongly - with that of the project manager. The key difference is that their focus is on facilitation and coaching instead of organizing and control. This difference is as much a mindset as it is a strict practice, and these are often referred to as attributes of Servant Leadership. I believe a good scrum master will show traits of a good project manager at various times, and vice versa. This is especially true in ensuring that there is clear communication at all times and that the team stays focused on delivering together.

Yet, as we look back at all these roles, it's worth remembering that with the advent of new disciplines such as big data, blockchain, artificial intelligence, and machine learning, there are new categories and opportunities to move from a developer role into a management position, for example, as an algorithm manager or data manager.

Transitioning, growing, progressing, or simply changing from a developer to a manager is a wonderfully rewarding journey that is unique to everyone. After clarifying what being a "modern manager" really means, and the broad categories applicable in software development (Team/Development/Project/Agile), the overarching and often key consideration for developers is whether the move means they will be managing people and writing less code.

In this article, we looked into the different leadership roles that are available to developers planning their career progression. Develop crucial skills to enhance your performance and advance your career with The Successful Software Manager, written by Herman Fung.

Further reading:

- "Developers don't belong on a pedestal, they're doing a job like everyone else" - April Wensel on toxic tech culture and Compassionate Coding [Interview]
- Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
- 'I code in my dreams too', say developers in JetBrains' State of Developer Ecosystem 2019 Survey

Planning and Structuring Your Test-Driven iOS App

Packt
11 Nov 2016
13 min read
In this article, written by Dr. Dominik Hauser, author of the book Test-Driven iOS Development with Swift 3.0, you will learn that when starting TDD, writing unit tests is easy for most people. The hard part is transferring the knowledge from writing the tests to driving the development. What can be assumed? What should be done before one writes the first test? What should be tested to end up with a complete app?

(For more resources related to this topic, see here.)

As a developer, you are used to thinking in terms of code. When you see a feature on the requirement list for an app, your brain already starts to lay out the code for this feature. And for recurring problems in iOS development (such as building table views), you most probably have already developed your own best practices. In TDD, you should not think about the code while working on the test. The tests have to describe what the unit under test should do, not how it should do it. It should be possible to change the implementation without breaking the tests.

To practice this approach to development, we will develop a simple to-do list app in the remainder of this book. It is, on purpose, a boring and easy app. We want to concentrate on the TDD workflow, not on complex implementations; an interesting app would distract from what is important in this book - how to do TDD. This article introduces the app that we are going to build, and it shows the views that the finished app will have. We will cover the following topics in this article:

- The task list view
- The task detail view
- The task input view
- The structure of an app
- Getting started with Xcode
- Setting up useful Xcode behaviors for testing

The task list view

When starting the app, the user sees a list of to-do items. The items in the list consist of a title, an optional location, and the due date. New items can be added to the list with an add (+) button, which is shown in the navigation bar of the view.

User stories:

- As a user, I want to see the list of to-do items when I open the app
- As a user, I want to add to-do items to the list

In a to-do list app, the user will obviously need to be able to check items when they are finished. The checked items are shown below the unchecked items, and it is possible to uncheck them again. The app uses the delete button in the UI of UITableView to check and uncheck items. Checked items are put at the end of the list, in a section with the Finished header. The user can also delete all the items from the list by tapping the trash button.

User stories:

- As a user, I want to check a to-do item to mark it as finished
- As a user, I want to see all the checked items below the unchecked items
- As a user, I want to uncheck a to-do item
- As a user, I want to delete all the to-do items

When the user taps an entry, the details of this entry are shown in the task detail view.

The task detail view

The task detail view shows all the information that's stored for a to-do item. The information consists of a title, due date, location (name and address), and a description. If an address is given, a map of the address is shown. The detail view also allows checking the item as finished.
The detail view looks like this:

User stories:

- As a user, given that I have tapped a to-do item in the list, I want to see its details
- As a user, I want to check a to-do item from its details view

The task input view

When the user selects the add (+) button in the list view, the task input view is shown. The user can add information for the task. Only the title is required. The Save button can only be selected when a title is given. It is not possible to add a task that is already in the list. The Cancel button dismisses the view. The task input view will look like this:

User stories:

- As a user, given that I have tapped the add (+) button in the item list, I want to see a form to put in the details (title, optional date, optional location name, optional address, and optional description) of a to-do item
- As a user, I want to add a to-do item to the list of to-do items by tapping on the Save button

We will not implement the editing and deletion of tasks. But when you have worked through this book completely, it will be easy for you to add these features yourself by writing the tests first.

Keep in mind that we will not test the look and design of the app. Unit tests cannot figure out whether an app looks as intended. Unit tests can test features, and these are independent of their presentation. In principle, it would be possible to write unit tests for the position and color of UI elements. But such things are very likely to change a lot in the early stages of development. We do not want to have failing tests only because a button has moved 10 points. However, we will test whether the UI elements are present on the view. If your user cannot see the information for the tasks, or if it is not possible to add all the information of a task, then the app does not meet the requirements.

The structure of the app

The following diagram shows the structure of the app:

The Table View Controller, the delegate, and the data source

In iOS apps, data is often presented using a table view. Table views are highly optimized for performance; they are easy to use and to implement. We will use a table view for the list of to-do items. A table view is usually represented by UITableViewController, which is also the data source and delegate for the table view. This often leads to a massive Table View Controller, because it is doing too much: presenting the view, navigating to other view controllers, and managing the presentation of the data in the table view. It is good practice to split up the responsibility into several classes. Therefore, we will use a helper class to act as the data source and delegate for the table view. The communication between the Table View Controller and the helper class will be defined using a protocol. Protocols define what the interface of a class looks like. This has a great benefit: if we need to replace an implementation with a better version (maybe because we have learned how to implement the feature in a better way), we only need to develop against the clear interface. The inner workings of other classes do not matter.
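To make this concrete, such a protocol might look something like the following. This is only a hedged sketch: the names ItemListDataProvider, ItemListDataProviderProtocol, and ItemManager are assumptions for illustration, not necessarily the ones used in the book:

import UIKit

// Minimal placeholder for the item manager described in the next
// section; the real one will store and manage the to-do items.
class ItemManager {
    var toDoCount: Int { return 0 }
}

// The Table View Controller talks only to this protocol, never to a
// concrete data provider.
protocol ItemListDataProviderProtocol: UITableViewDataSource, UITableViewDelegate {
    var itemManager: ItemManager? { get set }
}

class ItemListDataProvider: NSObject, ItemListDataProviderProtocol {
    var itemManager: ItemManager?

    func tableView(_ tableView: UITableView,
                   numberOfRowsInSection section: Int) -> Int {
        return itemManager?.toDoCount ?? 0
    }

    func tableView(_ tableView: UITableView,
                   cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // Dequeue and configure a cell for the item at this index path
        // (assumes a cell with this identifier is registered).
        return tableView.dequeueReusableCell(withIdentifier: "ItemCell",
                                             for: indexPath)
    }
}

Because the view controller only knows the protocol, the concrete data provider can be swapped out, or replaced by a mock in a test, without touching the controller.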
Table view cells

As you can see in the preceding screenshots, the to-do list items have a title and, optionally, a due date and a location name. The table view cells should only show the data that is set. We will accomplish this by implementing our own custom table view cell.

The model

The model of the application consists of the to-do item, the location, and an item manager, which allows the addition and removal of items and is also responsible for managing them. The controller will therefore ask the item manager for the items to present. The item manager will also be responsible for storing the items on disc. Beginners often tend to manage the model objects within the controller. Then, the controller has a reference to a collection of items, and the addition and removal of items is done directly by the controller. This is not recommended, because if we decide to change the storage of the items (for example, by using Core Data), their addition and removal would have to be changed within the controller. It is difficult to keep an overview of such a class, and for this reason, it is a source of bugs. It is much easier to have a clear interface between the controller and the model objects, because if we need to change how the model objects are managed, the controller can stay the same. We could even replace the complete model layer if we just keep the interface the same. Later in the article, we will see that this decoupling also helps to make testing easier.

Other view controllers

The application will have two more view controllers: a task detail View Controller and a View Controller for the input of the task. When the user taps a to-do item in the list, the details of the item are presented in the task detail View Controller. From the Details screen, the user will be able to check an item. New to-do items will be added to the list of items using the view presented by the input View Controller.

The development strategy

In this book, we will build the app from the inside out. We will start with the model, and then build the controllers and networking. At the end of the book, we will put everything together. Of course, this is not the only way to build apps. But by separating on the basis of layers instead of features, it is easier to follow and keep an overview of what is happening. When you later need to refresh your memory, the relevant information is easier to find.

Getting started with Xcode

Now, let's start our journey by creating a project that we will implement using TDD. Open Xcode and create a new iOS project using the Single View Application template. In the options window, add ToDo as the product name, select Swift as the language, choose iPhone in the Devices option, and check the box next to Include Unit Tests. Leave the Use Core Data and Include UI Tests boxes unchecked.

Xcode creates a small iOS project with two targets: one for the implementation code and the other for the unit tests. The template contains code that presents a single view on screen. We could have chosen to start with the Master-Detail Application template, because the app will show a master and a detail view. However, we have chosen the Single View Application template because it comes with hardly any code, and in TDD, we want all the implementation code to be demanded by failing tests.

To take a look at how the application target and the test target fit together, select the project in Project Navigator, and then select the ToDoTests target. In the General tab, you'll find a setting for the Host Application that the test target should be able to test. It will look like this:

Xcode has already set up the test target correctly to allow the testing of the implementations that we will write in the application target.
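As a first taste of the workflow, a test in that test target could look like the following sketch. The names here (ItemManager and its toDoCount property, matching the item manager sketched earlier) are assumptions for illustration, not necessarily the book's actual code:

import XCTest
@testable import ToDo

class ItemManagerTests: XCTestCase {

    // The first test drives the model from the outside:
    // a freshly created item manager should contain no items.
    func testToDoCount_InitiallyIsZero() {
        let sut = ItemManager()
        XCTAssertEqual(sut.toDoCount, 0)
    }
}

Before ItemManager exists, this test does not even compile; in TDD, that counts as a failing test, and writing the minimal ItemManager that makes it pass is the first step of the journey.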
Xcode has also set up a scheme to build the app and run the tests. Click on the Scheme selector next to the stop button in the toolbar, and select Edit Scheme.... In the Test action, all the test bundles of the project will be listed. In our case, only one test bundle is shown: ToDoTests. On the right-hand side of the window is a column named Test, with a checked checkbox. This means that if we run the tests while this scheme is selected in Xcode, all the tests in the selected test suite will be run.

Setting up useful Xcode behaviors for testing

Xcode has a feature called behaviors. With the use of behaviors and tabs, Xcode can show useful information depending on its state. Open the Behaviors window by going to Xcode | Behaviors | Edit Behaviors. On the left-hand side are the different stages for which you can add behaviors (Build, Testing, Running, and so on). The following behaviors are useful when doing TDD. These are the ones that I find useful; play around with the settings to find the ones that work best for you. Overall, I recommend using behaviors because I think they speed up development.

Useful build behaviors

When building starts, Xcode compiles the files and links them together. To see what is going on, you can activate the build log when building starts. It is recommended that you open the build log in a new tab, because this allows switching back to the code editor when no error occurs during the build. Select the Starts stage and check Show tab named. Put in the Log name and select in active window. Check the Show navigator setting and select Issue Navigator. At the bottom of the window, check Navigate to and select current log. After you have made these changes, the settings window will look like this:

Build and run to see what the behavior looks like.

Testing behaviors

To write some code, I have an Xcode tab called Coding. Usually, in this tab, the test is open on the left-hand side, and the Assistant Editor on the right-hand side shows the code to be tested (or, in the case of TDD, the code to be written). It looks like this:

When the test starts, we want to see the code editor again. So, we add a behavior to show the Coding tab. In addition to this, we want to see the Test Navigator and the debugger with the console view. When the tests succeed, Xcode should show a bezel to notify us that all tests have passed. Go to the Testing | Succeeds stage, and check the Notify using bezel or system notification setting. In addition to this, it should hide the navigator and the debugger, because we want to concentrate on refactoring or writing the next test.

In case the testing fails (which happens a lot in TDD), Xcode will show a bezel again. I like to hide the debugger, because usually it is not the best place to figure out what is going on in the case of a failing test; in most cases in TDD, we already know what the problem is. But we want to see the failing test. Therefore, check Show navigator and select Issue navigator. At the bottom of the window, check Navigate to and select first new issue. You can even make your Mac speak the announcements: check Speak announcements using and select the voice you like. But be careful not to annoy your coworkers. You might need their help in the future.

Now, the project and Xcode are set up, and we can start our TDD journey.

Summary

In this article, we took a look at the app that we are going to build throughout the course of this book.
We took a look at how the screens of the app will look when we are finished with it. We created the project that we will use later on and learned about Xcode behaviors.

Resources for Article:

Further resources on this subject:

- Thinking Functionally [article]
- Hosting on Google App Engine [article]
- Cloud and Async Communication [article]

React Native Tools and Resources

Packt
03 Jan 2017
14 min read
In this article written by Eric Masiello and Jacob Friedmann, authors of the book Mastering React Native, we will cover:

- Tools that improve upon the React Native development experience
- Ways to build React Native apps for platforms other than iOS and Android
- Great online resources for React Native development

(For more resources related to this topic, see here.)

Evaluating React Native Editors, Plugins, and IDEs

I'm hard pressed to think of another topic that developers are more passionate about than their preferred code editor. Of the many options, two popular editors today are GitHub's Atom and Microsoft's Visual Studio Code (not to be confused with Visual Studio 2015). Both are cross-platform editors for Windows, macOS, and Linux that are easily extended with additional features. In this section, I'll detail my personal experience with these tools and where I have found they complement the React Native development experience.

Atom and Nuclide

Facebook has created a package for Atom known as Nuclide that provides a first-class development environment for React Native. It features a built-in debugger similar to Chrome's DevTools, a React Native Inspector (think the Elements tab in Chrome DevTools), and support for the static type checker Flow. Download Atom from https://atom.io/ and Nuclide from https://nuclide.io/.

To install the Nuclide package, click on the Atom menu and then on Preferences..., and then select Packages. Search for Nuclide and click on Install. Once installed, you can actually start and stop the React Native Packager directly from Atom (though you need to launch the simulator/emulator separately) and set breakpoints in Atom itself rather than using Chrome's DevTools. Take a look at the following screenshot:

If you plan to use Flow, Nuclide will identify errors and display them inline. Take the following example: I've annotated the function timesTen such that it expects a number as a parameter and that it should return a number. However, you can see that there are some errors in the usage. Refer to the following code snippet:

/* @flow */
function timesTen(x: number): number {
  var result = x * 10;
  return 'I am not a number';
}

timesTen("Hello, world!");

Thankfully, the Flow integration will call out these errors in Atom for you. Refer to the following screenshot:

The Flow integration in Nuclide exposes two other useful features. You'll see annotated auto completion as you type. And, if you hold the Command key and click on a variable or function name, Nuclide will jump straight to the source definition, even if it's defined in a separate file. Refer to the following screenshot:

Visual Studio Code

Visual Studio Code is a first-class editor for JavaScript authors. Out of the box, it's packaged with a built-in debugger that can be used to debug Node applications. Additionally, VS Code comes with an integrated terminal and a git tool that nicely shows visual diffs. Download Visual Studio Code from https://code.visualstudio.com/.

The React Native Tools extension for VS Code adds some useful capabilities to the editor. For starters, you'll be able to execute the React Native: Run-iOS and React Native: Run Android commands directly from VS Code without needing to reach for the terminal, as shown in the following screenshot:

And, while a bit more involved than Atom to configure, you can use VS Code as a React Native debugger.
Take a look at the following screenshot:

The React Native Tools extension also provides IntelliSense for much of the React Native API, as shown in the following screenshot:

When reading through the VS Code documentation, I found it (unsurprisingly) more catered toward Windows users. So, if Windows is your thing, you may feel more at home with VS Code. As a macOS user, I slightly prefer Atom/Nuclide over VS Code. VS Code comes with more useful features out of the box, but that can easily be addressed by installing a few Atom packages. Plus, I found the Flow support with Nuclide really useful. But don't let me dissuade you from VS Code. Both are solid editors with great React Native support. And they're both free, so there is no harm in trying both.

Before totally switching gears, there is one more editor worth mentioning. Deco is an Integrated Development Environment (IDE) built specifically for React Native development. Standing up a new React Native project is super quick, since Deco keeps a local copy of everything you'd get when running react-native init. Deco also makes creating new stateful and stateless components super easy. Download Deco from https://www.decosoftware.com/.

Once you create a new component using Deco, it gives you a nicely prefilled template, including a place to add propTypes and defaultProps (something I often forget to do). Refer to the following screenshot:

From there, you can drag and drop components from the sidebar directly into your code. Deco will auto-populate many of the props for you, as well as add the necessary import statements. Take a look at the following code snippet:

<Image
  style={{
    width: 300,
    height: 200,
  }}
  resizeMode={"contain"}
  source={{uri: 'https://unsplash.it/600/400/?random'}}/>

The other nice feature Deco adds is the ability to easily launch your app from the toolbar in any installed iOS simulator or Android AVD. You don't even need to first manually open the AVD; Deco will do it all for you. Refer to the following screenshot:

Currently, creating a new project with Deco starts you off with an outdated version of React Native (version 0.27.2 as of this writing). If you're not concerned with using the latest version, Deco is a great way to get a React Native app up quickly. However, if you require more advanced tooling, I suggest you look at Atom with Nuclide or Visual Studio Code with the React Native Tools extension.

Taking React Native beyond iOS and Android

The development experience is one of the most highly touted features by React Native proponents. But as we well know by now, React Native is more than just a great development experience. It's also about building cross-platform applications with a common language and, often times, reusable code and components. Out of the box, the Facebook team has provided tremendous support for iOS and Android. And thanks to the community, React Native has expanded to other promising platforms. In this section, I'll take you through a few of these React Native projects. I won't go into great technical depth, but I'll provide a high-level overview and show how to get each running.

Introducing React Native Web

React Native Web is an interesting one. It treats many of the React Native components you've learned about, such as View, Text, and TextInput, as higher-level abstractions that map to HTML elements, such as div, span, and input, thus allowing you to build a web app that runs in a browser from your React Native code. Now if you're like me, your initial reaction might be: But why? We already have React for the web.
It's called... React! However, where React Native Web shines over React is in its ability to share components between your mobile app and the web, because you're still working with the same basic React Native APIs. Learn more about React Native Web at https://github.com/necolas/react-native-web.

Configuring React Native Web

React Native Web can be installed into your existing React Native project just like any other npm dependency:

npm install --save react react-native-web

Depending on the versions of React Native and React Native Web you've installed, you may encounter conflicting peer dependencies on React. This may require manually adjusting which version of React Native or React Native Web is installed. Sometimes, just deleting the node_modules folder and rerunning npm install does the trick. From there, you'll need some additional tools to build the web bundle. In this example, we'll use webpack and some related tooling:

npm install webpack babel-loader babel-preset-react babel-preset-es2015 babel-preset-stage-1 webpack-validator webpack-merge --save
npm install webpack-dev-server --save-dev

Next, create a webpack.config.js in the root of the project:

const webpack = require('webpack');
const validator = require('webpack-validator');
const merge = require('webpack-merge');

const target = process.env.npm_lifecycle_event;
let config = {};

const commonConfig = {
  entry: {
    main: './index.web.js'
  },
  output: {
    filename: 'app.js'
  },
  resolve: {
    alias: {
      'react-native': 'react-native-web'
    }
  },
  module: {
    loaders: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'babel',
        query: {
          presets: ['react', 'es2015', 'stage-1']
        }
      }
    ]
  }
};

switch (target) {
  case 'web:prod':
    config = merge(commonConfig, {
      devtool: 'source-map',
      plugins: [
        new webpack.DefinePlugin({
          'process.env.NODE_ENV': JSON.stringify('production')
        })
      ]
    });
    break;
  default:
    config = merge(commonConfig, {
      devtool: 'eval-source-map'
    });
    break;
}

module.exports = validator(config);

Add the following two entries to the scripts section of package.json:

"web:dev": "webpack-dev-server --inline --hot",
"web:prod": "webpack -p"

Next, create an index.html file in the root of the project:

<!DOCTYPE html>
<html>
  <head>
    <title>RNNYT</title>
    <meta charset="utf-8" />
    <meta content="initial-scale=1,width=device-width" name="viewport" />
  </head>
  <body>
    <div id="app"></div>
    <script type="text/javascript" src="/app.js"></script>
  </body>
</html>

And, finally, add an index.web.js file to the root of the project:

import React, { Component } from 'react';
import { View, Text, StyleSheet, AppRegistry } from 'react-native';

class App extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.text}>Hello World!</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#efefef',
    alignItems: 'center',
    justifyContent: 'center'
  },
  text: {
    fontSize: 18
  }
});

AppRegistry.registerComponent('RNNYT', () => App);
AppRegistry.runApplication('RNNYT', {
  rootTag: document.getElementById('app')
});

To run the development build, we'll run webpack-dev-server by executing the following command:

npm run web:dev

web:prod can be substituted to create a production-ready build. While developing, you can add React Native Web specific code much like you can with iOS and Android, by using Platform.OS === 'web' or by creating custom *.web.js components. React Native Web still feels pretty early days. Not every component and API is supported, and the HTML that's generated looks a bit rough for my tastes.
While developing with React Native Web, I think it helps to keep the right mindset: think of it as building a React Native mobile app, not a website. Otherwise, you may find yourself reaching for web-specific solutions that aren't appropriate for the technology.

React Native plugin for Universal Windows Platform

Announced at the Facebook F8 conference in April 2016, the React Native plugin for Universal Windows Platform (UWP) lets you author React Native apps for Windows 10 desktop, Windows 10 Mobile, and Xbox One. Learn more about the React Native plugin for UWP at https://github.com/ReactWindows/react-native-windows.

You'll need to be running Windows 10 in order to build UWP apps. You'll also need to follow the React Native documentation for configuring your Windows environment for building React Native apps. If you're not concerned with building Android on Windows, you can skip installing Android Studio. The plugin itself also has a few additional requirements. You'll need to be running at least npm 3.x and to install Visual Studio 2015 Community (not to be confused with Visual Studio Code). Thankfully, the Community version is free to use. The UWP plugin docs also tell you to install the Windows 10 SDK Build 10586. However, I found it's easier to do that from within Visual Studio once we've created the app, so we can save that part for later.

Configuring the React Native plugin for UWP

I won't walk you through every step of the installation; the UWP plugin docs detail the process well enough. Once you've satisfied the requirements, start by creating a new React Native project as normal:

react-native init RNWindows
cd RNWindows

Next, install and initialize the UWP plugin:

npm install --save-dev rnpm-plugin-windows
react-native windows

Running react-native windows will actually create a windows directory inside your project containing a Visual Studio solution file. If this is your first time installing the plugin, I recommend opening the solution (.sln) file with Visual Studio 2015. Visual Studio will then ask you to download several dependencies, including the latest Windows 10 SDK. Once Visual Studio has installed all the dependencies, you can run the app either from within Visual Studio or by running the following command:

react-native run-windows

Take a look at the following screenshot:

React Native macOS

Much as the name implies, React Native macOS allows you to create macOS desktop applications using React Native. This project works a little differently than React Native Web and the React Native plugin for UWP. As best I can tell, since React Native macOS requires its own custom CLI for creating and packaging applications, you are not able to build a macOS and mobile app from the same project. Learn more about React Native macOS at https://github.com/ptmt/react-native-macos.

Configuring React Native macOS

Much like you did with the React Native CLI, begin by installing the custom CLI globally, using the following command:

npm install react-native-macos-cli -g

Then, use it to create a new React Native macOS app by running the following commands:

react-native-macos init RNDesktopApp
cd RNDesktopApp

This will set you up with all the required dependencies, along with an entry point file, index.macos.js. There is no CLI command to spin up the app, so you'll need to open the Xcode project and manually run it.
Run the following command:

open macos/RNDesktopApp.xcodeproj

The documentation is pretty limited, but there is a nice UIExplorer app that can be downloaded and run to give you a good feel for what's available. While on some level it's unfortunate that your macOS app cannot live alongside your iOS and Android code, I cannot think of a use case that would call for such a thing. That said, I was delighted with how easy it was to get this project up and running.

Summary

I think it's fair to say that React Native is moving quickly. With a new version released roughly every two weeks, I've lost count of how many versions have passed by in the course of writing this book. I'm willing to bet React Native has probably bumped a version or two from the time you started reading this book until now. So, as much as I'd love to wrap up by saying you now know everything possible about React Native, sadly that isn't the case.

References

Let me leave you with a few valuable resources to continue your journey of learning and building apps with React Native:

- React Native Apple TV is a fork of React Native for building apps for Apple's tvOS. For more information, refer to https://github.com/douglowder/react-native-appletv. (Note that preliminary tvOS support has appeared in early versions of React Native 0.36.)
- React Native Ubuntu is another fork of React Native for developing React Native apps on Ubuntu for desktop Ubuntu and Ubuntu Touch. For more information, refer to https://github.com/CanonicalLtd/react-native
- JS.Coach is a collection of community-favorite components and plugins for all things React, React Native, Webpack, and related tools. For more information, refer to https://js.coach/react-native
- Exponent is described as Rails for React Native. It supports additional system functionality and UI components beyond what's provided by React Native. It will also let you build your apps without needing to touch Xcode or Android Studio. For more information, refer to https://getexponent.com/
- React Native Elements is a cross-platform UI toolkit for React Native. You can think of it as Bootstrap for React Native. For more information, refer to https://github.com/react-native-community/react-native-elements
- The Use React Native site is how I keep up with React Native releases and news in the React Native space. For more information, refer to http://www.reactnative.com/
- React Native Radio is a fantastic podcast hosted by Nader Dabit and a panel of hosts who interview other developers contributing to the React Native community. For more information, refer to https://devchat.tv/react-native-radio
- React Native Newsletter is an occasional newsletter curated by a team of React Native enthusiasts. For more information, refer to http://reactnative.cc/
- And, finally, Dotan J. Nahum maintains an amazing resource titled Awesome React Native, which includes articles, tutorials, videos, and well-tested components you can use in your next project. For more information, refer to https://github.com/jondot/awesome-react-native

Resources for Article:

Further resources on this subject:

- Getting Started [article]
- Getting Started with React [article]
- Understanding React Native Fundamentals [article]

Let's Build Applications for Wear 2.0

Packt
09 Feb 2018
13 min read
In this article, Ashok Kumar S, the author of the book Android Wear Projects, will get you started on writing Android Wear applications. As the title suggests, we will be building wear applications, but you can also expect a little bit of story behind every project and a comprehensive explanation of the components and structure of each application. We will be covering most of the Wear 2.0 standards and development practices in all the projects we build.

Why build wear applications?

The culture of wearing a utility that helps to perform certain actions has always been part of modern civilization. Wrist watches have become an augmented helping tool for checking the time and date: wearing a watch lets you check the time with just a glance. Technology has taken this experience to the next level. The first modern wearable watch, a combination of calculator and watch, was introduced to the world in 1970. Over the following decades, advancements in microprocessors and wireless technology led to the concept of "ubiquitous computing", during which time many of the leading electronics companies and start-ups began working on ideas that made wearable devices very popular.

Going forward, we will be building the five projects listed below:

- Note taking application
- Fitness application
- Wear Maps application
- Chat messenger
- Watch face

(For more resources related to this topic, see here.)

This article will also introduce you to setting up your wear application development environment, the best practices for wear application development, and the new user interface components; we will also explore Firebase technologies for chatting and notifications in one of the projects. Publishing a wear application to the Play Store follows much the same procedure as mobile application publishing, with a few small changes. Whether you are writing a wear app for the first time, or you have a fair bit of wear development experience and are struggling with how to get started, this article will be a helpful resource.

Note taking application

There are numerous ways to take notes. You could carry a notebook and pen in your pocket, or scribble thoughts on a piece of paper. Or, better yet, you could use your Android Wear device to take notes, so you always have a way to store thoughts even when there's no pen and paper nearby. The note taking app provides convenient access for storing and retrieving notes on an Android Wear device. There are many Android smartphone note taking apps that are popular for their simplicity and elegant functionality. For the scope of a wear device, it is necessary to keep the design simple and glanceable.

As software developers, we need to understand how important it is to target different device sizes and reach various types of devices. To solve this, the Android wearable support library has a component called BoxInsetLayout. Animated feedback through DelayedConfirmationView is implemented to give the user feedback on task completion.

When thinking about good wear application design, note that Google recommends using dark colors in wear applications for the best battery efficiency. The light color schemes used in typical material-designed mobile applications are not energy efficient on wear devices.
Light colors are less energy efficient on OLED displays. Light colors need to light up the pixels with brighter intensity: white needs to light up the RGB diodes in a pixel at 100%, so the more white and light color in an application, the less battery efficient the application will be.

On using custom fonts: in the world of digital design, making your application's visuals easy on users' eyes is important. The Lora font from the Google font collection is a well-balanced contemporary serif with roots in calligraphy. It is a text typeface with moderate contrast, well suited for body text. A paragraph set in Lora makes a memorable appearance because of its brushed curves in contrast with driving serifs. The overall typographic voice of Lora perfectly conveys the mood of a modern-day story or an art essay. Technically, Lora is optimized for screen appearance. We will also give the application list item animations, so that users will love to use the application often.

Fitness application

We are living in the realm of technology! But that is not the whole story: we are also living in the realm of intricate lifestyles that drive our health into various kinds of illness. When we talk about taking care of our health, we miss the simple things. Our body composition is roughly sixty percent water, and taking care of ourselves should start with drinking sufficient water. Adequate, regular water consumption supports good metabolism and healthy, functional organs.

The new millennium's advancements in technology are an expression of how one can use technology to do the right things. Android Wear integrates numerous sensors, which can be used to help Android Wear users measure their heart rate, step count, and more. Having said that, how about writing an application that reminds us to drink water every thirty minutes, measures our heart rate and step count, and offers a few health tips?

Material design drives mobile application development to new heights; in this project, we will learn about the wear navigation drawer and the other material design components that make the application stand out. The application tracks step counts through the step counter sensor, checks the heart pulse rate with an animated heartbeat projection, and reminds the user with hydration alarms, which in turn prompt the app user to drink water often.

Wear Maps application

In this project, we will build a map application with a quick note taking ability on the layers of the map. We humans travel to different cities, domestic and international, so how about keeping track of the places we have visited? We all use maps for different reasons, but in most cases we use them to plan particular activities, such as outdoor tours and cycling. Maps augment our intelligence in finding the fastest route from a source location to a destination.

Fetching the address from latitude and longitude using the Geocoder class is comprehensively explained. A map application also needs certain visual attractions, and that is carried out in this project, with the story explained comprehensively.

Chatting application

We could say that the belief system of social media has advanced communication and wiped out many of its difficulties.
Just a couple of decades back, the communication medium was letters; a couple of centuries back, it was trained birds; and if we look back still further, we will find a few more stories about the ways people used to communicate in those days. Now we are in the generation of IoT, wearable smart devices, and smartphones, where communication happens across the planet in a fraction of a second.

We will build mobile and wear applications that exhibit the power of the Google Wear messaging APIs, with a wear companion application to administer and respond to the messages being received. To support the chatting functionality, the project introduces Firebase technologies. We will build the wear and mobile apps together: a message typed on the wear device is received on the mobile and updated to Firebase. The project comprehensively explains the Firebase Realtime Database and, for notification purposes, introduces Firebase Functions as well. The application will have a user login page, a list of users to chat with, and a chatting screen. The project is conceptualized in such a way that you will be able to add all these techniques to your skill set and use them in production applications.

The Data Layer API establishes the communication channel between two Android nodes, and the project covers this process in detail: you will learn how to check whether the device has Google Play services and, if not, how to install it before using the application. There is a brief explanation of the Capability API and some of its best use cases in a chatting application context. Notifications have always been an important component of a chatting application, letting users know who texted them. You will learn how to use Firebase Functions to send push notifications; Firebase Functions offers triggers for all the Firebase technologies, and we will explore Realtime Database triggers in particular. You will also learn how to work with the input method framework and voice input on the wear device. After all this, you will understand the essentials of writing a chat application with a wear app companion.

Watch face

A watch face, also known as the dial, is the part of the clock that displays the time through fixed numbers and moving hands. This expression of checking the time can be designed with various artistic approaches and creativity. In this project, you will start writing your own watch face. The project comprehensively explains CanvasWatchFaceService, with all the callbacks needed for constructing a digital watch face, and also covers registering the watch face in the manifest, similar to a wallpaper service class. Keeping in mind that the watch face is going to be used on different form factors of wear devices, the watch face is written accordingly.

Wear 2.0 offers the watch face picker feature for setting a watch face from the list of available watch faces. You will come to understand the watch face elements of both analog and digital watch faces. There are certain common concerns when we talk about watch faces, such as how the watch face is going to get its data if it has any complications. Battery efficiency is one of the major concerns when choosing to write a watch face, as is how network-related operations are handled in the watch face, which sensors the watch face uses, and how often it accesses them.
Custom assets in a watch face, such as complex SVG animations and graphical animations, consume CPU and GPU cycles, and users like visually attractive watch faces more than simple analog and digital ones. Android Wear 2.0 supports complications straight from the Wear 2.0 SDK, so developers need not do the logical work of getting the data.

The project also covers interactive watch faces. Trends change all the time, and in Wear 2.0 the new interactive watch faces, which can have unique interactions and style expressions, are a great update; all watch face developers might have to start thinking about interactive watch faces. The idea is to make users like and love a watch face by giving them delightful and useful information on a timely basis, which changes the user experience of the watch face. Making a data-integrated watch face is an excellent piece of artistic engineering: deciding what data to express in the watch face and how the time and date are displayed.

More about Wear 2.0

You will also get to know the features that Wear 2.0 offers. Wear 2.0 is a prominent update, with plenty of new features bundled, including Google Assistant, stand-alone applications, new watch faces, and support for third-party complications. Wear 2.0 sets out to offer more, informed by ongoing market research, and Google is working with partner companies to build a powerful ecosystem for wear.

Stand-alone applications in wear are a brilliant feature that will create a lot of buzz among wear developers and wear users. After all, who wants to always carry a phone, or need a paired device, just to do some simple tasks? Stand-alone applications are a powerful feature of the wear ecosystem; how cool will it be to use wear apps without your phone nearby? There used to be various scenarios in which wear devices were phone dependent; for example, to receive a new email notification, the wear device needed to be connected to the phone for internet access. Now, a wear device can independently connect to Wi-Fi and sync all its apps for new updates, so the user can complete more tasks with wear apps without a phone paired to the device.

You will learn how to identify whether an application is stand-alone or dependent on a companion app, how stand-alone applications are installed from the Google Play Store, and about other Wear 2.0 changes, such as watch face complications and the watch face picker, along with a brief look at the storage mechanism of stand-alone wear applications. A wear device still needs to talk with a phone in many use cases, and the project discusses advertising the availability and capabilities of a device and retrieving the capable nodes for a requested capability. If the wear app has a companion app, we need to detect it on the phone or the wear device; if it is not installed, we can guide the user to the Play Store to install it. Wear 2.0 supports cloud messaging and cloud-based push notifications, and these are explained comprehensively.

Android Wear is evolving in every way. In Wear 1.0, switching between screens used to be tedious and confusing to wear users. Now, Google has introduced material design and interactive drawers, which include the single and multi-page navigation drawer, the action drawer, and more.
Typing on the tiny one-and-a-half-inch screen is a pretty challenging task for wear users, so Wear 2.0 introduces the input method framework, quick types, and swipes for entering input on the wear device directly.

This article is a resourceful journey for those who are planning to take up wear development and the Wear 2.0 standards. Everything in these projects reflects the usual tasks that most developers try to accomplish.

Resources for Article:

Further resources on this subject:

- Getting started with Android Development [article]
- Building your first Android Wear Application [article]
- The Art of Android Development Using Android Studio [article]

Intro to the Swift REPL and Playgrounds

Dov Frankel
11 Sep 2016
6 min read
When Apple introduced Swift at WWDC (its annual Worldwide Developers Conference) in 2014, it had a few goals for the new language. Among them was being easy to learn, especially compared to other compiled languages. The following is quoted from Apple's The Swift Programming Language:

Swift is friendly to new programmers. It is the first industrial-quality systems programming language that is as expressive and enjoyable as a scripting language.

The REPL

Swift's playgrounds embody that friendliness by modernizing the concept of a REPL (Read-Eval-Print Loop, pronounced "repple"). Introduced by the LISP language, and now a common feature of many modern scripting languages, REPLs allow you to quickly and interactively build up your code, one line at a time. A post on the Swift Blog gives an overview of the Swift REPL's features, but this is what using it looks like (to launch it, enter swift in Terminal, if you have Xcode already installed):

Welcome to Apple Swift version 2.2 (swiftlang-703.0.18.1 clang-703.0.29). Type :help for assistance.
  1> "Hello world"
$R0: String = "Hello World"
  2> let a = 1
a: Int = 1
  3> let b = 2
b: Int = 2
  4> a + b
$R1: Int = 3
  5> func aPlusB() {
  6.     print("\(a + b)")
  7. }
  8> aPlusB()
3

If you look at what's there, each line containing input you give has a small indentation. A line that starts a new command begins with the line number followed by > (1, 2, 3, 4, 5, and 8), and each subsequent line of a given command begins with the line number followed by . (6 and 7). These help to keep you oriented as you enter your code one line at a time. While it's certainly possible to work on more complicated code this way, it requires the programmer to keep more state about the code in their head, and it limits the output to data types that can be represented in text only.

Playgrounds

Playgrounds take the concept of a REPL to the next level by allowing you to see all of your code in a single editable code window, and by giving you richer options to visualize your data. To get started with a Swift playground, launch Xcode, and select File > New > Playground… (⌥⇧⌘N) to create a new playground. In the following, you can see a new playground with the same code entered into the previous REPL:

The results, shown in the gray area on the right-hand side of the window, update as you type, which allows for rapid iteration. You can write standalone functions, classes, or whatever level of abstraction you wish to work at for the task at hand, removing barriers that prevent the expression of your ideas or experimentation with the language and APIs. So, what types of goals can you accomplish?

Experimentation with the language and APIs

Playgrounds are an excellent place to learn about Swift, whether you're new to the language or new to programming. You don't need to worry about main(), app bundles, simulators, or any of the other things that go into making a full-blown app. Or, if you hear about a new framework and would like to try it out, you can import it into your playground and get your hands dirty with minimal effort. Crucially, it also blows away the typical code-build-run-quit-repeat cycle that can often take up so much development time.
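For example, kicking the tires on a Foundation API takes only a few lines, with each result appearing instantly in the results panel. This is just an illustrative sketch of the kind of experiment described above, written in the Swift 2 syntax of the article's era:

import Foundation

// Try out NSDateFormatter interactively; the results panel shows
// each intermediate value as soon as the line is typed.
let formatter = NSDateFormatter()
formatter.dateStyle = .LongStyle
let today = formatter.stringFromDate(NSDate()) // e.g. "September 11, 2016"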
Providing living documentation or code samples

Playgrounds provide a rich experience that allows users to try out concepts they're learning, whether they're reading a technical article or learning to use a framework. Aside from interactivity, playgrounds provide a whole other type of richness: built-in formatting via Markdown, which you can sprinkle into your playground as easily as writing comments. This allows some interesting options, such as describing exercises for students to complete, or providing sample code that runs without any action required of the user. Swift blog posts have included playgrounds to experiment with, as does the Swift Programming Language's A Swift Tour chapter. To author Markdown in your playground, start a comment block with /*:. Then, to see the comment rendered, click on Editor > Show Rendered Markup. There are some unique features available, such as marking page boundaries and adding fields that populate the built-in help Xcode shows. You can learn more at Apple's Markup Formatting Reference page.
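As a small sketch of what that markup looks like in practice (the prose and names here are invented for illustration), a playground page might mix rendered documentation and runnable code like this:

/*:
 # Exploring Optionals
 Try changing the value below to `nil` and watch the results panel.
 */
let maybeNumber: Int? = 42

// Unwrap the optional safely before using it.
if let number = maybeNumber {
    print("The number is \(number)")
}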
Designing code or UI

You can also use playgrounds to interactively visualize how your code functions. There are a few ways to see what your code is doing:

- Individual results are shown in the gray side panel and can be Quick-Looked (including your own custom objects that implement debugQuickLookObject()).
- Individual or looped values can be pinned to show inline with your code. A line inside a loop will read "(10 times)," with a little circle you can toggle to pin it. For instance, you can show how a value changes with each iteration, or how a view looks.
- Using some custom APIs provided in the XCPlayground module, you can show live UIViews and captured values. Just import XCPlayground and set XCPlayground.currentPage.liveView to a UIView instance, or call XCPlayground.currentPage.captureValue(someValue, withIdentifier: "My label") to fill the Assistant view.
- You also still have Xcode's console available for when debugging is best served by printing values and keeping scrolled to the bottom. As with any Swift code, you can write to the console with NSLog and print.

Working with resources

Playgrounds can also include resources for your code to use, such as images. A .playground file is an OS X package (a folder presented to the user as a file), which contains two subfolders: Sources and Resources. To view these folders in Xcode, show the Project Navigator in Xcode's left sidebar, the same as for a regular project. You can then drag any resources into the Resources folder, and they'll be exposed to your playground code the same way resources are in an app bundle. You can refer to them like so:

let spriteImage = UIImage(named: "sprite.png")

Xcode versions starting with 7.1 even support image literals, meaning you can drag a Resources image into your source code and treat it as a UIImage instance. It's a neat idea, but it makes for some strange-looking code. It's more useful for UIColors, which allow you to use a color picker. The Swift blog post goes into more detail on how image, color, and file literals work in playgrounds.

Wrap-up

Hopefully this has opened your eyes to the opportunities afforded by playgrounds. They can be useful in different ways to developers of various skill and experience levels (in Swift, or in general) when learning new things or working on solving a given problem. They allow you to iterate more quickly, and to visualize how your code operates more easily than other debugging methods.

About the author

Dov Frankel (pronounced as in "he dove swiftly into the shallow pool") works on Windows in-house software at a Connecticut hedge fund by day, and on independent Mac and iOS apps by night. Find his digital comic reader on the Mac App Store, and his iPhone app Afterglo in the iOS App Store. He blogs when the mood strikes him at dovfrankel.com; he's @DovFrankel on Twitter, and @abbeycode on GitHub.

Welcome to the New World

Packt
23 Jan 2017
8 min read
We live in very exciting times. Technology is changing at a pace so rapid that it is becoming near impossible to keep up with new frontiers as they arrive. And they seem to arrive on a daily basis now. Moore's Law continues to stand, meaning that technology is getting smaller and more powerful at a constant rate. As I said, very exciting. In this article by Jason Odom, the author of the book HoloLens Beginner's Guide, we will be discussing one of these new emerging technologies, one that is finally reaching a place more material than science fiction stories: Augmented, or Mixed, Reality. Imagine a world where our communication and entertainment devices are worn, and the digital tools we use, as well as the games we play, are holographic projections in the world around us. These holograms know how to interact with our world and change to fit our needs. Microsoft has led the charge by releasing such a device... the HoloLens.

(For more resources related to this topic, see here.)

The Microsoft HoloLens changes the paradigm of what we know as personal computing. We can now have our Word window up on the wall (this is how I am typing right now); we can have research material floating around it; we can have our communication tools, such as Gmail and Skype, in the area as well. We are finally no longer trapped by a virtual desktop, on a screen, sitting on a physical desktop. We aren't even trapped by the confines of a room anymore.

What exactly is the HoloLens?

The HoloLens is a first-of-its-kind, head-worn, standalone computer with a sensor array (which includes microphones and multiple types of cameras), a spatial sound speaker array, a light projector, and an optical waveguide. The HoloLens is not only a wearable computer; it is also a complete replacement for the standard two-dimensional display. The HoloLens is capable of using holographic projection to create multiple screens throughout an environment, as well as fully 3D-rendered objects. With the HoloLens sensor array, these holograms can fully interact with the environment you are in. The sensor array allows the HoloLens to see the world around it, to see input from the user's hands, and to hear voice commands. While Microsoft has been very quiet about what the entire sensor array includes, we have a good general idea of the components used. Let's have a look at them:

- One IMU: The Inertial Measurement Unit (IMU) is a sensor array that includes an accelerometer, a gyroscope, and a magnetometer. This unit handles head orientation tracking and compensates for the drift that comes from the gyroscope's eventual lack of precision.
- Four environment understanding sensors: Together, these form the spatial mapping that the HoloLens uses to create a mesh of the world around the user.
- One depth camera: Also known as a structured-light 3D scanner, this device measures the three-dimensional shape of an object using projected light patterns and a camera system. Microsoft first used this type of camera in the Kinect for the Xbox 360 and Xbox One.
- One ambient light sensor: Ambient light sensors, or photosensors, are used for ambient light sensing as well as proximity detection.
- One 2 MP photo/HD video camera: For taking pictures and video.
- Four-microphone array: These do a great job of listening to the user and not the sounds around them. Voice is one of the primary input types with the HoloLens.
Putting all of these elements together forms a holographic computer that allows the user to see, hear, and interact with the world around them in new and unique ways.

What you need to develop for the HoloLens

The HoloLens development environment breaks down into two primary tools: Unity and Visual Studio. Unity is the 3D environment that we will do most of our work in. This includes adding holograms, creating user interface elements, and adding sound, particle systems, and other things that bring a 3D program to life. If Unity is the meat on the bone, Visual Studio is the skeleton. Here we write scripts, or machine code, to make our 3D creations come to life and to add a level of control and immersion that Unity cannot produce on its own.

Unity

Unity is a software framework designed to speed up the creation of games and 3D-based software. Generally speaking, Unity is known as a game engine, but as the holographic world becomes more prevalent, the more we will use such a development environment for many different kinds of applications. Unity is an application that allows us to take 3D models, 2D graphics, particle systems, and sound, and make them interact with each other and with our user. Many elements are drag and drop, plug and play, what you see is what you get. This can simplify the iteration and testing process. As developers, we most likely do not want to build and compile for every little change we make in the development process. Unity allows us to see the changes in context to make sure they work; then, once we have made a group of changes, we can test on the HoloLens itself. This does not work for every aspect of HoloLens-Unity development, but it does work for a good 80% to 90%.

Visual Studio Community

Microsoft Visual Studio Community is a great free Integrated Development Environment (IDE). Here we use programming languages such as C# or JavaScript to code changes in the behavior of objects and, generally, make things happen inside our programs.

HoloToolkit - Unity

The HoloToolkit - Unity is a repository of samples, scripts, and components to help speed up the process of development. It covers a large selection of areas in HoloLens development, such as:

- Input: Gaze, gesture, and voice are the primary ways in which we interact with the HoloLens.
- Sharing: The sharing repository helps allow users to share holographic spaces and connect to each other via the network.
- Spatial Mapping: This is how the HoloLens sees our world. A large 3D mesh of our space is generated, giving our holograms something to interact with or bounce off of.
- Spatial Sound: The speaker array inside the HoloLens does an amazing job of giving the illusion of space. Objects behind us sound like they are behind us.

HoloLens emulator

The HoloLens emulator is an extension to Visual Studio that simulates how a program will run on the HoloLens. This is great for those who want to get started with HoloLens development but do not have an actual HoloLens yet. This software requires Microsoft Hyper-V, a feature only available in the Windows 10 Pro operating system. Hyper-V is a virtualization environment that allows the creation of virtual machines. A virtual machine emulates specific hardware so one can test without the actual hardware.

Visual Studio Tools for Unity

This collection of tools adds IntelliSense and debugging features to Visual Studio. If you use Visual Studio and Unity, this is a must-have:
This is designed to speed up many processes when writing code. The version that comes with Visual Studio Tools for Unity includes Unity-specific additions.

Debugging: Before this extension existed, debugging Unity apps proved to be a little tedious. With this tool, we can now debug Unity applications inside Visual Studio, speeding up the bug-squashing process considerably.

Other useful tools

The following are some other useful tools you will want:

Image editor: Photoshop and GIMP are both good examples of programs that allow us to create 2D UI elements and textures for objects in our apps.

3D modeling software: 3D Studio Max, Maya, and Blender are all programs that allow us to make 3D objects that can be imported into Unity.

Sound editing software: There are a few resources for free sounds on the web; with that in mind, Sound Forge is a great tool for editing those sounds and layering sounds together to create new ones.

Summary

In this article, we have gotten to know a little bit about the HoloLens, so we can begin our journey into this new world. Here the only limitations are our imaginations.

Resources for Article:

Further resources on this subject:

Creating a Supercomputer [article]

Building Voice Technology on IoT Projects [article]

C++, SFML, Visual Studio, and Starting the first game [article]
Toy Bin

Packt
08 Mar 2017
8 min read
In this article by Steffen Damtoft Sommer and Jim Campagno, the authors of the book Swift 3 Programming for Kids, we will walk you through what an array is. Arrays are considered collection types in Swift and are very powerful.

(For more resources related to this topic, see here.)

Array

An array stores values of the same type in an ordered list. The following is an example of an array:

let list = ["Legos", "Dungeons and Dragons", "Gameboy", "Monopoly", "Rubix Cube"]

This is an array (which you can think of as a list). Arrays are ordered collections of values. We've created a constant called list of the [String] type and assigned it a value that represents our list of toys that we want to take with us. When describing arrays, you surround the type of the values being stored in the array with square brackets: [String]. The following is another array, called numbers, which contains the four values 5, 2, 9, and 22:

let numbers = [5, 2, 9, 22]

You would describe numbers as an array containing Int values, which can be written as [Int]. We can confirm this by holding Alt and selecting the numbers constant to see its type in a playground file.

What if we were to go back and confirm that list is an array of String values? Let's Alt-click that constant to make sure.

Similar to how we created instances of String and Int without providing any type information, we're doing the same thing here when we create list and numbers. Both list and numbers are created taking advantage of type inference. In creating our two arrays, we weren't explicit in providing any type information; we just created the array, and Swift was able to figure out its type for us. If we want to, though, we can provide type information, as follows:

let colors: [String] = ["Red", "Orange", "Yellow"]

colors is a constant of the [String] type. Now that we know how to create an array in Swift (which can be compared to a list in real life), how can we actually use it? Can we access various items from the array? If so, how? Also, can we add new items to the list in case we forgot to include any? Yes to all of these questions.

Every element (or item) in an array is indexed. What does that mean? Well, you can think of being indexed as being numbered, except that there's one big difference between how we humans number things and how arrays number things. Humans start from 1 when they create a list (just like we did when we created our preceding list). An array starts from 0, so the first element in an array is considered to be at index 0. Always remember that the first item in any array begins at index 0. If we want to grab the first item from an array, we do so using what is referred to as subscript syntax:

let firstItem = list[0]

That 0 enclosed in two square brackets is what is known as subscript syntax. We are looking to access a certain element in the array at a certain index. In order to do that, we use a subscript, putting the index of the item we want within square brackets. In doing so, it will return the value at that index. The value at the index in our preceding example is Legos. The = sign is also referred to as the assignment operator, so we are assigning the Legos value to a new constant called firstItem. If we were to print out firstItem, Legos should print to the console:

print(firstItem) // Prints "Legos"

If we want to grab the last item in this array, how do we do it? Well, there are five items in the array, so the last item should be at index 5, right? Wrong!
What if we wrote the following code (which would be incorrect!)?

let lastItem = list[5]

This would crash our application, which would be bad. When working with arrays, you need to ensure that you don't attempt to grab an item at an index that doesn't exist. There is no item in our array at index 5, which is what makes our application crash. When you run the app, you will receive the error fatal error: Index out of range.

Let's correctly grab the last item in the array:

let lastItem = list[4]
print("I'm not as good as my sister, but I love solving the \(lastItem)")
// Prints "I'm not as good as my sister, but I love solving the Rubix Cube"

Comments in code are made by writing text after //. None of this text is considered code and will not be executed; it's a way for you to leave notes in your code.

All of a sudden, you've now decided that you don't want to take the Rubix Cube, as it's too difficult to play with. You were never able to solve it on Earth, so you start wondering why bringing it to the moon would help solve that problem. Bringing crayons is a much better idea. Let's swap out the Rubix Cube for crayons, but how do we do that? Using subscript syntax, we should be able to assign a new value to the array. Let's give it a shot:

list[4] = "Crayons"

This will not work! But why? Can you take a guess? It's telling us that we cannot assign through a subscript because list is a constant (we declared it using the let keyword). Ah! That's exactly how String and Int work. We decide whether or not we can change (mutate) the array based on the let or var keyword, just like with every other type in Swift. Let's change the list array to a variable using the var keyword:

var list = ["Legos", "Dungeons and Dragons", "Gameboy", "Monopoly", "Rubix Cube"]

After doing so, we should be able to run this code without any problem:

list[4] = "Crayons"

If we decide to print the entire array, we will see the following printed to the console:

["Legos", "Dungeons and Dragons", "Gameboy", "Monopoly", "Crayons"]

Note how Rubix Cube is no longer the value at index 4 (our last index); it has been changed to Crayons. That's how we can mutate (or change) elements at certain indexes in our array.

What if we want to add a new item to the array; how do we do that? We've just seen that trying to use subscript syntax with an index that doesn't exist in our array crashes our application, so we know we can't use that to add new items. Apple (having created Swift) has provided hundreds, if not thousands, of functions that are available on all the different types (like String, Int, and array). You can consider yourself an instance of a person (person being the name of the type). Being an instance of a person, you can run, eat, sleep, study, and exercise (among other things). These things are considered functions (or methods) that are available to you. Your pet rock doesn't have these functions available to it. Why? Because it's an instance of a rock and not an instance of a person. An instance of a rock doesn't have the same functions available to it that an instance of a person has.

All that being said, an array can do things that a String and an Int can't do. No, arrays can't run or eat, but they can append (or add) new items to themselves. An array does this by calling the append(_:) method available to it. This method can be called on an instance of an array (like the preceding list) using what is known as dot syntax.
In dot syntax, you write the name of the method immediately after the instance name, separated by a period (.), with no space:

list.append("Play-Doh")

Just as if we were to tell a person to run, we are telling the list to append. However, we can't just tell it to append; we have to pass an argument to the append function so that it has something to add to the list. Our list array now looks like this:

["Legos", "Dungeons and Dragons", "Gameboy", "Monopoly", "Crayons", "Play-Doh"]

Summary

We have covered a lot of material important to understanding Swift and writing iOS apps here. Feel free to reread what you've read so far, as well as write code in a playground file. Create your own arrays, add whatever items you want to them, and change the values at certain indexes. Get used to the syntax of creating arrays as well as appending new items. If you feel comfortable with how arrays work up to this point, that's awesome; keep up the great work!

Resources for Article:

Further resources on this subject:

Introducing the Swift Programming Language [article]

The Swift Programming Language [article]

Functions in Swift [article]
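As a quick playground exercise to go with this article (our own addition, not part of the book excerpt), here is a short sketch that recaps the indexing rules above and shows how to guard against the out-of-range crash we ran into earlier. The toys array name is just an illustrative stand-in for any array you create:

var toys = ["Legos", "Dungeons and Dragons", "Gameboy", "Monopoly", "Crayons", "Play-Doh"]

// Subscripting past the end crashes, so check the index first.
let wanted = 7
if toys.indices.contains(wanted) {
    print(toys[wanted])
} else {
    print("No toy at index \(wanted)") // Prints "No toy at index 7"
}

// count is always one more than the last valid index,
// so the last item lives at index count - 1.
let lastToy = toys[toys.count - 1]
print("Last toy: \(lastToy)") // Prints "Last toy: Play-Doh"

Checking indices before subscripting is a small habit that prevents the fatal error: Index out of range crash from ever reaching your users.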
Xamarin.Forms

Packt
09 Dec 2016
11 min read
Since the beginning of Xamarin's lifetime as a company, its motto has always been to present the native APIs on iOS and Android idiomatically to C#. This was a great strategy in the beginning, because applications built with Xamarin.iOS or Xamarin.Android were pretty much indistinguishable from native Objective-C or Java applications. Code sharing was generally limited to non-UI code, which left a potential gap to fill in the Xamarin ecosystem: a cross-platform UI abstraction. Xamarin.Forms is the solution to this problem, a cross-platform UI framework that renders native controls on each platform. Xamarin.Forms is a great framework for those who know C# (and XAML) but may not want to get into the full details of using the native iOS and Android APIs. In this article by Jonathan Peppers, author of the book Xamarin 4.x Cross-Platform Application Development - Third Edition, we will discuss the following topics:

Using XAML with Xamarin.Forms

Data binding and MVVM with Xamarin.Forms

(For more resources related to this topic, see here.)

Using XAML in Xamarin.Forms

In addition to defining Xamarin.Forms controls from C# code, Xamarin has provided the tooling for developing your UI in XAML (Extensible Application Markup Language). XAML is a declarative language that is basically a set of XML elements that map to certain controls in the Xamarin.Forms framework. Using XAML is comparable to using HTML to define the UI on a webpage, with the exception that XAML in Xamarin.Forms creates C# objects that represent a native UI.

To understand how XAML works in Xamarin.Forms, let's create a new page with lots of UI on it. Return to your HelloForms project from earlier, and open the HelloFormsPage.xaml file. Add the following XAML code between the <ContentPage> tags:

<StackLayout Orientation="Vertical" Padding="10,20,10,10">
    <Label Text="My Label" XAlign="Center" />
    <Button Text="My Button" />
    <Entry Text="My Entry" />
    <Image Source="https://www.xamarin.com/content/images/pages/branding/assets/xamagon.png" />
    <Switch IsToggled="true" />
    <Stepper Value="10" />
</StackLayout>

Go ahead and run the application on iOS and Android; your application will look something like the following screenshots:

First, we created a StackLayout control, which is a container for other controls. It can lay out controls either vertically or horizontally, one by one, as defined by the Orientation value. We also applied a padding of 10 around the sides and bottom, and 20 from the top to adjust for the iOS status bar. You may recognize this syntax for defining rectangles if you are familiar with WPF or Silverlight; Xamarin.Forms uses the same syntax of left, top, right, and bottom values delimited by commas.

We also used several of the built-in Xamarin.Forms controls to see how they work:

Label: We used this earlier in the article. Used only for displaying text, this maps to a UILabel on iOS and a TextView on Android.

Button: A general-purpose button that can be tapped by a user. This control maps to a UIButton on iOS and a Button on Android.

Entry: This control is a single-line text entry. It maps to a UITextField on iOS and an EditText on Android.

Image: This is a simple control for displaying an image on the screen, which maps to a UIImageView on iOS and an ImageView on Android. We used the Source property of this control, which loads an image from a web address. Using URLs on this property is nice, but it is best for performance to include the image in your project where possible.
Switch: This is an on/off switch or toggle button. It maps to a UISwitch on iOS and a Switch on Android.

Stepper: This is a general-purpose input for entering numbers via two plus and minus buttons. On iOS this maps to a UIStepper, while on Android Xamarin.Forms implements the functionality with two Buttons.

These are just some of the controls provided by Xamarin.Forms. There are also more complicated controls, such as the ListView and TableView, that you would expect for delivering mobile UIs.

Even though we used XAML in this example, you could also implement this Xamarin.Forms page from C#. Here is an example of what that would look like:

public class UIDemoPageFromCode : ContentPage
{
    public UIDemoPageFromCode()
    {
        var layout = new StackLayout
        {
            Orientation = StackOrientation.Vertical,
            Padding = new Thickness(10, 20, 10, 10),
        };
        layout.Children.Add(new Label
        {
            Text = "My Label",
            XAlign = TextAlignment.Center,
        });
        layout.Children.Add(new Button
        {
            Text = "My Button",
        });
        layout.Children.Add(new Image
        {
            Source = "https://www.xamarin.com/content/images/pages/branding/assets/xamagon.png",
        });
        layout.Children.Add(new Switch
        {
            IsToggled = true,
        });
        layout.Children.Add(new Stepper
        {
            Value = 10,
        });
        Content = layout;
    }
}

So you can see how using XAML can be a bit more readable and is generally a bit better for declaring UIs. However, using C# to define your UIs is still a viable, straightforward approach.

Using data binding and MVVM

At this point, you should be grasping the basics of Xamarin.Forms, but may be wondering how the MVVM design pattern fits into the picture. The MVVM design pattern was originally conceived for use along with XAML and the powerful data binding features XAML provides, so it is only natural that it is a perfect design pattern to use with Xamarin.Forms.

Let's cover the basics of how data binding and MVVM are set up with Xamarin.Forms:

Your Model and ViewModel layers will remain mostly unchanged from the MVVM pattern.

Your ViewModels should implement the INotifyPropertyChanged interface, which facilitates data binding. To simplify things in Xamarin.Forms, you can use the BindableObject base class and call OnPropertyChanged when values change on your ViewModels.

Any Page or control in Xamarin.Forms has a BindingContext, which is the object that it is data-bound to. In general, you can set a corresponding ViewModel to each view's BindingContext property.

In XAML, you can set up a data binding by using syntax of the form Text="{Binding Name}". This example would bind the Text property of the control to a Name property of the object residing in the BindingContext.

In conjunction with data binding, events can be translated to commands using the ICommand interface. So, for example, a Button's click event can be data-bound to a command exposed by a ViewModel. There is a built-in Command class in Xamarin.Forms to support this.

Data binding can also be set up from C# code in Xamarin.Forms via the Binding class. However, it is generally much easier to set up bindings from XAML, since the syntax has been simplified with XAML markup extensions.

Now that we have covered the basics, let's go step by step and use Xamarin.Forms. For the most part, we can reuse most of the Model and ViewModel layers, although we will have to make a few minor changes to support data binding from XAML. Let's begin by creating a new Xamarin.Forms application backed by a PCL, named XamSnap:

First create three folders in the XamSnap project named Views, ViewModels, and Models.
Add the appropriate ViewModels and Models.

Build the project, just to make sure everything is saved. You will get a few compiler errors that we will resolve shortly.

The first class we will need to edit is the BaseViewModel class. Open it and make the following changes:

public class BaseViewModel : BindableObject
{
    protected readonly IWebService service = DependencyService.Get<IWebService>();
    protected readonly ISettings settings = DependencyService.Get<ISettings>();

    bool isBusy = false;
    public bool IsBusy
    {
        get { return isBusy; }
        set
        {
            isBusy = value;
            OnPropertyChanged();
        }
    }
}

First of all, we removed the calls to the ServiceContainer class, because Xamarin.Forms provides its own IoC container called the DependencyService. It has one method, Get<T>, and registrations are set up via an assembly attribute that we will add shortly. Additionally, we removed the IsBusyChanged event in favor of the INotifyPropertyChanged interface that supports data binding. Inheriting from BindableObject gives us the helper method OnPropertyChanged, which we use to inform bindings in Xamarin.Forms that the value has changed. Notice that we didn't pass a string containing the property name to OnPropertyChanged. This method uses a lesser-known feature of .NET called CallerMemberName, which automatically fills in the calling property's name at runtime.

Next, let's set up our needed services with the DependencyService. Open App.xaml.cs in the root of the PCL project and add the following two lines above the namespace declaration:

[assembly: Dependency(typeof(XamSnap.FakeWebService))]
[assembly: Dependency(typeof(XamSnap.FakeSettings))]

The DependencyService will automatically pick up these attributes and inspect the types we declared. Any interfaces these types implement will be returned for any future callers of DependencyService.Get<T>. I normally put all Dependency declarations in the App.cs file, just so they are easy to manage and in one place.

Next, let's modify LoginViewModel by adding a new property:

public Command LoginCommand { get; set; }

We'll use this shortly for data binding a button's command. One last change in the ViewModel layer is to set up INotifyPropertyChanged for MessageViewModel:

Conversation[] conversations;
public Conversation[] Conversations
{
    get { return conversations; }
    set
    {
        conversations = value;
        OnPropertyChanged();
    }
}

Likewise, you could repeat this pattern for the remaining public properties throughout the ViewModel layer, but this is all we will need for this example. Next, let's create a new Forms ContentPage XAML file under the Views folder named LoginPage. In the code-behind file, LoginPage.xaml.cs, we'll just need to make a few changes:

public partial class LoginPage : ContentPage
{
    readonly LoginViewModel loginViewModel = new LoginViewModel();

    public LoginPage()
    {
        Title = "XamSnap";
        BindingContext = loginViewModel;

        loginViewModel.LoginCommand = new Command(async () =>
        {
            try
            {
                await loginViewModel.Login();
                await Navigation.PushAsync(new ConversationsPage());
            }
            catch (Exception exc)
            {
                await DisplayAlert("Oops!", exc.Message, "Ok");
            }
        });

        InitializeComponent();
    }
}

We did a few important things here, including setting the BindingContext to our LoginViewModel. We set up the LoginCommand, which basically invokes the Login method and displays a message if something goes wrong. It also navigates to a new page if successful. We also set the Title, which will show up in the top navigation bar of the application.
Next, open LoginPage.xaml and add the following XAML code inside the ContentPage's content:

<StackLayout Orientation="Vertical" Padding="10,10,10,10">
    <Entry Placeholder="Username" Text="{Binding UserName}" />
    <Entry Placeholder="Password" Text="{Binding Password}" IsPassword="true" />
    <Button Text="Login" Command="{Binding LoginCommand}" />
    <ActivityIndicator IsVisible="{Binding IsBusy}" IsRunning="true" />
</StackLayout>

This sets up the basics of two text fields, a button, and a spinner, complete with all the bindings to make everything work. Since we set up the BindingContext from the LoginPage code-behind, all the properties are bound to the LoginViewModel.

Next, create a ConversationsPage as a XAML page just like before, and edit the ConversationsPage.xaml.cs code-behind:

public partial class ConversationsPage : ContentPage
{
    readonly MessageViewModel messageViewModel = new MessageViewModel();

    public ConversationsPage()
    {
        Title = "Conversations";
        BindingContext = messageViewModel;
        InitializeComponent();
    }

    protected async override void OnAppearing()
    {
        try
        {
            await messageViewModel.GetConversations();
        }
        catch (Exception exc)
        {
            await DisplayAlert("Oops!", exc.Message, "Ok");
        }
    }
}

In this case, we repeated a lot of the same steps. The exception is that we used the OnAppearing method as a way to load the conversations to display on the screen.

Now let's add the following XAML code to ConversationsPage.xaml:

<ListView ItemsSource="{Binding Conversations}">
    <ListView.ItemTemplate>
        <DataTemplate>
            <TextCell Text="{Binding UserName}" />
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>

In this example, we used a ListView to data-bind a list of items and display them on the screen. We defined a DataTemplate class, which represents a set of cells for each item in the list that the ItemsSource is data-bound to. In our case, a TextCell displaying the UserName is created for each item in the Conversations list.

Last but not least, we must return to the App.xaml.cs file and modify the startup page:

MainPage = new NavigationPage(new LoginPage());

We used a NavigationPage here so that Xamarin.Forms can push and pop between different pages. This uses a UINavigationController on iOS, so you can see how the native APIs are being used on each platform.

At this point, if you compile and run the application, you will get a functional iOS and Android application that can log in and view a list of conversations.

Summary

In this article we covered the basics of Xamarin.Forms and how it can be very useful for building your own cross-platform applications. Xamarin.Forms shines for certain types of apps, but can be limiting if you need to write more complicated UIs or take advantage of native drawing APIs. We discovered how to use XAML for declaring our Xamarin.Forms UIs and understood how Xamarin.Forms controls are rendered on each platform. We also dove into the concepts of data binding and how to use the MVVM design pattern with Xamarin.Forms.

Resources for Article:

Further resources on this subject:

Getting Started with Pentaho Data Integration [article]

Where Is My Data and How Do I Get to It? [article]

Configuring and Managing the Mailbox Server Role [article]
Unit Testing Apps with Android Studio

Troy Miles
15 Mar 2016
6 min read
We will need to create an Android app, get it all set up, and then add tests to it. Let's begin.

1. Start Android Studio and select New Project.
2. Change the Application name to UTest. Click Next.
3. Click Next again.
4. Click Finish.

Now that we have the project started, let's set it up. Open the layout resource file activity_main.xml and add an ID to the TextView. It should look as follows:

<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:paddingBottom="@dimen/activity_vertical_margin"
    tools:context="com.tekadept.utest.app.MainActivity">

    <TextView
        android:id="@+id/greeting"
        android:text="@string/hello_world"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
</RelativeLayout>

The random message

Next we modify the MainActivity class. We are going to add some code that displays a random greeting message to the user. Modify MainActivity so that it looks like the following code:

TextView txtGreeting;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    txtGreeting = (TextView) findViewById(R.id.greeting);

    Random rndGenerator = new Random();
    int rnd = rndGenerator.nextInt(4);
    String greeting = getGreeting(rnd);
    txtGreeting.setText(greeting);
}

private String getGreeting(int msgNumber) {
    String greeting;
    switch (msgNumber) {
        case 0:
            greeting = "Holamundo";
            break;
        case 1:
            greeting = "Bonjour tout le monde";
            break;
        case 2:
            greeting = "Ciao mondo";
            break;
        case 3:
            greeting = "Hallo Welt";
            break;
        default:
            greeting = "Hello world";
            break;
    }
    return greeting;
}

At this point, if you run the app, it should display one of four random greetings each time. We want to test the getGreeting method — we need to be sure that the string it returns matches the number we sent it. Currently, however, we have no way to know that.

In order to add a test package, we need to hover over the package name. For my app, the package name is com.tekadept.utest.app. It is the line directly below the java directory. The rest of the steps are as follows:

Right-click on the package name and choose New -> Package. Give your new package the name tests. Click OK. Right-click on tests and choose New -> Java Class. Enter MainActivityTest as the name and click OK.

Inside MainActivityTest, we are not yet extending the proper base class. Let's fix that. Change the MainActivityTest class so that it looks like the following code:

package com.tekadept.utest.app.tests;

import android.test.ActivityInstrumentationTestCase2;

import com.tekadept.utest.app.MainActivity;

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {
    public MainActivityTest() {
        super(MainActivity.class);
    }
}

We've done two things. First, we changed the base class to ActivityInstrumentationTestCase2. Second, we added a constructor method. Before we can test the logic of the getGreeting method, we need to make it visible to outside classes by changing its modifier from private to public. Once we've done that, return to the MainActivityTest class and add a new method, testGetGreeting.
This is shown in the following code:

public void testGetGreeting() throws Exception {
    MainActivity activity = getActivity();

    int count = 0;
    String result = activity.getGreeting(count);
    Assert.assertEquals("Holamundo", result);

    count = 1;
    result = activity.getGreeting(count);
    Assert.assertEquals("Bonjour tout le monde", result);

    count = 2;
    result = activity.getGreeting(count);
    Assert.assertEquals("Ciao mondo", result);

    count = 3;
    result = activity.getGreeting(count);
    Assert.assertEquals("Hallo Welt", result);
}

Time to test

All we need to do now is create a configuration for our test package. Click Run -> Edit Configurations…. In the Run/Debug Configurations dialog, click the plus sign in the upper left-hand corner, then click Android Tests. For the name, enter test. Make sure the General tab is selected. For Module, choose app. For Test, choose All in Package. For Package, browse down to the tests folder. An Android unit test must run on a device or emulator. I prefer having the chooser dialog come up, so I've selected that option; you should select whichever option works best for you. Then click OK.

At this point, you have a working app complete with a functioning unit test. To run the unit test, choose the test configuration from the drop-down menu to the left of the run button, then click the run button. After building your app and running it on your selected device, Android Studio will show the test results. If you don't see the results, click the run button in the lower left-hand corner of Android Studio. Green is good; red means one or more tests have failed. Currently our one test should be passing, so everything should be green.

In order to see a test fail, let's make a temporary change to the getGreeting method. Change the first greeting from "Holamundo" to "Adios mundo". Save your change and click the run button to run the tests again. This time the test should fail, and you should see a message something like the following:

The test runner shows the failure message and includes a stack trace of the failure. The first line of the stack trace shows that the test failed on line 17 of MainActivityTest. Don't forget to restore the MainActivity class's getGreeting method to fix the failing unit test.

Conclusion

That is it for this post. You now know how to add a unit test package to Android Studio. If you had any trouble with this post, be sure to check out the complete source code for the UTest project on my GitHub repo at: https://github.com/Rockncoder/UTest.

About the author

Troy Miles, also known as the Rockncoder, currently has fun writing full-stack code with ASP.NET MVC or Node.js on the backend and web or mobile up front. He started coding over 30 years ago, cutting his teeth writing games for the C64, Apple II, and IBM PC. After burning out, he moved on to Windows system programming before catching Internet fever just before the dot-com bubble burst. After realizing that mobile devices were the perfect window into backend data, he added mobile programming to his repertoire. He loves competing in hackathons and randomly posting interesting code nuggets on his blog: http://therockncoder.blogspot.com/.
Fake Swift Enums and User-Friendly Frameworks

Daniel Leping
28 Sep 2016
7 min read
Swift enums are a great and useful hybrid of classic enums and tuples. One of their most attractive points is the short .something syntax when you pass one as an argument to a function. Still, they are quite limited in certain situations. OK, enough about enums themselves, because this post is about fake enums (don't take my word literally here).

Why?

Everybody is talking about usability, UX, and how to make an app user-friendly. This all makes sense, because at the end of the day we are all humans and perceive things with feelings, so the app leaves an impression footprint. There are a lot of users who would prefer a friendlier app over one providing richer functionality. All these talks are proven by Apple and their products. But what about developers, and the tools and bricks we use? Right now I'm not talking about the IDEs, but rather about the frameworks and the APIs. Being an open source framework developer myself (right now I'm working on Swift Express, a web application framework in Swift, and the foundation around it), I'm concerned about the look of the APIs we provide. It matters just as much as the look of the app, so call it a framework's UX if you like. If developers are pleased with the API a framework provides, they are already 90% your users.

This post was particularly inspired by the Event sub-micro-framework, part of the Reactive Swift foundation I'm creating right now to make Express solid. We took the basic idea from node.js's EventEmitter, which is very easy to use in my opinion. However, instead of the String event ID approach provided by Node, we wanted the .something style (read above for what I think of nice APIs), and while we hoped enums would work great here, we ran into their limitations. The hardest part was allowing closures with different argument types for different event types. It's all very simple in dynamic languages like JavaScript, but here we are type-safe. You know...

The problem

Usually, when creating a new framework, I first try to write an ideal API, or the one I would like to see at the very end. And here is what I had:

eventEmitter.on(.stringevent) { string in
    print("string:", string)
}

eventEmitter.on(.intevent) { i in
    print("int:", i)
}

eventEmitter.on(.complexevent) { (s, i) in
    print("complex: string:", s, "int:", i)
}

This seems easy enough for the final user. What I like most here is that different EventEmitters can have different sets of events with specific data payloads and still provide the .something enum-style notation. This is easier said than done. With enums, you cannot have an associated type bound to a specific case; it must be bound to the entire enum, or nothing. So there is no way to provide a specific data payload for a particular case. But I was very clear that I wanted everything type-safe, so there could be no dynamic arguments.

The research

First of all, I began investigating whether there is a way to use the .something notation without enums. The first thing I recalled was the OptionSetType, which is mainly used to combine flags for C APIs, and it allows the .something notation. You might want to investigate this protocol, as it's just cool and useful in many situations where enums are not enough. After a bit of experimenting, I found out that any class or struct having a static member of its own type can mimic an enum.
Pretty much like this:

struct TestFakeEnum {
    private init() {
    }

    static let one: TestFakeEnum = TestFakeEnum()
    static let two: TestFakeEnum = TestFakeEnum()
    static let three: TestFakeEnum = TestFakeEnum()
}

func funWithEnum(arg: TestFakeEnum) {
}

func testFakeEnum() {
    funWithEnum(.one)
    funWithEnum(.two)
    funWithEnum(.three)
}

This code will compile and run correctly. These are the basics of any fake enum. Even though the example above does not provide any particular benefit over built-in enums, it demonstrates the fundamental possibility.

Getting generic

Let's move on. First of all, to give our events a specific data payload, we've added an EventProtocol (just keep it in mind; it will be important later):

// Do not pay attention to Hashable; it's used for the internal event
// routing mechanism, which is not a subject of this article.
public protocol EventProtocol : Hashable {
    associatedtype Payload
}

To make our emitter even better, we should not limit it to a restricted set of events, but rather allow the user to extend it. To achieve this, I've added the notion of an EventGroup. It's not a particular type, but rather an informal concept that every EventGroup should follow. Here is an example of an EventGroup:

struct TestEventGroup<E : EventProtocol> {
    internal let event: E

    private init(_ event: E) {
        self.event = event
    }

    static var string: TestEventGroup<TestEventString> {
        return TestEventGroup<TestEventString>(.event)
    }

    static var int: TestEventGroup<TestEventInt> {
        return TestEventGroup<TestEventInt>(.event)
    }

    static var complex: TestEventGroup<TestEventComplex> {
        return TestEventGroup<TestEventComplex>(.event)
    }
}

Here is what TestEventString, TestEventInt, and TestEventComplex are (real enums are used here only to get Hashable conformance and a case singleton, so don't worry about them):

// Notice that every event here has its own Payload type.
enum TestEventString : EventProtocol {
    typealias Payload = String
    case event
}

enum TestEventInt : EventProtocol {
    typealias Payload = Int
    case event
}

enum TestEventComplex : EventProtocol {
    typealias Payload = (String, Int)
    case event
}

So, to get a generic with .something notation, you have to create a generic class or struct with static members of the owner type that have a particular generic parameter applied. Now, how can you use it? How can you discover which generic type is associated with a specific option? For that, I used the following generic function:

// Notice that Payload is type-safely extracted from the associated event here with E.Payload.
func on<E : EventProtocol>(groupedEvent: TestEventGroup<E>, handler: E.Payload -> Void) -> Listener {
    // The implementation here is not the subject of this article.
}

Does this thing work? Yes. You can use the API exactly as it was outlined at the very beginning of this post:

let eventEmitter = EventEmitterTest()

eventEmitter.on(.string) { s in
    print("string:", s)
}

eventEmitter.on(.int) { i in
    print("int:", i)
}

eventEmitter.on(.complex) { (s, i) in
    print("complex: string:", s, "int:", i)
}

All the type inference works. It's type-safe. It's user-friendly. And it is all thanks to the possibility of associating a type with a .something enum-like member.
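Since the post intentionally leaves the implementation of on(_:handler:) out, here is one minimal sketch of how such an emitter could store and dispatch type-erased handlers. This is purely an illustration of the technique, not the Event framework's actual code: it assumes the EventProtocol and TestEventGroup definitions above, ignores the Listener return value used for unsubscribing, and uses current Swift (4+) syntax rather than the Swift 2.2 style of the original snippets:

class SimpleEmitter {
    // Handlers are type-erased to Any, keyed first by the event's concrete
    // type and then by the event value itself.
    private var handlers: [ObjectIdentifier: [AnyHashable: [Any]]] = [:]

    func on<E: EventProtocol>(_ grouped: TestEventGroup<E>,
                              handler: @escaping (E.Payload) -> Void) {
        let typeKey = ObjectIdentifier(E.self)
        handlers[typeKey, default: [:]][AnyHashable(grouped.event), default: []].append(handler)
    }

    func emit<E: EventProtocol>(_ grouped: TestEventGroup<E>, payload: E.Payload) {
        let typeKey = ObjectIdentifier(E.self)
        let stored = handlers[typeKey]?[AnyHashable(grouped.event)] ?? []
        // Cast each handler back to its payload-specific closure type; the
        // generic parameter E guarantees the cast matches what on(_:handler:) stored.
        for case let handler as (E.Payload) -> Void in stored {
            handler(payload)
        }
    }
}

With this sketch, emit(.string, payload: "hello") invokes every closure registered through on(.string) { ... }, while a mismatched payload type is still rejected by the compiler at the call site.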
Conclusion

It's a pity that this functionality is not available out of the box with built-in enums. All the experiments to make this happen cost me several hours. Maybe in one of the upcoming versions of Swift (3? 4.0?), Apple will let us get the type of the associated values of an enum, or something similar. But... okay. Those are dreams and out of the scope of this post.

For now, we have what we have, and I'm really glad that we are able to have an associated type with an enum-like entity, even if it's not straightforward. The examples were taken from the Event project. The complete code can be found here, and it was tested with Swift 2.2 (Xcode 7.3), which is the latest at the time of writing. Thanks for reading. Use user-friendly frameworks only, and enjoy your day!

About the Author

Daniel Leping is the CEO of Crossroad Labs. He has been working with Swift since the early beta releases and continues to do so at the Swift Express project. His main interests are reactive and functional programming with Swift, Swift-based web technologies, and bringing the best of modern techniques to the Swift world. He can be found on GitHub.