Cooking cupcakes towers

Packt
03 Jan 2017
6 min read
In this article by Francesco Sapio, author of the book Getting Started with Unity 2D Game Development - Second Edition, we will see how to create our towers. This is not an easy task, but by the end we will have acquired a lot of scripting skills. (For more resources related to this topic, see here.)

What a cupcake tower does

First of all, it's useful to write down what we want to achieve and define exactly what a cupcake tower is supposed to do. The best way is to make a list, so that we have a clear idea of what we are trying to achieve:

- A cupcake tower is able to detect pandas within a certain range.
- A cupcake tower shoots a different kind of projectile, according to its type, at the pandas within that range. Furthermore, among the pandas in range, it uses a policy to decide which one to shoot.
- There is a reload time before the cupcake tower is able to shoot again.
- The cupcake tower can be upgraded (into a bigger cupcake!), increasing its stats and therefore changing its appearance.

Scripting the cupcake tower

There are a lot of things to implement. Let's start by creating a new script and naming it CupcakeTowerScript. As we already mentioned for the Projectile script, in this article we implement the main logic, but of course there is always room for improvement.

Shooting at pandas

Even if we don't have enemies yet, we can already start to program the behavior of the cupcake towers to shoot at them. In this section we will also learn a bit about using physics to detect objects within a range.

Let's start by defining four variables. The first three are public, so we can set them in the Inspector; the last one is private, since we only need it to check how much time has elapsed. In particular, the first three variables store the parameters of our tower: the projectile prefab, its range, and its reload time. We can write the following:

    public float rangeRadius; //Maximum distance that the Cupcake Tower can shoot
    public float reloadTime; //Time before the Cupcake Tower is able to shoot again
    public GameObject projectilePrefab; //Projectile type that is fired from the Cupcake Tower
    private float elapsedTime; //Time elapsed since the last time the Cupcake Tower shot

Now, in the Update() function, we need to check whether enough time has elapsed for the tower to shoot. This can easily be done with an if statement. In any case, at the end, the elapsed time should be increased:

    void Update () {
      if (elapsedTime >= reloadTime) {
        //Rest of the code
      }
      elapsedTime += Time.deltaTime;
    }

Within the if statement, we need to reset the elapsed time, so that the tower is able to shoot again next time. Then, we need to check whether there are any game objects within its range:

    if (elapsedTime >= reloadTime) {
      //Reset elapsed time
      elapsedTime = 0;
      //Find all the gameObjects with a collider within the range of the Cupcake Tower
      Collider2D[] hitColliders = Physics2D.OverlapCircleAll(transform.position, rangeRadius);
      //Check if there is at least one gameObject found
      if (hitColliders.Length != 0) {
        //Rest of the code
      }
    }

If there are enemies within range, we need to decide a policy for which enemy the tower should target. There are different ways to do this, and different strategies the tower itself could choose. Here, we are going to implement one where the enemy nearest to the tower is the one targeted. To implement this policy, we need to loop over all the game objects that we have found in range, check whether they actually are enemies, and, using distances, pick the nearest one. To achieve this, write the following code inside the previous if statement:

    if (hitColliders.Length != 0) {
      //Loop over all the gameObjects to identify the closest to the Cupcake Tower
      float min = int.MaxValue;
      int index = -1;
      for (int i = 0; i < hitColliders.Length; i++) {
        if (hitColliders[i].tag == "Enemy") {
          float distance = Vector2.Distance(hitColliders[i].transform.position, transform.position);
          if (distance < min) {
            index = i;
            min = distance;
          }
        }
      }
      if (index == -1)
        return;
      //Rest of the code
    }

Once we have the target, we need to get the direction that the tower will use to throw the projectile. So, let's write this:

    //Get the direction of the target
    Transform target = hitColliders[index].transform;
    Vector2 direction = (target.position - transform.position).normalized;

Finally, we need to instantiate a new projectile and assign it the direction of the enemy, as follows:

    //Create the Projectile
    GameObject projectile = GameObject.Instantiate(projectilePrefab, transform.position, Quaternion.identity) as GameObject;
    projectile.GetComponent<ProjectileScript>().direction = direction;

Instantiating game objects is usually slow and should be avoided; however, for learning purposes we can live with that. And that is it for shooting at the enemies.
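The nearest-enemy rule is only one of the strategies mentioned above. As an added sketch (not part of the original article, and assuming a hypothetical EnemyScript component that exposes a public health field on each panda), the same loop structure can be wrapped in a helper method inside CupcakeTowerScript that targets the weakest panda in range instead:

    // Hypothetical alternative targeting policy (not from the original article).
    // Assumes each panda has an EnemyScript component with a public float "health" field.
    private int FindWeakestEnemy(Collider2D[] hitColliders) {
        float lowestHealth = float.MaxValue;
        int index = -1;
        for (int i = 0; i < hitColliders.Length; i++) {
            if (hitColliders[i].tag != "Enemy")
                continue;
            EnemyScript enemy = hitColliders[i].GetComponent<EnemyScript>();
            if (enemy != null && enemy.health < lowestHealth) {
                lowestHealth = enemy.health;
                index = i;
            }
        }
        return index; // -1 when no enemy is in range
    }

Because the rest of the shooting code only depends on the chosen index, swapping the targeting policy in this way leaves the direction and instantiation logic untouched.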
Upgrading the cupcake tower, making it even tastier

In order to create a function to upgrade the tower, we first need to define a variable to store the current level of the tower:

    public int upgradeLevel; //Level of the Cupcake Tower

Then, we need an array with all the sprites for the different upgrades, like the following:

    public Sprite[] upgradeSprites; //Different sprites for the different levels of the Cupcake Tower

Finally, we can create our Upgrade function. We need to upgrade the graphics and increase the stats. Feel free to tweak these values as you prefer. However, don't forget to increase the level of the tower as well as assign the new sprite. At the end, you should have something like the following:

    public void Upgrade() {
      rangeRadius += 1f;
      reloadTime -= 0.5f;
      upgradeLevel++;
      GetComponent<SpriteRenderer>().sprite = upgradeSprites[upgradeLevel];
    }

Save the script; for now, we are done with it.

A pre-cooked cupcake tower through Prefabs

As we did with the Sprinkle, we need to do something similar for the cupcake tower. In the Prefabs folder in the Project panel, create a new Prefab by right-clicking and then navigating to Create | Prefab. Name it SprinklesCupcakeTower. Now, drag and drop Sprinkles_Cupcake_Tower_0 from the Graphics/towers folder (within the cupcake_tower_sheet-01 file) into the Scene view. Attach the CupcakeTowerScript to the object by navigating to Add Component | Script | CupcakeTowerScript.

We need to assign the Pink_Sprinkle_Projectile_Prefab to the Projectile Prefab variable. Then, we need to assign the different sprites for the upgrades. In particular, we can use Sprinkles_Cupcake_Tower_* (replacing the * with the level of the cupcake tower) from the same sheet as before. Don't worry too much about the other parameters of the tower, such as the range radius or the reload time, since we will see how to balance the game later on.

The last step is to drag this game object onto the prefab. As a result, our cupcake tower is ready.

Summary

In this article we covered creating a cupcake tower and scripting it.
Resources for Article: Further resources on this subject: Animating a Game Character [article] What's Your Input? [article] Components in Unity [article]

Our First Program!

Packt
03 Jan 2017
5 min read
In this article by Syed Omar Faruk Towaha, author of Learning C for Arduino, we will learn how to connect our Arduino to our system and how to write a program in the Arduino IDE. (For more resources related to this topic, see here.)

First, connect the A-B USB cable to your Arduino and then connect the other end to your PC. Your PC will make a sound, which confirms that the Arduino has connected to the PC. Now open the Arduino IDE. From the menu, go to Tools | Board: "Arduino/Genuino Uno". You can select any board you have bought from the list. You also have to select the port on which the Arduino is connected; there are several ways to find out which port your Arduino is connected to.

Hello Arduino!

Let's write our first program in the Arduino IDE. Go to File and click on New. A text editor will open with a few lines of code. Delete those lines first, and then type the following code:

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      Serial.print("Hello Arduino!\n");
    }

From the menu, go to Sketch and click Upload. It is good practice to verify or compile the code before uploading it to the Arduino board. To verify or compile the code, go to Sketch and click Verify/Compile. You will see the message Done compiling. at the bottom of the IDE if the code is error free.

After the successful upload, you need to open the serial monitor of the Arduino IDE. To open the serial monitor, go to Tools and click on Serial Monitor.

setup() function

The setup() function helps to initialize variables and set pin modes, and we can also use libraries here. This function is called first when we compile or upload the whole code. The setup() function runs once when the sketch is uploaded to the board, and it runs again every time we press the reset button on the Arduino.

In our code we used Serial.begin(9600);, which sets the data rate per second for the serial communication. We already know that serial communication is the process by which we send data one bit at a time over a communication channel. For the Arduino to communicate with the computer, we used the data rate 9600, which is a standard data rate. This is also known as the baud rate. We can set the baud rate to any of the following, depending on the connection speed and type. I recommend using 9600 for general data communication purposes; if you use a higher baud rate, the characters we print might be broken:

    300, 1200, 2400, 4800, 9600, 19200, 38400, 57600, 74880, 115200, 230400, 250000

If we do not declare the baud rate inside the setup() function, there will be no output on the serial monitor. A baud rate of 9600 means 960 characters per second. This is because, in serial communication, you need to transmit one start bit, eight data bits, and one stop bit, for a total of 10 bits, at a speed of 9600 bits per second.
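As a small added sketch (not from the original article), you can keep the baud rate in a named constant so that the value passed to Serial.begin() and the value selected in the serial monitor's drop-down obviously match, and print the resulting character rate using the 10-bits-per-character rule described above:

    // Hypothetical example: keep the baud rate in one place.
    // The serial monitor must be set to the same value (9600 here),
    // otherwise the printed characters will appear garbled.
    const long BAUD_RATE = 9600;

    void setup() {
      Serial.begin(BAUD_RATE);
      Serial.print("Serial started at ");
      Serial.print(BAUD_RATE);
      Serial.print(" baud, roughly ");
      Serial.print(BAUD_RATE / 10);  // 1 start bit + 8 data bits + 1 stop bit = 10 bits per character
      Serial.println(" characters per second");
    }

    void loop() {
      // Nothing else to do in this example.
    }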
loop() function

The loop() function lets the code run again and again until we disconnect the Arduino. We have written a print statement in the loop() function, which will execute indefinitely. To print something on the serial monitor, we need to write the following line:

    Serial.print("Anything We Want To Print");

Between the quotation marks we can write anything we want to print. In the last code we wrote Serial.print("Hello Arduino!\n");. That's why, on the serial monitor, we saw Hello Arduino! being printed over and over. We used \n after Hello Arduino!. This is called an escape sequence. For now, just remember that we need to put it at the end of each line inside the print statement to break the line and print the next output on a new line. We can use Serial.println("Hello Arduino!"); instead of Serial.print("Hello Arduino!\n");. Both will give the same result.

Now let's see what happens if we put the print statement inside the setup() function instead: Hello Arduino! is printed only once.

Since C is a case-sensitive language, we need to be careful about the casing. We cannot use serial.print("Hello Arduino!"); instead of Serial.print("Hello Arduino!\n");.

Summary

In this article we learned how to connect the Arduino to our system and how to upload code to the board, and we saw how the Arduino code works. If you are new to Arduino, this article is a good place to start.

Resources for Article: Further resources on this subject: Zabbix Configuration [article] A Configuration Guide [article] Thorium and Salt API [article]

Point-to-Point Networks

Packt
03 Jan 2017
6 min read
In this article by Jan Just Keijser, author of the book OpenVPN Cookbook - Second Edition, we will cover the following recipes: OpenVPN secret keys, and Using IPv6. (For more resources related to this topic, see here.)

OpenVPN secret keys

This recipe uses OpenVPN secret keys to secure the VPN tunnel. This shared secret key is used to encrypt the traffic between the client and the server.

Getting ready

Install OpenVPN 2.3.9 or higher on two computers. Make sure that the computers are connected over a network. For this recipe, the server computer was running CentOS 6 Linux and OpenVPN 2.3.9, and the client was running Windows 7 Pro 64-bit and OpenVPN 2.3.10.

How to do it...

First, we generate a secret key on the server (listener):

    [root@server]# openvpn --genkey --secret secret.key

We transfer this key to the client side over a secure channel (for example, using scp). Next, we launch the server (listening)-side OpenVPN process:

    [root@server]# openvpn --ifconfig 10.200.0.1 10.200.0.2 --dev tun --secret secret.key

Then, we launch the client-side OpenVPN process:

    [WinClient] C:\> "\Program Files\OpenVPN\bin\openvpn.exe" --ifconfig 10.200.0.2 10.200.0.1 --dev tun --secret secret.key --remote openvpnserver.example.com

The connection is then established.

How it works...

The server listens for incoming connections on UDP port 1194. The client connects to the server on this port. After the initial handshake, the server configures the first available TUN device with the IP address 10.200.0.1, and it expects the remote end (peer address) to be 10.200.0.2. The client does the opposite.

There's more...

By default, OpenVPN uses two symmetric keys when setting up a point-to-point connection:

- A cipher key to encrypt the contents of the packets being exchanged.
- An HMAC key to sign packets. When packets arrive that are not signed with the appropriate HMAC key, they are dropped immediately. This is the first line of defense against a "denial of service" attack.

The same set of keys is used on both ends, and both keys are derived from the file specified using the --secret parameter. An OpenVPN secret key file is formatted as follows:

    #
    # 2048 bit OpenVPN static key
    #
    -----BEGIN OpenVPN Static key V1-----
    <16 lines of random bytes>
    -----END OpenVPN Static key V1-----

From the random bytes, the OpenVPN cipher and HMAC keys are derived. Note that these keys are the same for each session!
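Before moving on to IPv6, it is worth noting (as an added aside, not part of the original recipe) that the command-line options used above map one-to-one onto configuration-file directives. The same point-to-point tunnel could therefore be described in a file with a hypothetical name such as p2p-server.conf:

    # p2p-server.conf - server (listening) side of the static-key tunnel
    dev tun
    ifconfig 10.200.0.1 10.200.0.2
    secret secret.key

and started with openvpn --config p2p-server.conf. The IPv6 recipe that follows uses exactly this config-file style.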
Using IPv6

In this recipe, we extend the complete site-to-site network to include support for IPv6.

Getting ready

Install OpenVPN 2.3.9 or higher on two computers. Make sure that the computers are connected over a network. For this recipe, the server computer was running CentOS 6 Linux and OpenVPN 2.3.9, and the client was running Fedora 22 Linux and OpenVPN 2.3.10. We'll use the secret.key file from the OpenVPN secret keys recipe here.

How to do it...

Create the server configuration file:

    dev tun
    proto udp
    local openvpnserver.example.com
    lport 1194
    remote openvpnclient.example.com
    rport 1194
    secret secret.key 0
    ifconfig 10.200.0.1 10.200.0.2
    route 192.168.4.0 255.255.255.0
    tun-ipv6
    ifconfig-ipv6 2001:db8:100::1 2001:db8:100::2
    user nobody
    group nobody  # use "group nogroup" on some distros
    persist-tun
    persist-key
    keepalive 10 60
    ping-timer-rem
    verb 3
    daemon
    log-append /tmp/openvpn.log

Save it as example1-9-server.conf. On the client side, we create the configuration file:

    dev tun
    proto udp
    local openvpnclient.example.com
    lport 1194
    remote openvpnserver.example.com
    rport 1194
    secret secret.key 1
    ifconfig 10.200.0.2 10.200.0.1
    route 172.31.32.0 255.255.255.0
    tun-ipv6
    ifconfig-ipv6 2001:db8:100::2 2001:db8:100::1
    user nobody
    group nobody  # use "group nogroup" on some distros
    persist-tun
    persist-key
    keepalive 10 60
    ping-timer-rem
    verb 3

Save it as example1-9-client.conf. We start the tunnel on both ends:

    [root@server]# openvpn --config example1-9-server.conf

And:

    [root@client]# openvpn --config example1-9-client.conf

Now our site-to-site tunnel is established. After the connection comes up, the machines on the LANs behind both endpoints can be reached over the OpenVPN tunnel. Note that the client OpenVPN session is running in the foreground. Next, we ping the IPv6 address of the server endpoint to verify that IPv6 traffic over the tunnel is working:

    [client]$ ping6 -c 4 2001:db8:100::1
    PING 2001:db8:100::1(2001:db8:100::1) 56 data bytes
    64 bytes from 2001:db8:100::1: icmp_seq=1 ttl=64 time=7.43 ms
    64 bytes from 2001:db8:100::1: icmp_seq=2 ttl=64 time=7.54 ms
    64 bytes from 2001:db8:100::1: icmp_seq=3 ttl=64 time=7.77 ms
    64 bytes from 2001:db8:100::1: icmp_seq=4 ttl=64 time=7.42 ms

    --- 2001:db8:100::1 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3005ms
    rtt min/avg/max/mdev = 7.425/7.546/7.778/0.177 ms

Finally, we abort the client-side session by pressing Ctrl + C.

How it works...

The following two directives enable IPv6 support next to the default IPv4 support:

    tun-ipv6
    ifconfig-ipv6 2001:db8:100::2 2001:db8:100::1

Also, in the client configuration, the options daemon and log-append are not present, hence all OpenVPN output is sent to the screen and the process continues running in the foreground.

There's more...

Log file errors

If we take a closer look at the client-side connection output, we see a few error messages after pressing Ctrl + C, most notably the following:

    RTNETLINK answers: operation not permitted

This is a side effect of using the user nobody option to protect an OpenVPN setup, and it often confuses new users. What happens is this: OpenVPN starts as the root user, opens the appropriate tun device, and sets the right IPv4 and IPv6 addresses on this tun interface. For extra security, OpenVPN then switches to the nobody user, dropping all privileges associated with the root user. When OpenVPN terminates (in our case, by pressing Ctrl + C), it closes access to the tun device and tries to remove the IPv4 and IPv6 addresses assigned to that device. At this point, the error messages appear, as the user nobody is not allowed to perform these operations. Upon termination of the OpenVPN process, the Linux kernel closes the tun device and all configuration settings are removed. In this case, these error messages are harmless, but in general, one should pay close attention to the warning and error messages printed by OpenVPN.

IPv6-only tunnel

With OpenVPN 2.3, it is required to always enable IPv4 support. From OpenVPN 2.4 onward, it is possible to set up an "IPv6-only" connection.

Summary

In this article, we extended the complete site-to-site network to include support for IPv6 with OpenVPN secret keys.

Resources for Article: Further resources on this subject: Introduction to OpenVPN [Article] A quick start – OpenCV fundamentals [Article] Untangle VPN Services [Article]

React Native Tools and Resources

Packt
03 Jan 2017
14 min read
In this article written by Eric Masiello and Jacob Friedmann, authors of the book Mastering React Native, we will cover:

- Tools that improve upon the React Native development experience
- Ways to build React Native apps for platforms other than iOS and Android
- Great online resources for React Native development

(For more resources related to this topic, see here.)

Evaluating React Native editors, plugins, and IDEs

I'm hard pressed to think of another topic that developers are more passionate about than their preferred code editor. Of the many options, two popular editors today are GitHub's Atom and Microsoft's Visual Studio Code (not to be confused with Visual Studio 2015). Both are cross-platform editors for Windows, macOS, and Linux that are easily extended with additional features. In this section, I'll detail my personal experience with these tools and where I have found they complement the React Native development experience.

Atom and Nuclide

Facebook has created a package for Atom known as Nuclide that provides a first-class development environment for React Native. It features a built-in debugger similar to Chrome's DevTools, a React Native Inspector (think the Elements tab in Chrome DevTools), and support for the static type checker Flow. Download Atom from https://atom.io/ and Nuclide from https://nuclide.io/.

To install the Nuclide package, click on the Atom menu, then on Preferences..., and then select Packages. Search for Nuclide and click on Install. Once installed, you can start and stop the React Native Packager directly from Atom (though you need to launch the simulator/emulator separately) and set breakpoints in Atom itself rather than using Chrome's DevTools.

If you plan to use Flow, Nuclide will identify errors and display them inline. In the following example, I've annotated the function timesTen such that it expects a number as a parameter and should return a number. However, you can see that there are some errors in the usage:

    /* @flow */
    function timesTen(x: number): number {
      var result = x * 10;
      return 'I am not a number';
    }

    timesTen("Hello, world!");

Thankfully, the Flow integration will call out these errors in Atom for you. The Flow integration in Nuclide exposes two other useful features: you'll see annotated auto-completion as you type, and if you hold the Command key and click on a variable or function name, Nuclide will jump straight to the source definition, even if it's defined in a separate file.
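For contrast (an added example, not part of the original article), here is what timesTen looks like once both Flow errors are fixed, in the return value and at the call site:

    /* @flow */
    function timesTen(x: number): number {
      // Return the numeric result, satisfying the declared return type.
      return x * 10;
    }

    timesTen(10); // called with a number, so Flow no longer reports an error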
Visual Studio Code

Visual Studio Code is a first-class editor for JavaScript authors. Out of the box, it's packaged with a built-in debugger that can be used to debug Node applications. Additionally, VS Code comes with an integrated terminal and a Git tool that nicely shows visual diffs. Download Visual Studio Code from https://code.visualstudio.com/.

The React Native Tools extension for VS Code adds some useful capabilities to the editor. For starters, you'll be able to execute the React Native: Run-iOS and React Native: Run Android commands directly from VS Code without needing to reach for the terminal. And, while a bit more involved than Atom to configure, you can use VS Code as a React Native debugger. The React Native Tools extension also provides IntelliSense for much of the React Native API.

When reading through the VS Code documentation, I found it (unsurprisingly) more catered toward Windows users. So, if Windows is your thing, you may feel more at home with VS Code. As a macOS user, I slightly prefer Atom/Nuclide over VS Code. VS Code comes with more useful features out of the box, but that can easily be addressed by installing a few Atom packages. Plus, I found the Flow support with Nuclide really useful. But don't let me dissuade you from VS Code. Both are solid editors with great React Native support, and they're both free, so there's no harm in trying both.

Before totally switching gears, there is one more editor worth mentioning. Deco is an Integrated Development Environment (IDE) built specifically for React Native development. Standing up a new React Native project is super quick, since Deco keeps a local copy of everything you'd get when running react-native init. Deco also makes creating new stateful and stateless components super easy. Download Deco from https://www.decosoftware.com/.

Once you create a new component using Deco, it gives you a nicely prefilled template, including a place to add propTypes and defaultProps (something I often forget to do). From there, you can drag and drop components from the sidebar directly into your code. Deco will auto-populate many of the props for you, as well as add the necessary import statements. Take a look at the following snippet:

    <Image
      style={{
        width: 300,
        height: 200,
      }}
      resizeMode={"contain"}
      source={{uri:'https://unsplash.it/600/400/?random'}}/>

The other nice feature Deco adds is the ability to easily launch your app from the toolbar in any installed iOS simulator or Android AVD. You don't even need to manually open the AVD first; Deco will do it all for you.

Currently, creating a new project with Deco starts you off with an outdated version of React Native (version 0.27.2 as of this writing). If you're not concerned with using the latest version, Deco is a great way to get a React Native app up quickly. However, if you require more advanced tooling, I suggest you look at Atom with Nuclide or Visual Studio Code with the React Native Tools extension.

Taking React Native beyond iOS and Android

The development experience is one of the most highly touted features by React Native proponents. But, as we well know by now, React Native is more than just a great development experience. It's also about building cross-platform applications with a common language and, oftentimes, reusable code and components. Out of the box, the Facebook team has provided tremendous support for iOS and Android. And, thanks to the community, React Native has expanded to other promising platforms. In this section, I'll take you through a few of these React Native projects. I won't go into great technical depth, but I'll provide a high-level overview of each and how to get it running.

Introducing React Native Web

React Native Web is an interesting one. It treats many of the React Native components you've learned about, such as View, Text, and TextInput, as higher-level abstractions that map to HTML elements, such as div, span, and input, thus allowing you to build a web app that runs in a browser from your React Native code. Now, if you're like me, your initial reaction might be: but why? We already have React for the web.
It's called... React! However, where React Native Web shines over React is in its ability to share components between your mobile app and the web, because you're still working with the same basic React Native APIs. Learn more about React Native Web at https://github.com/necolas/react-native-web.

Configuring React Native Web

React Native Web can be installed into your existing React Native project just like any other npm dependency:

    npm install --save react react-native-web

Depending on the versions of React Native and React Native Web you've installed, you may encounter conflicting peer dependencies of React. This may require manually adjusting which version of React Native or React Native Web is installed. Sometimes, just deleting the node_modules folder and rerunning npm install does the trick. From there, you'll need some additional tools to build the web bundle. In this example, we'll use webpack and some related tooling:

    npm install webpack babel-loader babel-preset-react babel-preset-es2015 babel-preset-stage-1 webpack-validator webpack-merge --save
    npm install webpack-dev-server --save-dev

Next, create a webpack.config.js in the root of the project:

    const webpack = require('webpack');
    const validator = require('webpack-validator');
    const merge = require('webpack-merge');

    const target = process.env.npm_lifecycle_event;
    let config = {};

    const commonConfig = {
      entry: {
        main: './index.web.js'
      },
      output: {
        filename: 'app.js'
      },
      resolve: {
        alias: {
          'react-native': 'react-native-web'
        }
      },
      module: {
        loaders: [
          {
            test: /\.js$/,
            exclude: /node_modules/,
            loader: 'babel',
            query: {
              presets: ['react', 'es2015', 'stage-1']
            }
          }
        ]
      }
    };

    switch(target) {
      case 'web:prod':
        config = merge(commonConfig, {
          devtool: 'source-map',
          plugins: [
            new webpack.DefinePlugin({
              'process.env.NODE_ENV': JSON.stringify('production')
            })
          ]
        });
        break;
      default:
        config = merge(commonConfig, {
          devtool: 'eval-source-map'
        });
        break;
    }

    module.exports = validator(config);

Add the following two entries to the scripts section of package.json:

    "web:dev": "webpack-dev-server --inline --hot",
    "web:prod": "webpack -p"

Next, create an index.html file in the root of the project:

    <!DOCTYPE html>
    <html>
    <head>
      <title>RNNYT</title>
      <meta charset="utf-8" />
      <meta content="initial-scale=1,width=device-width" name="viewport" />
    </head>
    <body>
      <div id="app"></div>
      <script type="text/javascript" src="/app.js"></script>
    </body>
    </html>

And, finally, add an index.web.js file to the root of the project:

    import React, { Component } from 'react';
    import { View, Text, StyleSheet, AppRegistry } from 'react-native';

    class App extends Component {
      render() {
        return (
          <View style={styles.container}>
            <Text style={styles.text}>Hello World!</Text>
          </View>
        );
      }
    }

    const styles = StyleSheet.create({
      container: {
        flex: 1,
        backgroundColor: '#efefef',
        alignItems: 'center',
        justifyContent: 'center'
      },
      text: {
        fontSize: 18
      }
    });

    AppRegistry.registerComponent('RNNYT', () => App);
    AppRegistry.runApplication('RNNYT', { rootTag: document.getElementById('app') });

To run the development build, we'll run the webpack dev server by executing the following command:

    npm run web:dev

web:prod can be substituted to create a production-ready build. While developing, you can add React Native Web specific code, much like you can with iOS and Android, by using Platform.OS === 'web' or by creating custom *.web.js components. React Native Web still feels pretty early days. Not every component and API is supported, and the HTML that's generated looks a bit rough for my tastes.
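As a small added illustration of the platform check mentioned above (a sketch, not from the book), a component can branch on Platform.OS to apply a web-only tweak while the same file still runs on iOS and Android:

    import React from 'react';
    import { Platform, Text } from 'react-native';

    // Hypothetical example component: the same code runs on iOS, Android, and web,
    // with a small web-specific difference selected at runtime.
    const PlatformBadge = () => (
      <Text>
        {Platform.OS === 'web' ? 'Running in the browser' : 'Running on a device'}
      </Text>
    );

    export default PlatformBadge;

For larger differences, the *.web.js file-extension approach keeps the platform-specific code in a separate file instead of branching inline.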
While developing with React Native Web, I think it helps to keep the right mindset. That is, think of it as "I'm building a React Native mobile app", not a website. Otherwise, you may find yourself reaching for web-specific solutions that aren't appropriate for the technology.

React Native plugin for Universal Windows Platform

Announced at the Facebook F8 conference in April 2016, the React Native plugin for Universal Windows Platform (UWP) lets you author React Native apps for Windows 10 desktop, Windows 10 Mobile, and Xbox One. Learn more about the React Native plugin for UWP at https://github.com/ReactWindows/react-native-windows.

You'll need to be running Windows 10 in order to build UWP apps. You'll also need to follow the React Native documentation for configuring your Windows environment for building React Native apps. If you're not concerned with building Android on Windows, you can skip installing Android Studio. The plugin itself also has a few additional requirements: you'll need to be running at least npm 3.x and to install Visual Studio 2015 Community (not to be confused with Visual Studio Code). Thankfully, the Community version is free to use. The UWP plugin docs also tell you to install the Windows 10 SDK Build 10586; however, I found it's easier to do that from within Visual Studio once we've created the app, so we can save that part for later.

Configuring the React Native plugin for UWP

I won't walk you through every step of the installation; the UWP plugin docs detail the process well enough. Once you've satisfied the requirements, start by creating a new React Native project as normal:

    react-native init RNWindows
    cd RNWindows

Next, install and initialize the UWP plugin:

    npm install --save-dev rnpm-plugin-windows
    react-native windows

Running react-native windows will actually create a windows directory inside your project containing a Visual Studio solution file. If this is your first time installing the plugin, I recommend opening the solution (.sln) file with Visual Studio 2015. Visual Studio will then ask you to download several dependencies, including the latest Windows 10 SDK. Once Visual Studio has installed all the dependencies, you can run the app either from within Visual Studio or by running the following command:

    react-native run-windows

React Native macOS

Much as the name implies, React Native macOS allows you to create macOS desktop applications using React Native. This project works a little differently than React Native Web and the React Native plugin for UWP. As best I can tell, since React Native macOS requires its own custom CLI for creating and packaging applications, you are not able to build a macOS app and a mobile app from the same project. Learn more about React Native macOS at https://github.com/ptmt/react-native-macos.

Configuring React Native macOS

Much like you did with the React Native CLI, begin by installing the custom CLI globally using the following command:

    npm install react-native-macos-cli -g

Then, use it to create a new React Native macOS app by running the following command:

    react-native-macos init RNDesktopApp
    cd RNDesktopApp

This will set you up with all required dependencies, along with an entry point file, index.macos.js. There is no CLI command to spin up the app, so you'll need to open the Xcode project and run it manually.
Run the following command:

    open macos/RNDesktopApp.xcodeproj

The documentation is pretty limited, but there is a nice UIExplorer app that can be downloaded and run to give you a good feel for what's available. While on some level it's unfortunate that your macOS app cannot live alongside your iOS and Android code, I cannot think of a use case that would call for such a thing. That said, I was delighted with how easy it was to get this project up and running.

Summary

I think it's fair to say that React Native is moving quickly. With a new version released roughly every two weeks, I've lost count of how many versions have passed by in the course of writing this book. I'm willing to bet React Native has probably bumped a version or two from the time you started reading this book until now. So, as much as I'd love to wrap up by saying you now know everything possible about React Native, sadly that isn't the case.

References

Let me leave you with a few valuable resources to continue your journey of learning and building apps with React Native:

- React Native Apple TV is a fork of React Native for building apps for Apple's tvOS. For more information, refer to https://github.com/douglowder/react-native-appletv. (Note that preliminary tvOS support has appeared in early versions of React Native 0.36.)
- React Native Ubuntu is another fork of React Native for developing React Native apps on Ubuntu, for desktop Ubuntu and Ubuntu Touch. For more information, refer to https://github.com/CanonicalLtd/react-native
- JS.Coach is a collection of community-favorite components and plugins for all things React, React Native, webpack, and related tools. For more information, refer to https://js.coach/react-native
- Exponent is described as Rails for React Native. It supports additional system functionality and UI components beyond what's provided by React Native. It will also let you build your apps without needing to touch Xcode or Android Studio. For more information, refer to https://getexponent.com/
- React Native Elements is a cross-platform UI toolkit for React Native. You can think of it as Bootstrap for React Native. For more information, refer to https://github.com/react-native-community/react-native-elements
- The Use React Native site is how I keep up with React Native releases and news in the React Native space. For more information, refer to http://www.reactnative.com/
- React Native Radio is a fantastic podcast hosted by Nader Dabit and a panel of hosts who interview other developers contributing to the React Native community. For more information, refer to https://devchat.tv/react-native-radio
- React Native Newsletter is an occasional newsletter curated by a team of React Native enthusiasts. For more information, refer to http://reactnative.cc/
- And, finally, Dotan J. Nahum maintains an amazing resource titled Awesome React Native that includes articles, tutorials, videos, and well-tested components you can use in your next project. For more information, refer to https://github.com/jondot/awesome-react-native

Resources for Article: Further resources on this subject: Getting Started [article] Getting Started with React [article] Understanding React Native Fundamentals [article]

Dimensionality Reduction

Packt
03 Jan 2017
15 min read
In this article by Ashish Kumar and Avinash Paul, the authors of the book Mastering Text Mining with R, we will look at dimensionality reduction. Data volume and high dimensions pose an astounding challenge in text mining tasks. Inherent noise and the computational cost of processing huge datasets make it even more arduous. The science of dimensionality reduction lies in the art of losing only a commensurately small amount of information while still reducing the high-dimensional space to a manageable proportion. (For more resources related to this topic, see here.)

For classification and clustering techniques to be applied to text data, for different natural language processing activities, we need to reduce the dimensions and noise in the data so that each document can be represented using fewer dimensions, thus significantly reducing the noise that can hinder performance.

The curse of dimensionality

Topic modeling and document clustering are common text mining activities, but text data can be very high-dimensional, which can cause a phenomenon called the curse of dimensionality. Some literature also calls it concentration of measure:

- Distance is attributed to all the dimensions and assumes each of them to have the same effect on the distance. The higher the dimensions, the more similar things appear to each other.
- The similarity measures do not take into account the association of attributes, which may result in inaccurate distance estimation.
- The number of samples required per attribute increases exponentially with the increase in dimensions.
- A lot of dimensions might be highly correlated with each other, thus causing multi-collinearity.
- Extra dimensions cause a rapid volume increase that can result in high sparsity, which is a major issue in any method that requires statistical significance. It also causes huge variance in estimates, near duplicates, and poor predictors.

Distance concentration and computational infeasibility

Distance concentration is a phenomenon associated with high-dimensional space wherein pairwise distances or dissimilarities between points appear indistinguishable. All the vectors in high dimensions appear to be orthogonal to each other. The distances from each data point to its neighbors, farthest or nearest, become equal. This totally jeopardizes the utility of methods that use distance-based measures.

Let's say the number of samples is n and the number of dimensions is d. If d is very large, the number of samples may prove to be insufficient to accurately estimate the parameters. For datasets with d dimensions, the number of parameters in the covariance matrix will be d^2. In an ideal scenario, n should be much larger than d^2 to avoid overfitting. In general, there is an optimal number of dimensions to use for a given fixed number of samples. While it may feel like a good idea to engineer more features if we are not able to solve a problem with fewer features, the computational cost and model complexity increase with the number of dimensions. For instance, if n samples are dense enough for a one-dimensional feature space, then for a k-dimensional feature space roughly n^k samples would be required.

Dimensionality reduction

The complex and noisy characteristics of high-dimensional textual data can be handled by dimensionality reduction techniques. These techniques reduce the dimension of the textual data while still preserving its underlying statistics.
Though the dimensions are reduced, it is important to preserve the inter-document relationships. The idea is to have the minimum number of dimensions that can preserve the intrinsic dimensionality of the data.

A textual collection is mostly represented in the form of a term document matrix, wherein we have the importance of each term in a document. The dimensionality of such a collection increases with the number of unique terms. The simplest possible dimensionality reduction method would be to specify limits or boundaries on the distribution of different terms in the collection. Any term that occurs with a significantly high frequency is not going to be informative for us, and the barely present terms can undoubtedly be ignored and considered as noise. Words that generally occur with high frequency and have no particular meaning are referred to as stop words; some examples of stop words are is, was, then, and the. Words that occur just once or twice are more likely to be spelling errors or complicated words, and hence both these and stop words should not be considered when modeling the document in the Term Document Matrix (TDM). We will discuss a few dimensionality reduction techniques in brief and dive into their implementation using R.

Principal component analysis

Principal component analysis (PCA) reveals the internal structure of a dataset in a way that best explains the variance within the data. PCA identifies patterns to reduce the dimensions of the dataset without significant loss of information. The main aim of PCA is to project a high-dimensional feature space into a smaller subset to decrease computational cost. PCA helps in computing new features, called principal components; these principal components are uncorrelated linear combinations of the original features, projected in the directions of higher variability. The important point is to map the set of features into a matrix, M, and compute the eigenvalues and eigenvectors. Eigenvectors provide simpler solutions to problems that can be modeled using linear transformations along axes by stretching, compressing, or flipping. Eigenvalues provide the length and magnitude of the eigenvectors along which such transformations occur. Eigenvectors with greater eigenvalues are selected for the new feature space because they enclose more information than eigenvectors with lower eigenvalues for a data distribution. The first principal component has the greatest possible variance, that is, the largest eigenvalue; each subsequent principal component captures the maximum remaining variance while being uncorrelated with the preceding components. In other words, the nth PC is the linear combination of maximum variance that is uncorrelated with all previous PCs.

PCA comprises the following steps:

1. Compute the n-dimensional mean of the given dataset.
2. Compute the covariance matrix of the features.
3. Compute the eigenvectors and eigenvalues of the covariance matrix.
4. Rank/sort the eigenvectors by descending eigenvalue.
5. Choose x eigenvectors with the largest eigenvalues.

Eigenvector values represent the contribution of each variable to the principal component axis. Principal components are oriented in the direction of maximum variance in m-dimensional space. PCA is one of the most widely used multivariate methods for discovering meaningful, new, informative, and uncorrelated features. This methodology also reduces dimensionality by rejecting low-variance features and is useful in reducing the computational requirements for classification and regression analysis.
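To make these steps concrete before turning to R's built-in functions, here is a minimal from-scratch sketch (an added illustration, not taken from the book) that mirrors them using base R on a small random matrix:

    set.seed(42)
    data <- replicate(5, rnorm(100))        # toy matrix: 100 observations, 5 variables

    centered <- scale(data, center = TRUE, scale = FALSE)  # step 1: subtract the mean
    cov_mat  <- cov(centered)                              # step 2: covariance matrix
    eig      <- eigen(cov_mat)                             # steps 3-4: eigenvectors sorted by eigenvalue
    scores   <- centered %*% eig$vectors[, 1:2]            # step 5: keep the two largest components

    # proportion of variance explained by the retained components
    sum(eig$values[1:2]) / sum(eig$values)

The built-in prcomp() and princomp() functions discussed next perform the same work in a single call.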
Using R for PCA

R has two built-in functions for performing PCA: prcomp() and princomp(). These two functions expect the dataset to be organized with variables in columns and observations in rows, with a structure like a data frame. They also return the new data in the form of a data frame, with the principal components given in columns.

prcomp() and princomp() are similar functions used for accomplishing PCA; they have slightly different implementations. Internally, the princomp() function performs PCA using eigenvectors. The prcomp() function uses a related technique known as singular value decomposition (SVD). SVD has slightly better numerical accuracy, so prcomp() is generally the preferred function. princomp() fails in situations where the number of variables is larger than the number of observations. Each function returns a list whose class is prcomp or princomp. The information returned and the terminology are summarized in the following table:

    prcomp()    princomp()    Explanation
    sdev        sdev          Standard deviation of each column
    rotation    loadings      Principal components
    center      center        Value subtracted from each row or column to center the data
    scale       scale         Scale factors used
    x           scores        The rotated data
                n.obs         Number of observations of each variable
                call          The call to the function that created the object

Here's a list of the functions available in different R packages for performing PCA:

- PCA(): FactoMineR package
- acp(): amap package
- prcomp(): stats package
- princomp(): stats package
- dudi.pca(): ade4 package
- pcaMethods: this package from Bioconductor has various convenient methods to compute PCA

Understanding the FactoMineR package

FactoMineR is an R package that provides multiple functions for multivariate data analysis and dimensionality reduction. The functions provided in the package deal not only with quantitative data but also with categorical data. Apart from PCA, correspondence and multiple correspondence analyses can also be performed using this package:

    library(FactoMineR)
    data <- replicate(10, rnorm(1000))
    result.pca = PCA(data[,1:9], scale.unit=TRUE, graph=T)
    print(result.pca)

The analysis was performed on 1,000 individuals, described by nine variables. The results are available in the following objects:

    Name              Description
    $eig              Eigenvalues
    $var              Results for the variables
    $var$coord        coord. for the variables
    $var$cor          Correlations variables - dimensions
    $var$cos2         cos2 for the variables
    $var$contrib      Contributions of the variables
    $ind              Results for the individuals
    $ind$coord        coord. for the individuals
    $ind$cos2         cos2 for the individuals
    $ind$contrib      Contributions of the individuals
    $call             Summary statistics
    $call$centre      Mean of the variables
    $call$ecart.type  Standard error of the variables
    $call$row.w       Weights for the individuals
    $call$col.w       Weights for the variables

The eigenvalue, percentage of variance, and cumulative percentage of variance for each component are:

    comp 1   1.1573559   12.859510    12.85951
    comp 2   1.0991481   12.212757    25.07227
    comp 3   1.0553160   11.725734    36.79800
    comp 4   1.0076069   11.195632    47.99363
    comp 5   0.9841510   10.935011    58.92864
    comp 6   0.9782554   10.869505    69.79815
    comp 7   0.9466867   10.518741    80.31689
    comp 8   0.9172075   10.191194    90.50808
    comp 9   0.8542724    9.491916   100.00000
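A quick way to read this table visually (an added sketch, not from the book) is a scree plot of the percentage-of-variance column of result.pca$eig, which makes it easy to see where the explained variance levels off:

    # result.pca$eig is a matrix with one row per component and columns for
    # eigenvalue, percentage of variance, and cumulative percentage of variance
    barplot(result.pca$eig[, 2],
            names.arg = paste0("comp ", seq_len(nrow(result.pca$eig))),
            las = 2,
            ylab = "Percentage of variance explained")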
Amap package

amap is another package in the R environment that provides tools for clustering and PCA. It is an acronym for Another Multidimensional Analysis Package. One of the most widely used functions in this package is acp(), which does PCA on a data frame. This function is akin to princomp() and prcomp(), except that it has a slightly different graphical representation. For more intricate details, refer to the package's CRAN reference manual at https://cran.r-project.org/web/packages/amap/amap.pdf:

    library(amap)
    acp(data, center=TRUE, reduce=TRUE)

Additionally, weight vectors can also be provided as an argument. We can perform a robust PCA by using the acpgen function in the amap package:

    acpgen(data, h1, h2, center=TRUE, reduce=TRUE, kernel="gaussien")
    K(u, kernel="gaussien")
    W(x, h, D=NULL, kernel="gaussien")
    acprob(x, h, center=TRUE, reduce=TRUE, kernel="gaussien")

Proportion of variance

We look to construct components and to choose from them the minimum number of components that explains the variance of the data with high confidence. R has the prcomp() function in the base distribution to estimate principal components. Let's learn how to use this function to estimate the proportion of variance:

    pca_base <- prcomp(data)
    print(pca_base)

The pca_base object contains the standard deviations and rotations of the vectors. Rotations are also known as the principal components of the data. Let's find out the proportion of variance each component explains:

    pr_variance <- (pca_base$sdev^2/sum(pca_base$sdev^2))*100
    pr_variance
    [1] 11.678126 11.301480 10.846161 10.482861 10.176036  9.605907  9.498072
    [8]  9.218186  8.762572  8.430598

pr_variance signifies the proportion of variance explained by each component, in descending order of magnitude. Let's calculate the cumulative proportion of variance for the components:

    cumsum(pr_variance)
    [1]  11.67813  22.97961  33.82577  44.30863  54.48467  64.09057  73.58864
    [8]  82.80683  91.56940 100.00000

Components 1-8 explain about 82% of the variance in the data.

Singular value decomposition

Singular value decomposition (SVD) is a dimensionality reduction technique that gained a lot of popularity after the famous Netflix movie recommendation challenge. Since its inception, it has found use in many applications in statistics, mathematics, and signal processing. It is primarily a technique to factorize any matrix, real or complex: a rectangular matrix can be factorized into two orthonormal matrices and a diagonal matrix of positive real values. An m*n matrix can be considered as m points in n-dimensional space; SVD attempts to find the best k-dimensional subspace that fits the data.

SVD in R can be used to compute approximations of the singular values and singular vectors of large-scale data matrices. These approximations are made using memory-efficient algorithms, and IRLBA (the implicitly restarted Lanczos bidiagonalization algorithm) is one of them. We shall be using the irlba package here in order to implement SVD.
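For reference (an added sketch, not part of the original text), the packaged irlba() function computes a truncated SVD directly, which is a useful baseline before looking at the hand-rolled randomized implementation below:

    library(irlba)

    # Truncated SVD of the same toy matrix: keep the 5 largest singular values/vectors.
    approx_svd <- irlba(as.matrix(data), nv = 5)
    approx_svd$d        # approximate singular values
    dim(approx_svd$u)   # left singular vectors, one column per component
    dim(approx_svd$v)   # right singular vectors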
Implementation of SVD using R

The following code shows an implementation of a randomized SVD using R:

    # List of packages for the session
    packages = c("foreach", "doParallel", "irlba")

    # Install CRAN packages (if not already installed)
    inst <- packages %in% installed.packages()
    if(length(packages[!inst]) > 0) install.packages(packages[!inst])

    # Load packages into session
    lapply(packages, require, character.only=TRUE)

    # Register the parallel session
    registerDoParallel(cores=detectCores(all.tests=TRUE))

    std_svd <- function(x, k, p=25, iter=10) {
      m1 <- as.matrix(x)
      r <- nrow(m1)
      c <- ncol(m1)
      p <- min(min(r,c)-k, p)
      z <- k+p
      m2 <- matrix(rnorm(z*c), nrow=c, ncol=z)
      y <- m1 %*% m2
      q <- qr.Q(qr(y))
      b <- t(q) %*% m1
      # power iterations
      b1 <- foreach(i = 1:iter) %dopar% {
        y1 <- m1 %*% t(b)
        q1 <- qr.Q(qr(y1))
        b1 <- t(q1) %*% m1
      }
      b1 <- b1[[iter]]
      b2 <- b1 %*% t(b1)
      eigens <- eigen(b2, symmetric=T)
      result <- list()
      result$svalues <- sqrt(eigens$values)[1:k]
      u1 = eigens$vectors[1:k,1:k]
      result$u <- (q %*% eigens$vectors)[,1:k]
      result$v <- (t(b) %*% eigens$vectors %*% diag(1/eigens$values))[,1:k]
      return(result)
    }

    svd <- std_svd(x=data, k=5)

    # singular values
    svd$svalues
    [1] 35.37645 33.76244 32.93265 32.72369 31.46702

We obtain the following values after running SVD using the IRLBA algorithm:

- d: approximate singular values
- u: nu approximate left singular vectors
- v: nv approximate right singular vectors
- iter: number of IRLBA algorithm iterations
- mprod: number of matrix-vector products performed

These values can be used for obtaining the results of SVD and for understanding the overall statistics of how the algorithm performed.

Latent factors

    # svd$u, svd$v
    dim(svd$u)  # u value after running IRLBA
    [1] 1000 5
    dim(svd$v)  # v value after running IRLBA
    [1] 10 5

A modified version of the previous function can be obtained by altering the power iterations for a more robust implementation:

    foreach(i = 1:iter) %dopar% {
      y1 <- m1 %*% t(b)
      y2 <- t(y1) %*% y1
      r2 <- chol(y2, pivot = T)
      q1 <- y2 %*% solve(r2)
      b1 <- t(q1) %*% m1
    }
    b2 <- b1 %*% t(b1)

Some other SVD functions available in R packages are as follows:

    Function     Package
    svd()        svd
    irlba()      irlba
    svdImpute    bcv

ISOMAP - moving towards non-linearity

ISOMAP is a nonlinear dimension reduction method and is representative of isometric mapping methods. ISOMAP is one of the approaches for manifold learning. ISOMAP finds the map that preserves the global, nonlinear geometry of the data by preserving the geodesic inter-point distances on the manifold. Like multi-dimensional scaling, ISOMAP creates a visual presentation of the distances between a number of objects. A geodesic is the shortest curve along the manifold connecting two points, induced by a neighborhood graph. Multi-dimensional scaling uses the Euclidean distance measure; since the data has a nonlinear structure, ISOMAP uses the geodesic distance instead. ISOMAP can be viewed as an extension of metric multi-dimensional scaling.
At a very high level, ISOMAP can be described in four steps:

1. Determine the neighbors of each point.
2. Construct a neighborhood graph.
3. Compute the shortest path distance between all pairs of points.
4. Construct k-dimensional coordinate vectors by applying MDS.

Geodesic distance approximation is basically calculated in three ways:

- Neighboring points: input-space distance
- Faraway points: a sequence of short hops between neighboring points
- Method: finding shortest paths in a graph with edges connecting neighboring data points

The following R session illustrates ISOMAP on the Swiss roll dataset and on the BCI data from the vegan package:

    source("http://bioconductor.org/biocLite.R")
    biocLite("RDRToolbox")
    library('RDRToolbox')
    library(rgl)  # for open3d() and plot3d()

    swiss_Data = SwissRoll(N = 1000, Plot=TRUE)
    x = SwissRoll()
    open3d()
    plot3d(x, col=rainbow(1050)[-c(1:50)], box=FALSE, type="s", size=1)
    simData_Iso = Isomap(data=swiss_Data, dims=1:10, k=10, plotResiduals=TRUE)

    library(vegan)
    data(BCI)
    dis <- vegdist(BCI)
    tree <- spantree(dis)
    pl1 <- ordiplot(cmdscale(dis), main="cmdscale")
    lines(tree, pl1, col="red")
    z <- isomap(dis, k=3)
    rgl.isomap(z, size=4, color="red")
    pl2 <- plot(isomap(dis, epsilon=0.5), main="isomap epsilon=0.5")
    pl3 <- plot(isomap(dis, k=5), main="isomap k=5")
    pl4 <- plot(z, main="isomap k=3")

Summary

The idea of this article was to get you familiar with some of the generic dimensionality reduction methods and their implementation using the R language. We discussed a few packages that provide functions to perform these tasks, and we also covered a few custom functions that can be used for the same purpose. Kudos, you have completed the basics of text mining with R. You should be feeling confident about various data mining methods, text mining algorithms (related to natural language processing of texts) and, after reading this article, dimensionality reduction. If you feel a little low on confidence, do not be upset: turn a few pages back and try implementing those tiny code snippets on your own dataset, and figure out how they help you understand your data. Remember this - to mine something, you have to get into it by yourself. This holds true for text as well.

Resources for Article: Further resources on this subject: Data Science with R [Article] Machine Learning with R [Article] Data mining [Article]

Digging Deeper

Packt
03 Jan 2017
6 min read
In this article by Craig Clayton, author of the book iOS 10 Programming for Beginners, we went over the basics of Swift to get you warmed up. Now, we will dig deeper and learn some more programming concepts. These concepts will build on what you have already learned. In this article, we will cover the following topics:

- Ranges
- Control flow

(For more resources related to this topic, see here.)

Creating a Playground project

Please launch Xcode and click on Get started with a playground. The options screen for creating a new playground will appear. Name your new playground SwiftDiggingDeeper and make sure that your Platform is set to iOS. Now, let's delete everything inside the file and toggle on the debug panel using either the toggle button or Cmd + Shift + Y.

Ranges

These generic data types represent a sequence of numbers.

Closed range

Suppose we want the numbers ranging from 10 to 20. Rather than having to write each value, we can use ranges to represent all of these numbers in shorthand form. Imagine removing every number between 10 and 20; we then need a way to tell Swift that we still want to include all of the numbers we just deleted. This is where the range operator (...) comes into play. So, in Playgrounds, let's create a constant called range and set it equal to 10...20:

    let range = 10...20

The range that we just entered says that we want the numbers between 10 and 20, as well as both 10 and 20 themselves. This type of range is known as a closed range. We also have what is called a half closed range.

Half closed range

Let's make another constant called halfClosedRange and set it equal to 10..<20:

    let halfClosedRange = 10..<20

A half closed range is the same as a closed range except that the end value is not included. In this example, that means 10 through 19 will be included and 20 will be excluded. At this point, you will notice that your results panel just shows you CountableClosedRange(10...20) and CountableRange(10..<20). We cannot see all the numbers within the range; in order to see all the numbers, we need to use a loop.
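Before we turn to loops, here is a small added example (not from the original article) showing two things a range can tell you without iterating over it, namely whether it contains a value and how many elements it spans:

    // Uses the range and halfClosedRange constants defined above.
    print(range.contains(15))     // true
    print(range.count)            // 11, because both 10 and 20 are included
    print(halfClosedRange.count)  // 10, because 20 is excluded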
In addition, in the two examples, we used constants, but we could actually just use the ranges within the loop. Please add the following:

for index in 0...3 {
    print("range inside - \(index)")
}

Now, you will see 0 to 3 printed inside the debug panel. What if you wanted the numbers to go in reverse order? Let's input the following for-in loop:

for index in (10...20).reversed() {
    print("reversed range - \(index)")
}

We now have the numbers in descending order in our debug panel. When we add ranges into a for-in loop, we have to wrap our range inside parentheses so that Swift recognizes that the period before reversed() is not a decimal point.

The while loop

A while loop evaluates a Boolean expression at the start of the loop, and the set of statements runs until the condition becomes false. It is important to note that while loops can be executed zero or more times. Here is the basic syntax of a while loop:

while <condition> {
    // statement
}

Let's write a while loop in Playgrounds and see how it works. Please add the following:

var y = 0
while y < 50 {
    y += 5
    print("y:\(y)")
}

So, this loop starts with a variable that begins at zero. Before the while loop executes, it checks to see if y is less than 50; and, if so, it continues into the loop. By using the += operator, we increment y by 5 each time. Our while loop will continue to do this until y is no longer less than 50. Now, let's add the same while loop after the one we created and see what happens:

while y < 50 {
    y += 5
    print("y:\(y)")
}

You will notice that the second while loop never runs. This may not seem important until we look at our next type of loop.

The repeat-while loop

A repeat-while loop is pretty similar to a while loop in that it continues to execute the set of statements until a condition becomes false. The main difference is that the repeat-while loop does not evaluate its Boolean condition until the end of the loop. Here is the basic syntax of a repeat-while loop:

repeat {
    // statement
} while <condition>

Let's write a repeat-while loop in Playgrounds and see how it works. Type the following into Playgrounds:

var x = 0
repeat {
    x += 5
    print("x:\(x)")
} while x < 100
print("repeat completed x: \(x)")

So, our repeat-while loop executes first and increments x by 5, and afterwards (as opposed to checking the condition before, as with a while loop), it checks to see if x is less than 100. This means that our repeat-while loop will continue until x reaches 100. But here is where it gets interesting. Let's add another repeat-while loop after the one we just created:

repeat {
    x += 5
    print("x:\(x)")
} while x < 100
print("repeat completed again x: \(x)")

This time, the repeat-while loop incremented x to 105. This happens because the Boolean expression does not get evaluated until after x is incremented by 5. Knowing this behavior will help you pick the right loop for your situation. So far, we have looked at three loops: the for-in loop, the while loop, and the repeat-while loop. We will use the for-in loop again, but first we need to talk about collections.

Summary

This article covered ranges and control flow in Swift using Xcode 8 Playgrounds.

Resources for Article: Further resources on this subject: Tools in TypeScript [article] Design with Spring AOP [article] Thinking Functionally [article]

Building an extensible Chat Bot using JavaScript & YAML

Andrea Falzetti
03 Jan 2017
6 min read
In this post, I will share with you my experience in designing and coding an extensible chat bot using YAML and JavaScript. I will show you that it's not always necessary to use AI, ML, or NLP to make a great bot. This won't be a step-by-step guide; my goal is to give you an overview of a project like this and inspire you to build your own.

Getting Started

The concept I will offer you consists of creating a collection of YAML scripts that represent all the possible conversations the bot can handle. When a new conversation starts, the YAML code gets converted to a JavaScript object that we can easily manage in Node.js. I have used WebSockets, implemented with socket.io, to transmit the messages back and forth. First, we need to agree on a set of commands that can be used to create the YAML scripts. Let's start with some essentials:

messages: To send an array of messages from the bot to the user
input: To ask the user for some data
next: The action we want to perform next within the script
script: The next script that will follow in the conversation with the user
when, empty, not-empty: conditional statements

YAML script example

A sample script will look like the following:

Link to the gist

Then we need to implement those commands in our bot engine.

The Bot engine

Using the WebSockets, I send the bot the name of the script that I want to play; the bot engine loads it and converts it to a JavaScript object. If the client doesn't know which script to play first, it can call a default script called "hello" that will introduce the bot. In each script, the first action to run will be the one with index 0. As per the example above, the first thing the bot will do is send two messages to the user.

With the command next we jump to the next block, index = 1. Again, we send another message and, immediately after, an input request.

At this point, the client receives the input request and allows the user to type the answer. Once the value is submitted, we send the data back to the bot, which appends the information to a key-value store, where all the data received from the user lives and is accessible via a key (for example, user_name).

Using the when statement, we define conditional branches of the conversation. This is often the case when dealing with data validation. In the example, we want to make sure we get a valid name from the user, so if the value received is empty, we jump back in the script and ask for it again, continuing with the following step only when the name is valid. Finally, the bot will send a message, this time containing a variable previously received and stored in the key-value store.

In the next script, we will see how to handle multiple choices and buttons, to allow the user to make a decision.

Link to the gist

The conversation will start with a message from the bot, followed by an input request with the input type set to buttons_single_select, which the client translates to "display multiple buttons" using an array of options received with the input request.

When the user clicks on one of the options, the UI sends the user's choice back to the bot, which matches it with one of the existing branches. Having found the correct branch, the bot engine looks for the next command to run, in this case another input request, this time expecting to receive an image from the client.

Once the file has been sent, the conversation ends, just after the bot sends a last message confirming that the file was uploaded successfully.
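Before looking at the engine implementation in more detail, here is a minimal sketch of how a parsed script like the ones above might be played back. Only the command names (messages, input, next) come from the conventions described earlier; the block structure and the helper names (playScript, send, store, interpolate) are assumptions made purely for illustration.

// Plays a parsed YAML script (an array of command blocks) until it needs
// user input or runs out of blocks. `send` pushes data to the client over
// the WebSocket; `store` is the key-value store holding the user's answers.
function playScript(script, startIndex, send, store) {
  let index = startIndex;
  while (script[index]) {
    const block = script[index];
    if (block.messages) {
      // Send every message in the block, filling in {user_name}-style
      // placeholders with values previously saved in the store.
      block.messages.forEach(text =>
        send({ type: 'message', text: interpolate(text, store) })
      );
    }
    if (block.input) {
      // Ask the client for data and pause; the engine resumes from the
      // block's "next" index once the answer has been validated and stored.
      send({ type: 'input', input: block.input });
      return { waitingAt: index };
    }
    if (block.next === undefined) {
      break; // No explicit "next": the conversation ends here.
    }
    index = block.next;
  }
  return { done: true };
}

// Naive placeholder interpolation, for example "Hi {user_name}!" -> "Hi Andrea!".
function interpolate(text, store) {
  return text.replace(/\{(\w+)\}/g, (match, key) => store[key] || match);
}

A when command would be handled in the same loop: the engine would compare the stored value against each branch (empty, not-empty, or a specific option) and pick that branch's next index before continuing.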
Using YAML gives you the flexibility to build many different conversations, and it also allows you to easily implement A/B testing of your conversations.

Implementing the bot engine with JavaScript / Node.js

To build something able to play the YAML scripts above, you need to iterate over the script commands until you find an explicit end command. It's very important to keep the index of the current command in memory, so that you can move on as soon as the current task completes. When you meet a new command, you should pass it to a resolver that knows what each command does and is able to run the specific portion of code or function. Additionally, you will need a function that listens to the input received from the clients, validating it and saving it into a key-value store.

Extending the command set

This approach allows you to create a large set of commands that do different things, including queries, Ajax requests, API calls to external services, and so on. You can combine your command with a when statement so that a callback or promise can evolve into its specific branch depending on the result you got.

Conclusion

If you are wondering where the demo images come from, they are screenshots of a React view built with the help of devices.css, a CSS package that provides the flat shapes of iPhone, Android, and Windows phones in different colors using only pure CSS. I built this view to test the bot, using socket.io-client for the WebSockets and React for the UI. This is not just a proof of concept; I am working on a project where we have currently implemented this logic. I invite you to review it, think about it, and leave feedback. Thanks!

About the author

Andrea Falzetti is an enthusiastic full-stack developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UI, chat bots, and machine learning. He is currently working at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.


Notes from the field

Packt
03 Jan 2017
7 min read
In this article by Donabel Santos author of the book Tableau 10 Business Intelligence Cookbook would like to offer you perhaps a personal, and maybe a not-so-conventional way to introduce Tableau. I’d like to highlight a few key concepts and tricks that I think would be useful to you as you go along. These are certainly points I highlight on the board whenever I do training on Tableau. If you feel like we are jumping too far ahead, please go ahead and start with the following section Tableau Primer. Come back to this section when you are ready for the tips and tricks. (For more resources related to this topic, see here.) Instead of thinking of Tableau as this software tool that has a steep learning curve, it is useful to think of it as a blank slate. You will draw on it, keep on adding things, removing things until something makes sense or something insightful pops out. After you work with Tableau for a while and get more comfortable with its functionalities, it might even feel like an extension of your brain to some degree. When you get access to data, you might automatically open Tableau to try and understand what’s in that data. Undo is your best friend Do not be afraid to make mistakes, and do not be afraid to explore in Tableau. Do not come in with strict prejudice – for example thinking that you can only use a time series graph when you have a measure and a date field. The best way to learn and explore how powerful Tableau is to try anything and everything. It’s one of the best tools to experiment. If you make a mistake, or if you don’t like what you see, no sweat. Just click on this friendly undo button and you are back to your previous view. If you are more of a shortcut person, it will be Ctrl + Z on a PC or Command + Z on a Mac. It doesn’t change your original data This is another common concern that comes up in my training sessions or whenever I talk to people about Tableau. No, Tableau does not write back to your data source. All the changes you make will be stored in Tableau like creating calculated fields, changing data types, editing aliases will be stored in your Tableau workbook or data source. Drag and drop Tableau is a highly drag and drop software. Although you can use the menu or a right click instead of a drag and drop for the same tasks, dragging and dropping is often faster. It also flows with your train of thought. Look for visual cues Tableau leverages its visual culture in your design area, so when you create views in Tableau, some of the visual cues and icons can help you along the way. A number of the visual cues have been discussed in this section. However, there may be some lesser known (or less noticeable) visual cues: Italicized field names mean they are Tableau-generated fields: Dual axis charts create fused pills. Notice the area when the two pills touch – they’re straight instead of curved: When you zoom in to maps, or when you search for a place, your map gets pinned (or fixed to this place) until you unpin it: Know the difference between blue (discrete) and green (continuous) Knowing the difference between blue and green will take you far in the Tableau world. The data type icons you will find beside your field names in the side bar are colored either blue or green. When you drag fields onto shelves and cards, the pills are also colored blue and green. Simply speaking, blue means discrete and green means continuous. Discrete means individual, separate, countable and finite. 
Continuous means range, and technically, there is an infinite number of values within this range. What’s more important is how these are manifested in Tableau. A blue discrete field will produce header, and a green continuous field will produce an axis. If dropped onto the Color shelf, for example, a blue discrete field will use individual, finite colors. A green continuous field will use a range (gradient) of colors. Some confusion also arises when we see that, by default, Tableau places numeric fields under Measures and are colored green, and categorical information under Dimensions are colored blue. These won’t always be the case. We can have numeric values that are discrete – for example an Order Number. We can also see non-numerical, discrete fields under Measures. Learn a few key shortcuts Shortcuts are great, but it’s typically faster to work when you know a few of them. Here are some of my favorite shortcuts: Shortcut What it does Right click + Drag Opens the Drop Field menu, which allows you to specify exactly which variation of the field you want to use Double click Adds the field to the view I particularly like this when creating text tables. After you place your first measure in Text, you can add more measures to your text table by double clicking on the succeeding measures Ctrl + Arrow Adjusts the height/width of the rows/columns in the view Ctrl + H Presentation mode You can find the complete list of shortcuts here: http://bit.ly/tableau-shortcuts Unpackage option The .twbx file is a Tableau packaged workbook, which means it packages local files with your Tableau workbook. When you right click a .twbx file in a machine that has Tableau Desktop installed in it, you will see a new option called Unpackage. When you unpack a .twbx file, you will get the .twb file and another folder that contains all the local files that were used in the original workbook: Just keep in mind that data (at least the file-based data sources and extracts) get packaged with your .twbx files. This is an important security and data governance consideration when you are deciding how to share your workbooks with others. Table calculations are calculations on your table. How you structure or lay out your table (or view) will affect your table calculations. Table calculations are highly influenced by: Layout Filters Scope and Direction Let’s say, for example, you are calculating Percent of Total in your view. If you swap the fields in your Rows and Columns, i.e. changing the layout, your numbers will change If you filter some of the products out, your numbers will change If you decide to compute Pane Down instead of Table Across, your numbers will change If you’re looking for the common use cases for table calculations, check out the Tableau article entitled Top 10 Tableau Table Calculations which can be found here: http://bit.ly/top10tablecalcs LODs Rock Many of the tasks that required complex table calculations or data blending have been greatly simplified by LODs (Level of Detail expressions). LODs allow us to have multiple levels of detail within a single view, and this increases the possibilities in Tableau. To learn more about Level of Detail expressions, I encourage you to check out the following: Understanding Level of Detail Expressions: http://bit.ly/UnderstandingLOD Top 15 LOD Expressions: http://bit.ly/top15LOD It is possible …. Another common question that comes up is can I do <this> or is it possible to do <this>. 
The answer to many of the questions is yes, and many will include calculations and/or parameters. However, not all solutions will be quick and straightforward. Some may require multiple calculated fields, table calculations, LOD expressions, regular expressions, R scripts etc. Summary In this article we have seen the basics of Tableau as this software tool that has a steep learning curve, it is useful to think of it as a blank slate. You will draw on it, keep on adding things, removing things until something makes sense or something insightful pops out. After you work with Tableau for a while and get more comfortable with its functionalities, it might even feel like an extension of your brain to some degree. When you get access to data, you might automatically open Tableau to try and understand what’s in that data. Resources for Article: Further resources on this subject: Say Hi to Tableau [article] Getting Started with Tableau Public [article] R and its Diverse Possibilities [article]


Getting Started with Spring Boot

Packt
03 Jan 2017
17 min read
In this article by, Greg Turnquist, author of the book, Learning Spring Boot – Second Edition, we will cover the following topics: Introduction Creating a bare project using http://start.spring.io Seeing how to run our app straight inside our IDE with no stand alone containers (For more resources related to this topic, see here.) Perhaps you've heard about Spring Boot? It's only cultivated the most popular explosion in software development in years. Clocking millions of downloads per month, the community has exploded since it's debut in 2013. I hope you're ready for some fun, because we are going to take things to the next level as we use Spring Boot to build a social media platform. We'll explore its many valuable features all the way from tools designed to speed up development efforts to production-ready support as well as cloud native features. Despite some rapid fire demos you might have caught on YouTube, Spring Boot isn't just for quick demos. Built atop the de facto standard toolkit for Java, the Spring Framework, Spring Boot will help us build this social media platform with lightning speed AND stability. In this article, we'll get a quick kick off with Spring Boot using Java the programming language. Maybe that makes you chuckle? People have been dinging Java for years as being slow, bulky, and not the means for agile shops. Together, we'll see how that is not the case. At any time, if you're interested in a more visual medium, feel free to checkout my Learning Spring Boot [Video] at https://www.packtpub.com/application-development/learning-spring-boot-video. What is step #1 when we get underway with a project? We visit Stack Overflow and look for an example project to build a project! Seriously, the amount of time spent adapting another project's build file, picking dependencies, and filling in other details about our project adds up. At the Spring Initializr (http://start.spring.io), we can enter minimal details about our app, pick our favorite build system, the version of Spring Boot we wish to use, and then choose our dependencies off a menu. Click the Download button, and we have a free standing, ready-to-run application. In this article, let's take a quick test drive and build small web app. We can start by picking Gradle from the dropdown. Then, select 1.4.1.BUILD-SNAPSHOT as the version of Spring Boot we wish to use. Next, we need to pick our application's coordinates: Group - com.greglturnquist.learningspringboot Artifact - learning-spring-boot Now comes the fun part. We get to pick the ingredients for our application like picking off a delicious menu. If we start typing, for example, "Web", into the Dependencies box, we'll see several options appear. To see all the available options, click on the Switch to the full version link toward the bottom. There are lots of overrides, such as switching from JAR to WAR, or using an older version of Java. You can also pick Kotlin or Groovy as the primary language for your application. For starters, in this day and age, there is no reason to use anything older than Java 8. And JAR files are the way to go. WAR files are only needed when applying Spring Boot to an old container. To build our social media platform, we need a few ingredients as shown: Web (embedded Tomcat + Spring MVC) WebSocket JPA (Spring Data JPA) H2 (embedded SQL data store) Thymeleaf template engine Lombok (to simplify writing POJOs) The following diagram shows an overview of these ingredients: With these items selected, click on Generate Project. 
There are LOTS of other tools that leverage this site. For example, IntelliJ IDEA lets you create a new project inside the IDE, giving you the same options shown here. It invokes the web site's REST API, and imports your new project. You can also interact with the site via cURL or any other REST-based tool. Now let's unpack that ZIP file and see what we've got: a build.gradle build file a Gradle wrapper, so there's no need to install Gradle a LearningSpringBootApplication.java application class an application.properties file a LearningSpringBootApplicationTests.java test class We built an empty Spring Boot project. Now what? Before we sink our teeth into writing code, let's take a peek at the build file. It's quite terse, but carries some key bits. Starting from the top: buildscript { ext { springBootVersion = '1.4.1.BUILD-SNAPSHOT' } repositories { mavenCentral() maven { url "https://repo.spring.io/snapshot" } maven { url "https://repo.spring.io/milestone" } } dependencies { classpath("org.springframework.boot:spring-boot-gradle- plugin:${springBootVersion}") } } This contains the basis for our project: springBootVersion shows us we are using Spring Boot 1.4.1.BUILD-SNAPSHOT The Maven repositories it will pull from are listed next Finally, we see the spring-boot-gradle-plugin, a critical tool for any Spring Boot project The first piece, the version of Spring Boot, is important. That's because Spring Boot comes with a curated list of 140 third party library versions extending well beyond the Spring portfolio and into some of the most commonly used libraries in the Java ecosystem. By simply changing the version of Spring Boot, we can upgrade all these libraries to newer versions known to work together. There is an extra project, the Spring IO Platform (https://spring.io/platform), which includes an additional 134 curated versions, bringing the total to 274. The repositories aren't as critical, but it's important to add milestones and snapshots if fetching a library not released to Maven central or hosted on some vendor's local repository. Thankfully, the Spring Initializr does this for us based on the version of Spring Boot selected on the site. Finally, we have the spring-boot-gradle-plugin (and there is a corresponding spring-boot-maven-plugin for Maven users). This plugin is responsible for linking Spring Boot's curated list of versions with the libraries we select in the build file. That way, we don't have to specify the version number. Additionally, this plugin hooks into the build phase and bundle our application into a runnable über JAR, also known as a shaded or fat JAR. Java doesn't provide a standardized way to load nested JAR files into the classpath. Spring Boot provides the means to bundle up third-party JARs inside an enclosing JAR file and properly load them at runtime. Read more at http://docs.spring.io/spring-boot/docs/1.4.1.BUILD-SNAPSHOT/reference/htmlsingle/#executable-jar. With an über JAR in hand, we only need put it on a thumb drive, and carry it to another machine, to a hundred virtual machines in the cloud or your data center, or anywhere else, and it simply runs where we can find a JVM. 
Peeking a little further down in build.gradle, we can see the plugins that are enabled by default: apply plugin: 'java' apply plugin: 'eclipse' apply plugin: 'spring-boot' The java plugin indicates the various tasks expected for a Java project The eclipse plugin helps generate project metadata for Eclipse users The spring-boot plugin is where the actual spring-boot-gradle-plugin is activated An up-to-date copy of IntelliJ IDEA can ready a plain old Gradle build file fine without extra plugins. Which brings us to the final ingredient used to build our application: dependencies. Spring Boot starters No application is complete without specifying dependencies. A valuable facet of Spring Boot are its virtual packages. These are published packages that don't contain any code, but instead simply list other dependencies. The following list shows all the dependencies we selected on the Spring Initializr site: dependencies { compile('org.springframework.boot:spring-boot-starter-data-jpa') compile('org.springframework.boot:spring-boot-starter- thymeleaf') compile('org.springframework.boot:spring-boot-starter-web') compile('org.springframework.boot:spring-boot-starter- websocket') compile('org.projectlombok:lombok') runtime('com.h2database:h2') testCompile('org.springframework.boot:spring-boot-starter-test') } If you'll notice, most of these packages are Spring Boot starters: spring-boot-starter-data-jpa pulls in Spring Data JPA, Spring JDBC, Spring ORM, and Hibernate spring-boot-starter-thymeleaf pulls in Thymeleaf template engine along with Spring Web and the Spring dialect of Thymeleaf spring-boot-starter-web pulls in Spring MVC, Jackson JSON support, embedded Tomcat, and Hibernate's JSR-303 validators spring-boot-starter-websocket pulls in Spring WebSocket and Spring Messaging These starter packages allow us to quickly grab the bits we need to get up and running. Spring Boot starters have gotten so popular that lots of other third party library developers are crafting their own. In addition to starters, we have three extra libraries: Project Lombok makes it dead simple to define POJOs without getting bogged down in getters, setters, and others details. H2 is an embedded database allowing us to write tests, noodle out solutions, and get things moving before getting involved with an external database. spring-boot-starter-test pulls in Spring Boot Test, JSON Path, JUnit, AssertJ, Mockito, Hamcrest, JSON Assert, and Spring Test, all within test scope. The value of this last starter, spring-boot-starter-test, cannot be overstated. With a single line, the most powerful test utilities are at our fingertips, allowing us to write unit tests, slice tests, and full blown our-app-inside-embedded-Tomcat tests. It's why this starter is included in all projects without checking a box on the Spring Initializr site. Now to get things off the ground, we need to shift focus to the tiny bit of code written for us by the Spring Initializr. Running a Spring Boot application The fabulous http://start.spring.io website created a tiny class, LearningSpringBootApplication as shown in the following code: package com.greglturnquist.learningspringboot; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class LearningSpringBootApplication { public static void main(String[] args) { SpringApplication.run( LearningSpringBootApplication.class, args); } } This tiny class is actually a fully operational web application! 
The @SpringBootApplication annotation tells Spring Boot, when launched, to scan recursively for Spring components inside this package and register them. It also tells Spring Boot to enable autoconfiguration, a process where beans are automatically created based on classpath settings, property settings, and other factors. Finally, it indicates that this class itself can be a source for Spring bean definitions. It holds a public static void main(), a simple method to run the application. There is no need to drop this code into an application server or servlet container. We can just run it straight up, inside our IDE. The amount of time saved by this feature, over the long haul, adds up fast. SpringApplication.run() points Spring Boot at the leap off point. In this case, this very class. But it's possible to run other classes. This little class is runnable. Right now! In fact, let's give it a shot. . ____ _ __ _ _ /\ / ___'_ __ _ _(_)_ __ __ _ ( ( )___ | '_ | '_| | '_ / _` | \/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |___, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v1.4.1.BUILD-SNAPSHOT) 2016-09-18 19:52:44.214: Starting LearningSpringBootApplication on ret... 2016-09-18 19:52:44.217: No active profile set, falling back to defaul... 2016-09-18 19:52:44.513: Refreshing org.springframework.boot.context.e... 2016-09-18 19:52:45.785: Bean 'org.springframework.transaction.annotat... 2016-09-18 19:52:46.188: Tomcat initialized with port(s): 8080 (http) 2016-09-18 19:52:46.201: Starting service Tomcat 2016-09-18 19:52:46.202: Starting Servlet Engine: Apache Tomcat/8.5.5 2016-09-18 19:52:46.323: Initializing Spring embedded WebApplicationCo... 2016-09-18 19:52:46.324: Root WebApplicationContext: initialization co... 2016-09-18 19:52:46.466: Mapping servlet: 'dispatcherServlet' to [/] 2016-09-18 19:52:46.469: Mapping filter: 'characterEncodingFilter' to:... 2016-09-18 19:52:46.470: Mapping filter: 'hiddenHttpMethodFilter' to: ... 2016-09-18 19:52:46.470: Mapping filter: 'httpPutFormContentFilter' to... 2016-09-18 19:52:46.470: Mapping filter: 'requestContextFilter' to: [/*] 2016-09-18 19:52:46.794: Building JPA container EntityManagerFactory f... 2016-09-18 19:52:46.809: HHH000204: Processing PersistenceUnitInfo [ name: default ...] 2016-09-18 19:52:46.882: HHH000412: Hibernate Core {5.0.9.Final} 2016-09-18 19:52:46.883: HHH000206: hibernate.properties not found 2016-09-18 19:52:46.884: javassist 2016-09-18 19:52:46.976: HCANN000001: Hibernate Commons Annotations {5... 2016-09-18 19:52:47.169: Using dialect: org.hibernate.dialect.H2Dialect 2016-09-18 19:52:47.358: HHH000227: Running hbm2ddl schema export 2016-09-18 19:52:47.359: HHH000230: Schema export complete 2016-09-18 19:52:47.390: Initialized JPA EntityManagerFactory for pers... 2016-09-18 19:52:47.628: Looking for @ControllerAdvice: org.springfram... 2016-09-18 19:52:47.702: Mapped "{[/error]}" onto public org.springfra... 2016-09-18 19:52:47.703: Mapped "{[/error],produces=[text/html]}" onto... 2016-09-18 19:52:47.724: Mapped URL path [/webjars/**] onto handler of... 2016-09-18 19:52:47.724: Mapped URL path [/**] onto handler of type [c... 2016-09-18 19:52:47.752: Mapped URL path [/**/favicon.ico] onto handle... 2016-09-18 19:52:47.778: Cannot find template location: classpath:/tem... 2016-09-18 19:52:48.229: Registering beans for JMX exposure on startup 2016-09-18 19:52:48.278: Tomcat started on port(s): 8080 (http) 2016-09-18 19:52:48.282: Started LearningSpringBootApplication in 4.57... 
Scrolling through the output, we can see several things: The banner at the top gives us a readout of the version of Spring Boot. (BTW, you can create your own ASCII art banner by creating either banner.txt or banner.png into src/main/resources/) Embedded Tomcat is initialized on port 8080, indicating it's ready for web requests Hibernate is online with the H2 dialect enabled A few Spring MVC routes are registered, such as /error, /webjars, a favicon.ico, and a static resource handler And the wonderful Started LearningSpringBootApplication in 4.571 seconds message Spring Boot uses embedded Tomcat, so there's no need to install a container on our target machine. Non-web apps don't even require Apache Tomcat. The JAR itself is the new container that allows us to stop thinking in terms of old fashioned servlet containers. Instead, we can think in terms of apps. All these factors add up to maximum flexibility in application deployment. How does Spring Boot use embedded Tomcat among other things? As mentioned earlier, it has autoconfiguration meaning it has Spring beans that are created based on different conditions. When Spring Boot sees Apache Tomcat on the classpath, it creates an embedded Tomcat instance along with several beans to support that. When it spots Spring MVC on the classpath, it creates view resolution engines, handler mappers, and a whole host of other beans needed to support that, letting us focus on coding custom routes. With H2 on the classpath, it spins up an in-memory, embedded SQL data store. Spring Data JPA will cause Spring Boot to craft an EntityManager along with everything else needed to start speaking JPA, letting us focus on defining repositories. At this stage, we have a running web application, albeit an empty one. There are no custom routes and no means to handle data. But we can add some real fast. Let's draft a simple REST controller: package com.greglturnquist.learningspringboot; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.RestController; @RestController public class HomeController { @GetMapping public String greeting(@RequestParam(required = false, defaultValue = "") String name) { return name.equals("") ? "Hey!" : "Hey, " + name + "!"; } } Let's examine this tiny REST controller in detail: The @RestController annotation indicates that we don't want to render views, but instead write the results straight into the response body. @GetMapping is Spring's shorthand annotation for @RequestMapping(method = RequestMethod.GET, …[]). In this case, it defaults the route to "/". Our greeting() method has one argument: @RequestParam(required = false, defaultValue = "") String name. It indicates that this value can be requested via an HTTP query (?name=Greg), the query isn't required, and in case it's missing, supply an empty string. Finally, we are returning one of two messages depending on whether or not name is empty using Java's classic ternary operator. If we re-launch the LearningSpringBootApplication in our IDE, we'll see a new entry in the console. 2016-09-18 20:13:08.149: Mapped "{[],methods=[GET]}" onto public java.... We can then ping our new route in the browser at http://localhost:8080 and http://localhost:8080?name=Greg. Try it out! That's nice, but since we picked Spring Data JPA, how hard would it be to load some sample data and retrieve it from another route? (Spoiler alert: not hard at all.) 
We can start out by defining a simple Chapter entity to capture book details as shown in the following code: package com.greglturnquist.learningspringboot; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.Id; import lombok.Data; @Data @Entity public class Chapter { @Id @GeneratedValue private Long id; private String name; private Chapter() { // No one but JPA uses this. } public Chapter(String name) { this.name = name; } } This little POJO let's us the details about the chapter of a book as follows: The @Data annotation from Lombok will generate getters, setters, a toString() method, a constructor for all required fields (those marked final), an equals() method, and a hashCode() method. The @Entity annotation flags this class as suitable for storing in a JPA data store. The id field is marked with JPA's @Id and @GeneratedValue annotations, indicating this is the primary key, and that writing new rows into the corresponding table will create a PK automatically. Spring Data JPA will by default create a table named CHAPTER with two columns, ID, and NAME. The key field is name, which is populated by the publicly visible constructor. JPA requires a no-arg constructor, so we have included one, but marked it private so no one but JPA may access it. To interact with this entity and it's corresponding table in H2, we could dig in and start using the autoconfigured EntityManager supplied by Spring Boot. By why do that, when we can declare a repository-based solution? To do so, we'll create an interface defining the operations we need. Check out this simple interface: package com.greglturnquist.learningspringboot; import org.springframework.data.repository.CrudRepository; public interface ChapterRepository extends CrudRepository<Chapter, Long> { } This declarative interface creates a Spring Data repository as follows: CrudRepository extends Repository, a Spring Data Commons marker interface that signals Spring Data to create a concrete implementation while also capturing domain information. CrudRepository, also from Spring Data Commons, has some pre-defined CRUD operations (save, delete, deleteAll, findOne, findAll). It specifies the entity type (Chapter) and the type of the primary key (Long). Spring Data JPA will automatically wire up a concrete implementation of our interface. Spring Data doesn't engage in code generation. Code generation has a sordid history of being out of date at some of the worst times. Instead, Spring Data uses proxies and other mechanisms to support all these operations. Never forget - the code you don't write has no bugs. With Chapter and ChapterRepository defined, we can now pre-load the database, as shown in the following code: package com.greglturnquist.learningspringboot; import org.springframework.boot.CommandLineRunner; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration public class LoadDatabase { @Bean CommandLineRunner init(ChapterRepository repository) { return args -> { repository.save( new Chapter("Quick start with Java")); repository.save( new Chapter("Reactive Web with Spring Boot")); repository.save( new Chapter("...and more!")); }; } } This class will be automatically scanned in by Spring Boot and run in the following way: @Configuration marks this class as a source of beans. @Bean indicates that the return value of init() is a Spring Bean. In this case, a CommandLineRunner. 
Spring Boot runs all CommandLineRunner beans after the entire application is up and running. This bean definition is requesting a copy of the ChapterRepository. Using Java 8's ability to coerce the args → {} lambda function into a CommandLineRunner, we are able to write several save() operations, pre-loading our data. With this in place, all that's left is write a REST controller to serve up the data! package com.greglturnquist.learningspringboot; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; @RestController public class ChapterController { private final ChapterRepository repository; public ChapterController(ChapterRepository repository) { this.repository = repository; } @GetMapping("/chapters") public Iterable<Chapter> listing() { return repository.findAll(); } } This controller is able to serve up our data as follows: @RestController indicates this is another REST controller. Constructor injection is used to automatically load it with a copy of the ChapterRepository. With Spring, if there is a only one constructor call, there is no need to include an @Autowired annotation. @GetMapping tells Spring that this is the place to route /chapters calls. In this case, it returns the results of the findAll() call found in CrudRepository. If we re-launch our application and visit http://localhost:8080/chapters, we can see our pre-loaded data served up as a nicely formatted JSON document: It's not very elaborate, but this small collection of classes has helped us quickly define a slice of functionality. And if you'll notice, we spent zero effort configuring JSON converters, route handlers, embedded settings, or any other infrastructure. Spring Boot is designed to let us focus on functional needs, not low level plumbing. Summary So in this article we introduced the Spring Boot concept in brief and we rapidly crafted a Spring MVC application using the Spring stack on top of Apache Tomcat with little configuration from our end. Resources for Article: Further resources on this subject: Writing Custom Spring Boot Starters [article] Modernizing our Spring Boot app [article] Introduction to Spring Framework [article]


Getting Started with Aurelia

Packt
03 Jan 2017
28 min read
In this article by Manuel Guilbault, the author of the book Learning Aurelia, we will how Aurelia is such a modern framework. brainchild of Rob Eisenberg, father of Durandal, it is based on cutting edge Web standards, and is built on modern software architecture concepts and ideas, to offer a powerful toolset and an awesome developer experience. (For more resources related to this topic, see here.) This article will teach you how Aurelia works, and how you can use it to build real-world applications from A to Z. In fact, while reading the article and following the examples, that’s exactly what you will do. You will start by setting up your development environment and creating the project, then I will walk you through concepts such as routing, templating, data-binding, automated testing, internationalization, and bundling. We will discuss application design, communication between components, and integration of third parties. We will cover every topic most modern, real-world single-page applications require. In this first article, we will start by defining some terms that will be used throughout the article. We will quickly cover some core Aurelia concepts. Then we will take a look at the core Aurelia libraries and see how they interact with each other to form a complete, full-featured framework. We will see also what tools are needed to develop an Aurelia application and how to install them. Finally, we will start creating our application and explore its global structure. Terminology As this article is about a JavaScript framework, JavaScript plays a central role in it. If you are not completely up to date with the terminology, which has changed a lot in the last few years, let me clear things up. JavaScript (or JS) is a dialect, or implementation, of the ECMAScript (ES) standard. It is not the only implementation, but it definitely is the most popular. In this article, I will use the JS acronym to talk about actual JavaScript code or code files and the ES acronym when talking about an actual version of the ECMAScript standard. Like everything in computer programing, the ECMAScript standard evolves over time. At the moment of writing, the latest version is ES2016 and was published in June 2016. It was originally called ES7, but TC39, the committee drafting the specification, decided to change their approval and naming model, hence the new name. The previous version, named ES2015 (ES6) before the naming model changed, was published in June 2015 and was a big step forward as compared to the version before it. This older version, named ES5, was published in 2009 and was the most recent version for 6 years, so it is now widely supported by all modern browsers. If you have been writing JavaScript in the last five years, you should be familiar with ES5. When they decided to change the ES naming model, the TC39 committee also chose to change the specification’s approval model. This decision was made in an effort to publish new versions of the language at a quicker pace. As such, new features are being drafted and discussed by the community, and must pass through an approval process. Each year, a new version of the specification will be released, comprising the features that were approved during the year. Those upcoming features are often referred to as ESNext. This term encompasses language features that are approved or at least pretty close to approval but not yet published. It can be reasonable to expect that most or at least some of those features will be published in the next language version. 
As ES2015 and ES2016 are still recent things, they are not fully supported by most browsers. Moreover, ESNext features have typically no browser support at all. Those multiple names can be pretty confusing. To make things simpler, I will stick with the official names ES5 for the previous version, ES2016 for the current version and ESNext for the next version. Before going any further, you should make yourself familiar with the features introduced by ES2016 and with ESNext decorators, if you are not already. We will use these features throughout the article. If you don’t know where to start with ES2015 and ES2016, you can find a great overview of the new features on Babel’s website: https://babeljs.io/docs/learn-es2015/ As for ESNext decorators, Addy Osmani, a Google engineer, explained them pretty well: https://medium.com/google-developers/exploring-es7-decorators-76ecb65fb841 For further reading, you can take a look at the feature proposals (decorators, class property declarations, async functions, and so on) for future ES versions: https://github.com/tc39/proposals Core concepts Before we start getting our hands dirty, there are a couple of core concepts that need to be explained. Conventions First, Aurelia relies a lot on conventions. Most of those conventions are configurable, and can be changed if they don’t suit your needs. Each time we’ll encounter a convention throughout the article, we will see how to change it whenever possible. Components Components are a first class citizen of Aurelia. What is an Aurelia component? It is a pair made of an HTML template, called the view, and a JavaScript class, called the view-model. The view is responsible for displaying the component, while the view-model controls its data and behavior. Typically, the view sits in an .html file and the view-model in a .js file. By convention, those two files are bound through a naming rule, they must be in the same directory and have the same name (except for their extension, of course). Here’s an example of an empty component with no data, no behavior, and a static template: component.js export class MyComponent {} component.html <template> <p>My component</p> </template> A component must comply with two constraints, a view’s root HTML element must be the template element, and the view-model class must be exported from the .js file. As a rule of thumb, the only function that should be exported by a component’s JS file should be the view-model class. If multiple classes or functions are exported, Aurelia will iterate on the file’s exported functions and classes and will use the first it finds as the view-model. However, since the enumeration order of an object’s keys is not deterministic as per the ES specification, nothing guarantees that the exports will be iterated in the same order they were declared, so Aurelia may pick the wrong class as the component’s view-model. The only exception to that rule is some view resources In addition to its view-model class, a component’s JS file can export things like value converters, binding behaviors, and custom attributes basically any view resource that can’t have a view, which excludes custom elements. Components are the main building blocks of an Aurelia application. Components can use other components; they can be composed to form bigger or more complex components. Thanks to the slot mechanism, you can design a component’s template so parts of it can be replaced or customized. Architecture Aurelia is not your average monolithic framework. 
It is a set of loosely coupled libraries with well-defined abstractions. Each of its core libraries solves a specific and well-defined problem common to single-page applications. Aurelia leverages dependency injection and a plugin architecture so you can discard parts of the framework and replace them with third-party or even your own implementations. Or you can just throw away features you don’t need so your application is lighter and faster to load. The core Aurelia libraries can be divided into multiple categories. Let’s have a quick glance. Core features The following libraries are mostly independent and can be used by themselves if needed. They each provide a focused set of features and are at the core of Aurelia: aurelia-dependency-injection: A lightweight yet powerful dependency injection container. It supports multiple lifetime management strategies and child containers. aurelia-logging: A simple logger, supporting log levels and pluggable consumers. aurelia-event-aggregator: A lightweight message bus, used for decoupled communication. aurelia-router: A client-side router, supporting static, parameterized or wildcard routes, and child routers. aurelia-binding: An adaptive and pluggable data-binding library. aurelia-templating: An extensible HTML templating engine. Abstraction layers The following libraries mostly define interfaces and abstractions in order to decouple concerns and enable extensibility and pluggable behaviors. This does not mean that some of the libraries in the previous section do not expose their own abstractions besides their features. Some of them do. But the libraries described in the current section have almost no other purpose than defining abstractions: aurelia-loader: An abstraction defining an interface for loading JS modules, views, and other resources. aurelia-history: An abstraction defining an interface for history management used by routing. aurelia-pal: An abstraction for platform-specific capabilities. It is used to abstract away the platform on which the code is running, such as a browser or Node.js. Indeed, this means that some Aurelia libraries can be used on the server side. Default implementations The following libraries are the default implementations of abstractions exposed by libraries from the two previous sections: aurelia-loader-default: An implementation of the aurelia-loader abstraction for SystemJS and require-based loaders. aurelia-history-browser: An implementation of the aurelia-history abstraction based on standard browser hash change and push state mechanisms. aurelia-pal-browser: An implementation of the aurelia-pal abstraction for the browser. aurelia-logging-console: An implementation of the aurelia-logging abstraction for the browser console. Integration layers The following libraries’ purpose is to integrate some of the core libraries together. They provide interface implementations and adapters, along with default configuration or behaviors: aurelia-templating-router: An integration layer between the aurelia-router and the aurelia-templating libraries. aurelia-templating-binding: An integration layer between the aurelia-templating and the aurelia-binding libraries. aurelia-framework: An integration layer that brings together all of the core Aurelia libraries into a full-featured framework. aurelia-bootstrapper: An integration layer that brings default configuration for aurelia-framework and handles application starting. 
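To make the idea of loosely coupled libraries concrete, here is a small sketch showing two of the core libraries listed above working together: the dependency injection container and the event aggregator. The inject decorator and the EventAggregator API (publish, subscribe, and the subscription's dispose method) belong to those libraries; the OrderService and OrderNotifier classes and the order:placed channel are made-up names used only for illustration.

// order-service.js
import {inject} from 'aurelia-dependency-injection';
import {EventAggregator} from 'aurelia-event-aggregator';

// The container creates this class and hands it the shared EventAggregator,
// so the service never needs to know who is listening.
@inject(EventAggregator)
export class OrderService {
  constructor(eventAggregator) {
    this.eventAggregator = eventAggregator;
  }

  placeOrder(order) {
    // Persist the order here, then broadcast the event to any subscriber.
    this.eventAggregator.publish('order:placed', order);
  }
}

// order-notifier.js
import {inject} from 'aurelia-dependency-injection';
import {EventAggregator} from 'aurelia-event-aggregator';

// Reacts to the event without ever referencing OrderService directly.
@inject(EventAggregator)
export class OrderNotifier {
  constructor(eventAggregator) {
    // subscribe() returns a subscription object; keep it so the listener
    // can be released with dispose() when it is no longer needed.
    this.subscription = eventAggregator.subscribe('order:placed', order => {
      console.log('Order placed:', order);
    });
  }

  dispose() {
    this.subscription.dispose();
  }
}

Because neither class knows about the other, either one can be replaced or tested in isolation, which is exactly the kind of decoupling these small, focused libraries are designed to encourage.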
Additional tools and plugins If you take a look at Aurelia’s organization page on GitHub at https://github.com/aurelia, you will see many more repositories. The libraries listed in the previous sections are just the core of Aurelia—the tip of the iceberg, if I may. Many other libraries exposing additional features or integrating third-party libraries are available on GitHub, some of them developed and maintained by the Aurelia team, many others by the community. I strongly suggest that you explore the Aurelia ecosystem by yourself after reading this article, as it is rapidly growing, and the Aurelia community is doing some very exciting things. Tooling In the following section, we will go over the tools needed to develop our Aurelia application. Node.js and NPM Aurelia being a JavaScript framework, it just makes sense that its development tools are also in JavaScript. This means that the first thing you need to do when getting started with Aurelia is to install Node.js and NPM on your development environment. Node.js is a server-side runtime environment based on Google’s V8 JavaScript engine. It can be used to build complete websites or web APIs, but it is also used by a lot of front-end projects to perform development and build tasks, such as transpiling, linting, and minimizing. NPM is the de facto package manager for Node.js. It uses http://www.npmjs.com as its main repository, where all available packages are stored. It is bundled with Node.js, so if you install Node.js on your computer, NPM will also be installed. To install Node.js and NPM on your development environment, you simply need to go to https://nodejs.org/ and download the proper installer suiting your environment. If Node.js and NPM are already installed, I strongly recommend that you make sure to use at least the version 3 of NPM, as older versions may have issues collaborating with some of the other tools we’ll use. If you are not sure which version you have, you can check it by running the following command in a console: > npm –v If Node.js and NPM are already installed but you need to upgrade NPM, you can do so by running the following command: > npm install npm -g The Aurelia CLI Even though an Aurelia application can be built using any package manager, build system, or bundler you want, the preferred tool to manage an Aurelia project is the command line interface, a.k.a. the CLI. At the moment of writing, the CLI only supports NPM as its package manager and requirejs as its module loader and bundler, probably because they both are the most mature and stable. It also uses Gulp 4 behind the scene as its build system. CLI-based applications are always bundled when running, even in development environments. This means that the performance of an application during development will be very close to what it should be like in production. This also means that bundling is a recurring concern, as new external libraries must be added to some bundle in order to be available at runtime. In this article, we’ll stick with the preferred solution and use the CLI. There are however two appendices at the end of the article covering alternatives, a first for Webpack, and a second for SystemJS with JSPM. Installing the CLI The CLI being a command line tool, it should be installed globally, by opening a console and executing the following command: > npm install -g aurelia-cli You may have to run this command with administrator privileges, depending on your environment. 
If you already have it installed, make sure you have the latest version, by running the following command: > au -v You can then compare the version this command outputs with the latest version number tagged on GitHub, at https://github.com/aurelia/cli/releases/latest. If you don’t have the latest version, you can simply update it by running the following command: > npm install -g aurelia-cli If for some reason the command to update the CLI fails, simply uninstall then reinstall it: > npm uninstall aurelia-cli -g > npm install aurelia-cli -g This should reinstall the latest version. The project skeletons As an alternative to the CLI, project skeletons are available at https://github.com/aurelia/skeleton-navigation. This repository contains multiple sample projects, sitting on different technologies such as SystemJS with JSPM, Webpack, ASP .Net Core, or TypeScript. Prepping up a skeleton is easy. You simply need to download and unzip the archive from GitHub or clone the repository locally. Each directory contains a distinct skeleton. Depending on which one you chose, you’ll need to install different tools and run setup commands. Generally, the instructions in the skeleton’s README.md file are pretty clear. Our application Creating an Aurelia application using the CLI is extremely simple. You just need to open a console in the directory where you want to create your project and run the following command: > au new The CLI’s project creation process will start, and you should see something like this: The first thing the CLI will ask for is the name you want to give to your project. This name will be used both to create the directory in which the project will live and to set some values, such as the name property in the package.json file it will create. Let’s name our application learning-aurelia: Next, the CLI asks what technologies we want to use to develop our application. Here, you can select a custom transpiler such as TypeScript and a CSS preprocessor such as LESS or SASS. Transpiler: Little cousin of the compiler, it translates one programming language into another. In our case, it will be used to transform ESNext code, which may not be supported by all browsers, into ES5, which is understood by all modern browsers. The default choice is to use ESNext and plain CSS, and this is what we will choose: The following steps simply recap the choices we made and ask for confirmation to create the project, then ask if we want to install our project’s dependencies which it does by default. At this point, the CLI will create the project and run an npm install behind the scene. Once it completes, our application is ready to roll: At this point, the directory you ran au new in will contain a new directory named learning-aurelia. This sub-directory will contain the Aurelia project. We’ll explore it a bit in the following section. The CLI is likely to change and offer more options in the future, as there are plans to support additional tools and technologies. Don’t be surprised if you see different or new options when you run it. The path we followed to create our project uses Visual Studio Code as the default code editor. If you want to use another editor such as Atom, Sublime, or WebStorm, which are the other supported options at the moment of writing, you simply need to select option #3 custom transpilers, CSS pre-processors and more at the beginning of the creation process, then select the default answer for each question until asked to select your default code editor. 
The rest of the creation process should stay pretty much the same. Note that if you select a different code editor, your own experience may differ from the examples and screenshots you’ll find in this article, as Visual Studio Code is the editor that was used during writing. If you are a TypeScript developer, you may want to create a TypeScript project. I however recommend that you stick with plain ESNext, as every example and code sample in this article has been written in JS. Trying to follow with TypeScript may prove cumbersome, although you can try if you like the challenge. The Structure of a CLI-Based Project If you open the newly created project in a code editor, you should see the following file structure: node_modules: The standard NPM directory containing the project’s dependencies; src: The directory containing the application’s source code; test: The directory containing the application’s automated test suites. .babelrc: The configuration file for Babel, which is used by the CLI to transpile our application’s ESNext code into ES5 so most browsers can run it; index.html: The HTML page that loads and launches the application; karma.conf.js: The configuration file for Karma, which is used by the CLI to run unit tests; package.json: The standard Node.js project file. The directory contains other files such as .editorconfig, .eslintrc.json, and .gitignore that are of little interest to learn Aurelia, so we won’t cover them. In addition to all of this, you should see a directory named aurelia_project. This directory contains things related to the building and bundling of the application using the CLI. Let’s see what it’s made of. The aurelia.json file The first thing of importance in this directory is a file named aurelia.json. This file contains the configuration used by the CLI to test, build, and bundle the application. This file can change drastically depending on the choices you make during the project creation process. There are very few scenarios where this file needs to be modified by hand. Adding an external library to the application is such a scenario. Apart from this, this file should mostly never be updated manually. The first interesting section in this file is the platform: "platform": { "id": "web", "displayName": "Web", "output": "scripts", "index": "index.html" }, This section tells the CLI that the output directory where the bundles are written is named scripts. It also tells that the HTML index page, which will load and launch the application, is the index.html file. The next interesting part is the transpiler section: "transpiler": { "id": "babel", "displayName": "Babel", "fileExtension": ".js", "options": { "plugins": [ "transform-es2015-modules-amd" ] }, "source": "src/**/*.js" }, This section tells the CLI to transpile the application’s source code using Babel. It also defines additional plugins as some are already configured in .babelrc to be used when transpiling the source code. In this case, it adds a plugin that will output transpiled files as AMD-compliant modules, for requirejs compatibility. Tasks The aurelia_project directory contains a subdirectory named tasks. This subdirectory contains various Gulp tasks to build, run, and test the application. These tasks can be executed using the CLI. The first thing you can try is to run au without any argument: > au This will list all available commands, along with their available arguments. 
This list includes built-in commands such as new, which we used already, or generate, which we’ll see in the next section along with the Gulp tasks declared in the tasks directory. To run one of those tasks, simply execute au with the name of the task as its first argument: > au build This command will run the build task which is defined in aurelia_project/tasks/build.js. This task transpiles the application code using Babel, executes the CSS and markup preprocessors if any, and bundles the code in the scripts directory. After running it, you should see two new files in scripts: app-bundle.js and vendor-bundle.js. Those are the actual files that will be loaded by index.html when the application is launched. The former contains all application code both JS files and templates, while the later contains all external libraries used by the application including Aurelia libraries. You may have noticed a command named run in the list of available commands. This task is defined in aurelia_project/tasks/run.js, and executes the build task internally before spawning a local HTTP server to serve the application: > au run By default, the HTTP server will listen for requests on the port 9000, so you can open your favorite browser and go to http://localhost:9000/ to see the default, demo application in action. If you ever need to change the port number on which the development HTTP server runs, you just need to open aurelia_project/tasks/run.js, and locate the call to the browserSync function. The object passed to this function contains a property named port. You can change its value accordingly. The run task can accept a --watch switch: > au run --watch If this switch is present, the task will keep monitoring the source code and, when any code file changes, will rebuild the application and automatically refresh the browser. This can be pretty useful during development. Generators The CLI also offers a way to generate code, using classes defined in the aurelia_project/generators directory. At the moment of writing, there are generators to create custom attributes, custom elements, binding behaviors, value converters, and even tasks and generators, yes, there is a generator to generate generators. If you are not familiar with Aurelia at all, most of those concepts, value converters, binding behaviors, and custom attributes and elements probably mean nothing to you. Don’t worry. A generator can be executed using the built-in generate command: > au generate attribute This command will run the custom attribute generator. It will ask for the name of the attribute to generate then create it in the src/resources/attributes directory. If you take a look at this generator which is found in aurelia_project/generators/attribute.js, you’ll see that the file exports a single class named AttributeGenerator. This class uses the @inject decorator to declare various classes from the aurelia-cli library as dependencies and have instances of them injected in its constructor. It also defines an execute method, which is called by the CLI when running the generator. This method leverages the services provided by aurelia-cli to interact with the user and generate code files. The exact generator names available by default are attribute, element, binding-behavior, value-converter, task, and generator. Environments CLI-based applications support environment-specific configuration values. By default, the CLI supports three environments—development, staging, and production. 
The configuration object for each of these environments can be found in the different files dev.js, stage.js, and prod.js located in the aurelia_project/environments directory. A typical environment file looks like this: aurelia_project/environments/dev.js export default { debug: true, testing: true }; By default, the environment files are used to enable debugging logging and test-only templating features in the Aurelia framework depending on the environment we’ll see this in a next section. The environment objects can however be enhanced with whatever properties you may need. Typically, it could be used to configure different URLs for a backend, depending on the environment. Adding a new environment is simply a matter of adding a file for it in the aurelia_project/environments directory. For example, you can add a local environment by creating a local.js file in the directory. Many tasks, basically build and all other tasks using it, such as run and test expect an environment to be specified using the env argument: > au build --env prod Here, the application will be built using the prod.js environment file. If no env argument is provided, dev will be used by default. When executed, the build task just copies the proper environment file to src/environment.js before running the transpiler and bundling the output. This means that src/environment.js should never be modified by hand, as it will be automatically overwritten by the build task. The Structure of an Aurelia application The previous section described the files and folders that are specific to a CLI-based project. However, some parts of the project are pretty much the same whatever the build system and package manager are. These are the more global topics we will see in this section. The hosting page The first entry point of an Aurelia application is the HTML page loading and hosting it. By default, this page is named index.html and is located at the root of the project. The default hosting page looks like this: index.html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Aurelia</title> </head> <body aurelia-app="main"> <script src="scripts/vendor-bundle.js" data-main="aurelia-bootstrapper"></script> </body> </html> When this page loads, the script element inside the body element loads the scripts/vendor-bundle.js file, which contains requirejs itself along with definitions for all external libraries and references to app-bundle.js. When loading, requirejs checks the data-main attribute and uses its value as the entry point module. Here, aurelia-bootstrapper kicks in. The bootstrapper first looks in the DOM for elements with the aurelia-app attribute, we can find such an attribute on the body element in the default index.html file. This attribute identifies elements acting as application viewports. The bootstrapper uses the attribute’s value as the name of the application’s main module and locates the module, loads it, and renders the resulting DOM inside the element, overwriting any previous content. The application is now running. Even though the default application doesn’t illustrate this scenario, it is possible for an HTML file to host multiple Aurelia applications. It just needs to contain multiple elements with an aurelia-app attribute, each element referring to its own main module. The main module By convention, the main module referred to by the aurelia-app attribute is named main, and as such is located under src/main.js. 
This file is expected to export a configure function, which will be called by the Aurelia bootstrapping process and will be passed a configuration object used to configure and boot the framework. By default, the main configure function looks like this: src/main.js import environment from './environment'; export function configure(aurelia) { aurelia.use .standardConfiguration() .feature('resources'); if (environment.debug) { aurelia.use.developmentLogging(); } if (environment.testing) { aurelia.use.plugin('aurelia-testing'); } aurelia.start().then(() => aurelia.setRoot()); } The configure function starts by telling Aurelia to use its defaults configuration, and to load the resources feature. It also conditionally loads the development logging plugin based on the environment’s debug property, and the testing plugin based on the environment’s testing property. This means that, by default, both plugins will be loaded in development, while none will be loaded in production. Lastly, the function starts the framework then attaches the root component to the DOM. The start method returns a Promise, whose resolution triggers the call to setRoot. If you are not familiar with Promises in JavaScript, I strongly suggest that you look it up before going any further, as they are a core concept in Aurelia. The root component At the root of any Aurelia application is a single component, which contains everything within the application. By convention, this root component is named app. It is composed of two files—app.html, which contains the template to render the component, and app.js, which contains its view-model class. In the default application, the template is extremely simple: src/app.html <template> <h1>${message}</h1> </template> This template is made of a single h1 element, which will contain the value of the view-model’s message property as text, thanks to string interpolation. The app view-model looks like this: src/app.js export class App { constructor() { this.message = 'Hello World!'; } } This file simply exports a class having a message property containing the string “Hello World!”. This component will be rendered when the application starts. If you run the application and navigate to the application in your favorite browser, you’ll see a h1 element containing “Hello World!”. You may notice that there is no reference to Aurelia in this component’s code. In fact, the view-model is just plain ESNext and it can be used by Aurelia as is. Of course, we’re going to leverage many Aurelia features in many of our view-models later on, so most of our view-models will in fact have dependencies on Aurelia libraries, but the key point here is that you don’t have to use any Aurelia library in your view-models if you don’t want to, because Aurelia is designed to be as less intrusive as possible. Conventional bootstrapping It is possible to leave the aurelia-app attribute empty in the hosting page: <body aurelia-app> In such a case, the bootstrapping process is much simpler. Instead of loading a main module containing a configure function, the bootstrapper will simply use the framework’s default configuration and load the app component as the application root. This can be a simpler way to get started for a very simple application, as it negates the need for the src/main.js file you can simply delete it. However, it means that you are stuck with the default framework configuration. You cannot load features nor plugins. 
For most real-life applications, you’ll need to keep the main module, which means specifying it as the aurelia-app attribute’s value. Customizing Aurelia configuration The configure function of the main module receives a configuration object, which is used to configure the framework: src/main.js //Omitted snippet… aurelia.use .standardConfiguration() .feature('resources'); if (environment.debug) { aurelia.use.developmentLogging(); } if (environment.testing) { aurelia.use.plugin('aurelia-testing'); } //Omitted snippet… Here, the standardConfiguration() method is a simple helper that encapsulates the following: aurelia.use .defaultBindingLanguage() .defaultResources() .history() .router() .eventAggregator(); This is the default Aurelia configuration. It loads the default binding language, the default templating resources, the browser history plugin, the router plugin, and the event aggregator. This is the default set of features that a typical Aurelia application uses. All those plugins will be covered at one point or another throughout this article. All those plugins are optional except the binding language, which is needed by the templating engine. If you don’t need one, just don’t load it. In addition to the standard configuration, some plugins are loaded depending on the environment’s settings. When the environment’s debug property is true, Aurelia’s console logger is loaded using the developmentLogging() method, so traces and errors can be seen in the browser console. When the environment’s testing property is true, the aurelia-testing plugin is loaded using the plugin method. This plugin registers some resources that are useful when debugging components. The last line in the configure function starts the application and displays its root component, which is named app by convention. You may, however, bypass the convention and pass the name of your root component as the first argument to setRoot, if you named it otherwise: aurelia.start().then(() => aurelia.setRoot('root')); Here, the root component is expected to sit in the src/root.html and src/root.js files. Summary Getting started with Aurelia is very easy, thanks to the CLI. Installing the tooling and creating an empty project is simply a matter of running a couple of commands, and it takes typically more time waiting for the initial npm install to complete than doing the actual setup. We’ll go over dependency injection and logging, and we’ll start building our application by adding components and configuring routes to navigate between them. Resources for Article: Further resources on this subject: Introduction to JavaScript [article] Breaking into Microservices Architecture [article] Create Your First React Element [article]

Enterprise Architecture Concepts

Packt
02 Jan 2017
8 min read
In this article by Habib Ahmed Qureshi, Ganesan Senthilvel, and Ovais Mehboob Ahmed Khan, author of the book Enterprise Application Architecture with .NET Core, you will learn how to architect and design highly scalable, robust, clean, and highly performant applications in .NET Core 1.0. (For more resources related to this topic, see here.) In this article, we will cover the following topics: Why we need Enterprise Architecture? Knowing the role of an architect Why we need Enterprise Architecture? We will need to define, or at least provide, some basic fixed points to identify enterprise architecture specifically. Sketch Before playing an enterprise architect role, I used to get confused with so many architectural roles and terms, such as architect, solution architect, enterprise architect, data architect, blueprint, system diagram, and so on. In general, the industry perception is that the IT architect role is to draw few boxes with few suggestions; rest is with the development community. They feel that the architect role is quite easy just by drawing the diagram and not doing anything else. Like I said, it is completely a perception of few associates in the industry, and I used to be dragged by this category earlier: However, my enterprise architect job has cleared this perception and understands the true value of an enterprise architect. Definition of Enterprise Architecture In simple terms, enterprise is nothing but human endeavor. The objective of an enterprise is where people are collaborating for a particular purpose supported by a platform. Let me explain with an example of an online e-commerce company. Employees of that company are people who worked together to produce the profit of the firm using their various platforms, such as infrastructure, software, equipment, building, and so on. Enterprise has the structure/arrangements of all these pieces/components to build the complete organization. This is the exact place where enterprise architecture plays its key role. Every enterprise has an enterprise architect. EA is a process of architecting that applies the discipline to produce the prescribed output components. This process needs the experience, skill, discipline, and descriptions. Consider the following image where EA anticipates the system in two key states: Every enterprise needs an enterprise architect, not an optional. Let me give a simple example. When you need a car for business activities, you have two choices, either drive yourself or rent a driver. Still, you will need the driving capability to operate the car. EA is pretty similar to it. As depicted in the preceding diagram, EA anticipates the system in two key states, which are as follows: How it currently is How will it be in the future Basically, they work on options/alternatives to move from current to future state of an enterprise system. In this process, Enterprise Architecture does the following: Creates the frameworks to manage the architect Details the descriptions of the architect Roadmaps to lay the best way to change/improve the architecture Defines constraint/opportunity Anticipates the costs and benefits Evaluates the risks and values In this process of architecting, the system applies the discipline to produce the prescribed output components. Stakeholders of Enterprise Architecture Enterprise Architecture is so special because to its holistic view of management and evolution of an enterprise holistically. 
It has the unique combination of specialist technology, such as architecture frameworks and design pattern practices. Such a special EA has the following key stakeholders/users in its eco system: S.No. Stakeholders Organizational actions 1  Strategic planner Capability planning Set strategic direction Impact analysis 2  Decision makers Investment Divestment Approvals for the project Alignment with strategic direction 3  Analyst Quality assurance Compliance Alignment with business goals 4  Architects, project managers Solution development Investigate the opportunities Analysis of the existing options Business benefits Though many organizations intervened without EAs, every firm has the strong belief that it is better to architect before creating any system. It is integrated in coherent fashion with proactively designed system instead of random ad hoc and inconsistent mode. In terms of business benefits, cost is the key factor in the meaning of Return on Investment (RoI). That is how the industry business is driven in this highly competitive IT world. EA has the opportunity to prove its value for its own stakeholders with three major benefits, ranging from tactical to strategic positions. They are as follows: Cost reduction by technology standardization Business Process Improvement (BPI) Strategic differentiation Gartner's research paper on TCO: The First Justification for Enterprise IT Architecture by Colleen Young is one of the good references to justify the business benefits of an Enterprise Architecture. Check out https://www.gartner.com/doc/388268/enterprise-architecture-benefits-justification for more information. In the grand scheme of cost saving strategy, technology standardization adds a lot of efficiency to make the indirect benefits. Let me share my experience in this space. In one of my earlier legacy organization, it was noticed that the variety of technologies and products were built to server the business purpose due to the historical acquisitions and mergers. The best solution was platform standardization. All businesses have processes; few life examples are credit card processing, employee on-boarding, student enrollment, and so on. In this methodology, there are people involved with few steps for the particular system to get things done. In means of the business growth, the processes become chaotic, which leads to the duplicate efforts across the departments. Here, we miss the cross learning of the mistakes and corrections. BPI is an industry approach that is designed to support the enterprise for the realignment of the existing business operational process into the significant improved process. It helps the enterprise to identify and adopt in a better way using the industry tools and techniques. BPI is originally designed to induce a drastic game changing effect in the enterprise performance instead of bringing the changes in the incremental steps. In the current highly competitive market, strategic differentiation efforts make the firm create the perception in customers minds of receiving something of greater value than offered by the competition. An effective differentiation strategy is the best tool to highlight a business's unique features and make it stand out from the crowd. As the outcome of strategic differentiation, the business should realize the benefits on Enterprise Architecture investment. Also, it makes the business to institute the new ways of thinking to add the new customer segments along with new major competitive strategies. 
Knowing the role of an architect When I planned to switch my career to architecture track, I had too many questions in mind. People were referring to so many titles in the industry, such as architect, solution architect, enterprise architect, data architect, infra architect, and so on that I didn't know where exactly do I need to start and end. Industry had so many confusions to opt for. To understand it better, let me give my own work experiences as the best use cases. In the IT industry, two higher-level architects are named as follows: Solution architect (SA) Enterprise architect (EA) In my view, Enterprise Architecture is a much broader discipline than Solution Architecture with the sum of Business Architecture, Application Architecture, Data Architecture, and Technology Architecture. It will be covered in detail in the subsequent section: SA is focused on a specific solution and addresses the technological details that are compiled to the standards, roadmaps, and strategic objectives of the business. On comparing with SA, EA is at senior level. In general, EA takes a strategic, inclusive, and long term view at goals, opportunities, and challenges facing the company. However, SA is assigned to a particular project/program in an enterprise to ensure technical integrity and consistency of the solution at every stage of its life cycle. Role comparison between EA and SA Let me explain the working experiences of two different roles—EA and SA. When I played the SA role for Internet based telephony system, my role needs to build the tools, such as code generation, automation, and so on around the existing telephony system. It needs the skill set of the Microsoft platform technology and telephony domain to understand the existing system in a better way and then provide the better solution to improve the productivity and performance of the existing ecosystem. I was not really involved in the enterprise-level decision making process. Basically, it was pretty much like an individual contributor to build effective and efficient solutions to improvise the current system. As the second work, let me share my experience on the EA role for a leading financial company. The job was to build the enterprise data hub using the emerging big data technology. Degree of comparisons If we plot EA versus SA graphically, EA needs the higher degree of strategy focus and technology breath, as depicted in the following image: In terms of roles and responsibilities, EA and SA differ in their scope. Basically, the SA scope is limited within a project team and the expected delivery is to make the system quality of the solution to the business. In the same time, the EA scope is beyond SA by identifying or envisioning the future state of an organization. Summary In this article, you understood the fundamental concepts of enterprise architecture, and its related business need and benefits. Resources for Article: Further resources on this subject: Getting Started with ASP.NET Core and Bootstrap 4 [Article] Setting Up the Environment for ASP.NET MVC 6 [Article] How to Set Up CoreOS Environment [Article]

Programming with Linux

Packt
02 Jan 2017
20 min read
In this article by Edward Snajder, the author of Raspberry Pi Zero Cookbook, we pick up from having our operating system installed and our Raspberry Pi Zero on our home network. We can now dive into some basic Linux commands. You will find knowing these commands useful any time you are working on a Linux machine. In this article, we'll start prepping with some Linux recipes: (For more resources related to this topic, see here.) Navigating a filesystem and viewing and searching the contents of a directory Creating a new file, editing it in an editor, and changing ownership Renaming and copying/moving the file/folder into a new directory Installing and uninstalling a program Navigating a filesystem and viewing and searching the contents of a directory If you aren’t already a Linux or Mac user, getting around the filesystem can seem pretty alien at first. Truly, if you’ve only used Windows Explorer, this is going to seem like a strange, alien process. Once you start getting the hang of things, though, you’ll find that getting around the Linux filesystem is easy and fun. Getting ready The only thing you need to get started is a client connection to your Raspberry Pi Zero. I like to use SSH, but you can certainly connect using the serial connection or a terminal in X Windows. How to do it… If you want to find out where you are in the filesystem, use pwd: pi@rpz14101:~$ pwd /home/pi This tells me I’m in the /home/pi directory, which is the default home directory for the pi user. Generally, every user you create should get a /home/username directory to keep their own files in. This can be done automatically with user creation and the adduser command. To look at the contents of the directory you are in, use the ls command: pi@rpz14101:~$ ls Desktop Downloads Pictures python_games share Videos Documents Music Public Scratch Templates To look in another directory, simply specify the directory you want to list (you may need to use sudo depending on where you are looking): pi@rpz14101:~$ sudo ls /opt/ cookbook.share pigpio sonic-pi vc minecraft-pi share testsudo.deleteme Wolfram This is a nice quick summary of what files are in the directory, but you will usually want a little more information about the files and directories. The ls command has a ton of options, all of which can be displayed with ls –help and explained in more detail with man ls. Some of the best ones to know are as follows: -a show all files (regular and hidden) -l show long format (more file information, in columns) -h human readable (turns bytes into MB or GB as appropriate) -t or -tr order in time order, or reverse time order My typical command when I start looking in a directory is this one: ls -ltrh This produces all non-hidden files, with human-readable sizes, in column format, and the newest file at the bottom: pi@rpz14101:~$ ls -ltrh /opt/ total 513M drwxr-xr-x 7 root root 4.0K May 27 04:11 vc drwxr-xr-x 3 root root 4.0K May 27 04:32 Wolfram drwxr-xr-x 3 root root 4.0K May 27 04:34 pigpio drwxr-xr-x 4 root root 4.0K May 27 04:36 minecraft-pi drwxr-xr-x 5 root root 4.0K May 27 04:36 sonic-pi -rw-r--r-- 1 root root 0 Jul 4 13:41 testsudo.deleteme drwxr-xr-x 2 root root 4.0K Jul 9 13:05 share -rwxr-xr-x 1 root root 512M Jul 24 17:53 cookbook.share One last trick: if you need this format but there are a lot of files in the directory you are searching, you will see a ton of text scroll by. Maybe you just need the most recent or largest files? We can do this with a pipe (|) and the tail command. 
Let’s take a directory with a lot of files, such as /usr/lib/. To list the five most recently modified files, I can pipe ls -ltrh to the tail command: pi@rpz14101:~$ ls -ltrh /usr/lib/ | tail -5 lrwxrwxrwx 1 root root 22 May 27 04:40 libwiringPiDev.so -> libwiringPiDev.so.2.32 drwxr-xr-x 2 root root 4.0K Jun 5 10:38 samba drwxr-xr-x 3 root root 4.0K Jul 4 22:48 pppd drwxr-xr-x 65 root root 60K Jul 24 15:48 arm-linux-gnueabihf drwxr-xr-x 2 root root 4.0K Jul 24 15:48 tmpfiles.d What about the five largest files? Instead of the t in -ltrh, I can use S: pi@rpz14101:~$ ls -lSrh /usr/lib/ | tail -5 -rw-r--r-- 1 root root 2.8M Sep 17 2014 libmozjs185.so.1.0.0 -rw-r--r-- 1 root root 2.8M Sep 30 2014 libqscintilla2.so.11.3.0 -rw-r--r-- 1 root root 2.9M Jun 5 2014 libcmis-0.4.so.4.0.1 -rw-r--r-- 1 root root 3.4M Jun 12 2015 libv8.so.3.14.5 -rw-r--r-- 1 root root 5.1M Aug 18 2014 libmwaw-0.3.so.3.0.1 A little creative piping and you can find exactly the file you are looking for. If not, another great tool for exploring the filesystem is tree. This gives a pseudo-graphical tree that shows how the files are structured in the system. It produces a lot of text, especially if you have it print an entire directory tree. If just looking into directory structures, you can use tree with the -d flag for directories only. The -L flag will reduce how deep you dive into nested directories: pi@rpz14101:~$ tree -d -L 2 /opt/ /opt/ ├── minecraft-pi │ ├── api │ └── data ├── pigpio │ └── cgi ├── share ├── sonic-pi │ ├── app │ ├── bin │ └── etc ├── vc │ ├── bin │ ├── include │ ├── lib │ ├── sbin │ └── src └── Wolfram └── WolframEngine Last, we will look at a couple of searching utilities, find and grep. The find command is a powerful function that finds files in whatever directories you specify. It is great for trying to find that mystery piece of software that installed itself in an odd place or the needle-in-a-haystack file in a directory that contains hundreds of files. For example, if I were to run tree in the /opt/sonic-pi/ directory, it would run on for several seconds, and thousands of files would shoot by. I, however, am only interested in finding files with cowbell in the name. I can use the find command to look for it: pi@rpz14101:~$ find /opt/sonic-pi/ -name *cowbell* /opt/sonic-pi/etc/samples/drum_cowbell.flac When looking for anything with cowbell in the filename, the find command returns the exactly location of anything that matches. There are tons of options for using the find command; start with find –help, and then try man find when you want to get really deep. The grep command can be used in a couple different ways when searching for files, and it is one of those commands you will find yourself using constantly while both loving and hating its awesome power. Let’s say you need to find something inside of a file—grep is the tool for you. It can also find things like find can, but generally, find is more efficient at finding filename patterns than grep is. If I use grep to look for cowbells in my sonic-pi directory, I’ll get a different, and more colorful, output: We don’t see the file with cowbell in the name like we did using find, but we find every file that contains cowbell inside of it. The -r flag tells grep to delve into subdirectories, and -i tells it to ignore cases with cowbells (so Cowbell and cowbell are both found, as shown in the screenshot). As you use Linux more often, both find and grep become regularly used tools for administration and file management. 
This won’t be the last time you use them! Creating a new file, editing it in an editor, and changing ownership There are a lot of different text editors to use on a Linux system from the command line. The program vi is the Ubuntu default, and the program you will find installed on pretty much any Linux system. Emacs is another popular editor, and lots of Linux users get quite passionate about which one is better. My preference is vim, which is generally known as vi improved. The nano text editor is another one that is commonly installed on Linux distros, and it is one of the most lightweight editors available. Getting ready For this recipe, we will work with vi, since that’s definitely going to be installed on your system. If you want to try out vim, you can install it using this: sudo apt-get vim How to do it… First we will go to our share directory: cd /home/pi/share Then, we will create an empty file using the touch command: touch ch3_touchfile.txt If you use the ls command from the previous directory, you can see that the size of the file is 0. You can also display the contents of the file with the cat command, which will return nothing in this case. The touch command is a great way to test whether you have permissions to create files in a specific directory. You can also create a new file with the editor itself: vi ch3_vifile.txt This will open the vi editor with a blank file named ch3_vifile.txt: Using vi or vim (or Emacs) for the first time is completely different from using something like OpenOffice or Microsoft Word. Vi works in two modes: insert (or edit) and command. Once you learn how to use command mode, vi becomes a very efficient editor for working on scripts in bash or Python. Edit mode, more or less, is the mode where you can type and edit text like a regular WYSIWYG editor. There are books written on becoming a power user of vi, well beyond the scope of this book. Getting a handle on the basics is the best place to start: With the empty file, you can jump into edit mode by pressing the i or a keys. The editor will switch to insert mode, as shown by the -- INSERT -- in the bottom left of the screen. Then you can you start typing in your text: To get out of insert mode, press the Esc key. The :w command will save the file, and the :q command will quit. You can combine them, so :wq saves the file and quits. You can verify that the contents were saved with the cat command: pi@rpz14101:~$ cat ch3_vifile.txt Hello from the Raspberry Pi Zero Cookbook! Let’s take another look at the ls command and some of the information the -l format includes. We will take a look at the files we’ve created so far in this recipe: pi@rpz14101:~$ ls -ltrh *.txt -rw-r--r-- 1 pi pi 43 Jul 25 11:23 ch3_vifile.txt -rw-r--r-- 1 pi pi 0 Jul 25 11:24 ch3_touchfile.txt File Permissions Number of links Owner:Group Size Modification Date File Name -rw-r--r-- 1 pi:pi 43 Jul 25 11:23 ch3_vifile.txt We can see that since we made the files as the pi user, the owner of the file and the group owner are pi. By default, when a new user is created, a group container is created as well, so root has a root group, user rpz has an rpz group, and so on. We can change the ownership settings of a file with the chown command. Be careful, since you can take away your own access, though you can always sudo your way back. The chmod command will change who is allowed to do what with a file. 
Let’s look at ownership changes and what impact they will have with a few examples: pi@rpz14101:~ $ ls -ltrh *.txt -rw-r--r-- 1 pi pi 0 Jul 25 13:28 ch3_touchfile.txt -rwx------ 1 pi pi 43 Jul 25 13:28 ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt Hello from the Raspberry Pi Zero Cookbook! pi@rpz14101:~ $ sudo chown rpz:rpz ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt cat: ch3_vifile.txt: Permission denied pi@rpz14101:~ $ sudo cat ch3_vifile.txt Hello from the Raspberry Pi Zero Cookbook! pi@rpz14101:~ $ sudo chown rpz:pi ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt cat: ch3_vifile.txt: Permission denied pi@rpz14101:~ $ sudo chmod 750 ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt Hello from the Raspberry Pi Zero Cookbook! pi@rpz14101:~ $ sudo chown root:root ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt cat: ch3_vifile.txt: Permission denied pi@rpz14101:~ $ sudo chmod 755 ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt Hello from the Raspberry Pi Zero Cookbook! The chmod values are documented very well, and with a little practice, you can get your file permissions and ownership set up in a way that is both secure and easy to work with. Renaming and copying/moving the file/folder into a new directory A common activity on any filesystem is the practice of copying and moving files, and even directories, from one place to another. You might do it to make a backup copy of something, or you might decide that the contents should live in a more appropriate location. This recipe will explore how to manipulate files in the Raspbian system. Getting ready If you are still in your terminal from the last recipe, we are going to use the same files from the previous recipe. We should have the ownership back to pi:pi; if not, run the following: sudo chown pi:pi /home/pi/*.txt How to do it… First, let’s make a new directory. We’ll put it under the /home/pi/share/ folder so it is accessible to other computers on your home network. To make a directory, use the mkdir command: pi@rpz14101:~$ mkdir /home/pi/share/ch3 We can look at the new directory with the ls command: pi@rpz14101:~$ ls -ltrh /home/pi/share/ total 4.0K -rw-r--r-- 1 pi pi 0 Jul 24 15:56 helloNetwork.yes drwxr-xr-x 2 pi pi 4.0K Jul 25 13:06 ch3 A great flag to go with the mkdir command is -p. This will allow you to create directories and subdirectories in one command. Without it, if I try to create a subdirectory that doesn’t already exist, I’ll get an error: pi@rpz14101:~$ mkdir /home/pi/share/ch3/nested/folders mkdir: cannot create directory ‘/home/pi/share/ch3/nested/folders’: No such file or directory With the -p flag, it works without a problem: pi@rpz14101:~$ mkdir -p /home/pi/share/ch3/nested/folders The tree command shows the structure of our ch3 directory: pi@rpz14101:~$ tree /home/pi/share /home/pi/share ├── ch3 │ └── nested │ └── folders └── helloNetwork.yes 3 directories, 1 file Now, let’s move our files to our new ch3 directory. The copy and move commands—cp and mv, respectively—are the tools we will use. Copying a file from one place to another is as simple as indicating the file's source and destination. The following command will make a copy of vifile.txt and save it as vifile.txt.copy in the /home/pi/share/ch3/ directory: cp /home/pi/ch3_vifile.txt /home/pi/share/ch3/ch3_vifile.txt.copy We can copy files as well as directories and their contents as long as you have enough disk space. To move or rename a file, we use the mv command. 
This takes the file given in the source and moves it to the destination provided. As simple as the cp command, let's move all of our files to the share directory:

mv /home/pi/ch3_vifile.txt /home/pi/share/ch3/ch3_vifile.txt.moved
mv /home/pi/ch3_touchfile.txt /home/pi/share/ch3/ch3_touchfile.txt

If we look at the tree of our share directory, we will see everything nicely organized:

pi@rpz14101:~$ tree /home/pi/share/
/home/pi/share/
├── ch3
│   ├── ch3_touchfile.txt
│   ├── ch3_vifile.txt.copy
│   ├── ch3_vifile.txt.moved
│   └── nested
│       └── folders
└── helloNetwork.yes

3 directories, 4 files

Installing and uninstalling a program
We've installed a few programs throughout the book so far, but have yet to delve into the apt-get command and the family of software-installation utilities. Now, we will learn how to install and uninstall any program available for Raspbian, as well as how to search for new software and run updates.

Getting ready
Stay in your terminal window, and get ready to install some applications!

How to do it…
The apt-* commands are a suite of utilities that allow you to do various things with installed packages. To install a package, we use the apt-get tool and the install command, like this:

sudo apt-get install <packagename>

Let's install something cool—how about a Matrix screensaver? It is super easy and works great from the command line. To look for a package, we use the apt-cache search command. apt-cache is another tool in the apt-* family of utilities, and it checks the software database for matches. Running sudo apt-cache search matrix returns a ton of results! The word "matrix" is a little too popular for us computer and math nerds—we have matrices everywhere! It would take forever to go through that list to find what we are looking for. Fortunately, we can take advantage of grep, which we touched on in an earlier recipe, to narrow down our results. One of the fun things about using Linux and the command line is the way you can chain commands together to do cool things:

pi@rpz14101:~ $ sudo apt-cache search matrix | grep "The Matrix"
cmatrix - simulates the display from "The Matrix"
wmmatrix - View The Matrix in a Window Maker dock application

That's a bit more manageable! We could also have narrowed the list using this command:

sudo apt-cache search "The Matrix"

This returns fewer results than before, but a few more than the grep command. Whichever way you find it, we see that the cmatrix package is the one we are looking for. Installing it is as simple as running this:

pi@rpz14101:~ $ sudo apt-get install cmatrix
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  cmatrix-xfont
The following NEW packages will be installed:
  cmatrix
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 16.2 kB of archives.
After this operation, 27.6 kB of additional disk space will be used.
Get:1 http://mirrordirector.raspbian.org/raspbian/ jessie/main cmatrix armhf 1.2a-5 [16.2 kB]
Fetched 16.2 kB in 1s (15.3 kB/s)
Selecting previously unselected package cmatrix.
(Reading database ... 121906 files and directories currently installed.)
Preparing to unpack .../cmatrix_1.2a-5_armhf.deb ...
Unpacking cmatrix (1.2a-5) ...
Processing triggers for man-db (2.7.0.2-5) ...
Setting up cmatrix (1.2a-5) ...

After that, we are ready to go! Channel your inner Neo and run this command:

cmatrix -s -b

You should be in the Matrix! Try it on your serial and SSH connections, and even in the terminal on VNC: you'll notice differences in the rendering behavior.
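Since this recipe is about uninstalling as well, it is worth showing the reverse operation before we move on. Removing a package from the command line uses the same apt-get tool; remove deletes the program itself while leaving its configuration files behind, and purge gets rid of those too. For example, if you ever tire of the falling green characters:

sudo apt-get remove cmatrix
sudo apt-get purge cmatrix

You only need one of the two (purge implies remove), so pick whichever matches how clean you want the system to be.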
There are literally thousands of software packages available to install in the repositories of our awesome open source communities. Pretty much anything you think a computer should be able to do, someone, or a group of people, has worked on a solution and pushed it out to the repositories.  We will be using apt-get a lot throughout this cookbook; it is one of the commands you’ll find yourself using all the time as you get more interested in Raspberry Pis and the Linux operating system. Running sudo apt-get update will check all repositories to see whether there are any version updates available. Here, you can see all of the locations it checks to see whether there is anything new for Raspbian: pi@rpz14101:~ $ sudo apt-get update Get:1 http://archive.raspberrypi.org jessie InRelease [13.2 kB] Get:2 http://mirrordirector.raspbian.org jessie InRelease [14.9 kB] Get:3 http://archive.raspberrypi.org jessie/main armhf Packages [144 kB] Get:4 http://mirrordirector.raspbian.org jessie/main armhf Packages [8,981 kB] Hit http://archive.raspberrypi.org jessie/ui armhf Packages Ign http://archive.raspberrypi.org jessie/main Translation-en_GB Get:5 http://mirrordirector.raspbian.org jessie/contrib armhf Packages [37.5 kB] Ign http://archive.raspberrypi.org jessie/main Translation-en Get:6 http://mirrordirector.raspbian.org jessie/non-free armhf Packages [70.3 kB] Ign http://archive.raspberrypi.org jessie/ui Translation-en_GB Ign http://archive.raspberrypi.org jessie/ui Translation-en Get:7 http://mirrordirector.raspbian.org jessie/rpi armhf Packages [1,356 B] Ign http://mirrordirector.raspbian.org jessie/contrib Translation-en_GB Ign http://mirrordirector.raspbian.org jessie/contrib Translation-en Ign http://mirrordirector.raspbian.org jessie/main Translation-en_GB Ign http://mirrordirector.raspbian.org jessie/main Translation-en Ign http://mirrordirector.raspbian.org jessie/non-free Translation-en_GB Ign http://mirrordirector.raspbian.org jessie/non-free Translation-en Ign http://mirrordirector.raspbian.org jessie/rpi Translation-en_GB Ign http://mirrordirector.raspbian.org jessie/rpi Translation-en Fetched 9,263 kB in 34s (272 kB/s) Reading package lists... Done After updating, apt-get upgrade will look at the versions of everything you have installed and upgrade anything to the latest version if there is one available. Depending on how many updates you have, this can take quite a while: pi@rpz14101:~ $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages will be upgraded: dpkg-dev gir1.2-gdkpixbuf-2.0 initramfs-tools libavcodec56 libavformat56 libavresample2 libavutil54 libdevmapper-event1.02.1 libdevmapper1.02.1 libdpkg-perl python-picamera python3-picamera raspberrypi-kernel raspberrypi-net-mods ssh tzdata xarchiver 40 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 57.0 MB of archives. After this operation, 415 kB of additional disk space will be used. Do you want to continue? [Y/n] y Get:1 http://archive.raspberrypi.org/debian/ jessie/main nodered armhf 0.14.5 [5,578 kB] … Adding 'diversion of /boot/overlays/w1-gpio.dtbo to /usr/share/rpikernelhack/overlays/w1-gpio.dtbo by rpikernelhack' … run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.4.11-v7+ /boot/kernel7.img Preparing to unpack .../raspberrypi-net-mods_1.2.3_armhf.deb ... Unpacking raspberrypi-net-mods (1.2.3) over (1.2.2) ... Processing triggers for man-db (2.7.0.2-5) ... 
… Setting up libssl1.0.0:armhf (1.0.1t-1+deb8u2) ... Setting up libxml2:armhf (2.9.1+dfsg1-5+deb8u2) ... … Removing 'diversion of /boot/overlays/w1-gpio-pullup.dtbo to /usr/share/rpikernelhack/overlays/w1-gpio-pullup.dtbo by rpikernelhack' … Setting up raspberrypi-net-mods (1.2.3) ... Modified /etc/network/interfaces detected. Leaving unchanged and writing new file as interfaces.new. Processing triggers for libc-bin (2.19-18+deb8u4) ... Processing triggers for initramfs-tools (0.120+deb8u2) ... You don’t really have to understand the details of what’s going on during the upgrade, and it will let you know if there were any problems at the end (and often what to do to fix them). Regularly updating and upgrading will keep all of your software current with all of the latest bug fixes and security patches. There’s more… You can also add and remove software from the GUI. If you log on to your Pi, either directly to a monitor or over VNC Server (a recipe we covered earlier), you can find the Add / Remove Software option under Menu | Preferences: The PiPackages utility makes it very easy to find software when you only have a general idea of what you are looking for. While you can do the same things with the apt commands, if you are browsing, this is a little easier on the eyes. The utility provides categorizations so you don’t have to scroll through every package. Clicking on a package provides a detailed description: Simply check the box and click on Apply or OK, and the software will be installed. Now you can install software on your Raspberry Pi Zero from the command line or GUI. Summary In this article, we looked at the basic file manipulation functionalities of Linux on a Raspberry Pi. We saw how to navigate the filesystem and create, edit, rename, copy, and move files and folders. We also saw how to change ownership of a file and install and uninstall programs. Resources for Article: Further resources on this subject: Sending Notifications using Raspberry Pi Zero [Article] Raspberry Pi LED Blueprints [Article] Hacking a Raspberry Pi project? Understand electronics first! [Article]

Introduction to Creational Patterns using Go Programming

Packt
02 Jan 2017
12 min read
This article by Mario Castro Contreras, author of the book Go Design Patterns, introduces you to the Creational design patterns that are explained in the book. As the title implies, this article groups common practices for creating objects. Creational patterns try to give ready-to-use objects to users instead of asking for their input, which, in some cases, could be complex and will couple your code with the concrete implementations of the functionality that should be defined in an interface. (For more resources related to this topic, see here.) Singleton design pattern – Having a unique instance of an object in the entire program Have you ever done interviews for software engineers? It's interesting that when you ask them about design patterns, more than 80% will start saying Singleton design pattern. Why is that? Maybe it's because it is one of the most used design patterns out there or one of the easiest to grasp. We will start our journey on creational design patterns because of the latter reason. Description Singleton pattern is easy to remember. As the name implies, it will provide you a single instance of an object, and guarantee that there are no duplicates. At the first call to use the instance, it is created and then reused between all the parts in the application that need to use that particular behavior. Objective of the Singleton pattern You'll use Singleton pattern in many different situations. For example: When you want to use the same connection to a database to make every query When you open a Secure Shell (SSH) connection to a server to do a few tasks, and don't want to reopen the connection for each task If you need to limit the access to some variable or space, you use a Singleton as the door to this variable. If you need to limit the number of calls to some places, you create a Singleton instance to make the calls in the accepted window The possibilities are endless, and we have just mentioned some of them. Implementation Finally, we have to implement the Singleton pattern. You'll usually write a static method and instance to retrieve the Singleton instance. In Go, we don't have the keyword static, but we can achieve the same result by using the scope of the package. First, we create a structure that contains the object which we want to guarantee to be a Singleton during the execution of the program: package creational type singleton struct{ count int } var instance *singleton func GetInstance() *singleton { if instance == nil { instance = new(singleton) } return instance } func (s *singleton) AddOne() int { s.count++ return s.count } We must pay close attention to this piece of code. In languages like Java or C++, the variable instance would be initialized to NULL at the beginning of the program. In Go, you can initialize a pointer to a structure as nil, but you cannot initialize a structure to nil (the equivalent of NULL). So the var instance *singleton line defines a pointer to a structure of type Singleton as nil, and the variable called instance. We created a GetInstance method that checks if the instance has not been initialized already (instance == nil), and creates an instance in the space already allocated in the line instance = new(singleton). Remember, when we use the keyword new, we are creating a pointer to the type between the parentheses. The AddOne method will take the count of the variable instance, raise it by one, and return the current value of the counter. 
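The unit tests referred to in a moment come from the book's examples and are not listed in this article, so here is a minimal sketch of what a test for this behavior could look like. The test name matches the -run filter used below, but the file layout, messages, and exact assertions are illustrative assumptions rather than the book's own test code:

package creational

import "testing"

func TestGetInstance(t *testing.T) {
	counter1 := GetInstance()
	if counter1 == nil {
		t.Fatal("expected a singleton instance, got nil")
	}

	// AddOne mutates the counter held by the single shared instance.
	count := counter1.AddOne()
	if count != 1 {
		t.Errorf("after one call to AddOne the count must be 1, got %d", count)
	}

	// A second call to GetInstance must hand back the very same pointer...
	counter2 := GetInstance()
	if counter2 != counter1 {
		t.Error("expected GetInstance to return the same instance on every call")
	}

	// ...so the counter keeps its state across calls.
	count = counter2.AddOne()
	if count != 2 {
		t.Errorf("after two calls to AddOne the count must be 2, got %d", count)
	}
}

The key assertions are the last two: every call to GetInstance must return the same pointer, and state stored in it must survive between calls, which is exactly what makes it a Singleton.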
Let's now run our unit tests again:

$ go test -v -run=GetInstance
=== RUN TestGetInstance
--- PASS: TestGetInstance (0.00s)
PASS
ok

Factory method – Delegating the creation of different types of payments
The Factory method pattern (or simply, Factory) is probably the second-best known and most used design pattern in the industry. Its purpose is to abstract the user from the knowledge of the structure it needs to achieve a specific purpose. By delegating this decision to a Factory, the Factory can provide the object that best fits the user's needs or the most up-to-date version. It can also ease the process of downgrading or upgrading the implementation of an object if needed.

Description
When using the Factory method design pattern, we gain an extra layer of encapsulation so that our program can grow in a controlled environment. With the Factory method, we delegate the creation of families of objects to a different package or object, to abstract us from the knowledge of the pool of possible objects we could use. Imagine that you have two ways to access some specific resource: by HTTP or FTP. For us, the specific implementation of this access should be invisible. Maybe we just know that the resource is available over HTTP or FTP, and we just want a connection that uses one of these protocols. Instead of implementing the connection ourselves, we can use the Factory method to ask for the specific connection. With this approach, we can grow easily in the future if we need to add an HTTPS object.

Objective of the Factory method
After the previous description, the following objectives of the Factory method design pattern must be clear to you:
Delegating the creation of new instances of structures to a different part of the program
Working at the interface level instead of with concrete implementations
Grouping families of objects to obtain a family object creator

Implementation
We will start with the GetPaymentMethod method. It must receive an integer that matches one of the constants defined in the same file, so it knows which implementation it should return.

package creational

import (
    "errors"
    "fmt"
)

type PaymentMethod interface {
    Pay(amount float32) string
}

const (
    Cash      = 1
    DebitCard = 2
)

func GetPaymentMethod(m int) (PaymentMethod, error) {
    switch m {
    case Cash:
        return new(CashPM), nil
    case DebitCard:
        return new(DebitCardPM), nil
    default:
        return nil, errors.New(fmt.Sprintf("Payment method %d not recognized\n", m))
    }
}

We use a plain switch to check the contents of the argument m (method). If it matches one of the known methods—cash or debit card—it returns a new instance of the corresponding type. Otherwise, it returns nil and an error indicating that the payment method has not been recognized. Now we can run our tests again to check the second part of the unit tests:

$ go test -v -run=GetPaymentMethod .
=== RUN TestGetPaymentMethodCash
--- FAIL: TestGetPaymentMethodCash (0.00s)
        factory_test.go:16: The cash payment method message wasn't correct
        factory_test.go:18: LOG:
=== RUN TestGetPaymentMethodDebitCard
--- FAIL: TestGetPaymentMethodDebitCard (0.00s)
        factory_test.go:28: The debit card payment method message wasn't correct
        factory_test.go:30: LOG:
=== RUN TestGetPaymentMethodNonExistent
--- PASS: TestGetPaymentMethodNonExistent (0.00s)
        factory_test.go:38: LOG: Payment method 20 not recognized
FAIL
exit status 1
FAIL

Now we no longer get errors saying the payment method type couldn't be found. Instead, we receive a "message wasn't correct" failure when the tests exercise each of the methods it covers.
We also got rid of the Not implemented message that was being returned when we asked for an unknown payment method. Let's implement the structures now:

type CashPM struct{}
type DebitCardPM struct{}

func (c *CashPM) Pay(amount float32) string {
    return fmt.Sprintf("%0.2f paid using cash\n", amount)
}

func (c *DebitCardPM) Pay(amount float32) string {
    return fmt.Sprintf("%#0.2f paid using debit card\n", amount)
}

We just take the amount and print it in a nicely formatted message. With this implementation, all the tests will pass now:

$ go test -v -run=GetPaymentMethod .
=== RUN   TestGetPaymentMethodCash
--- PASS: TestGetPaymentMethodCash (0.00s)
        factory_test.go:18: LOG: 10.30 paid using cash
=== RUN   TestGetPaymentMethodDebitCard
--- PASS: TestGetPaymentMethodDebitCard (0.00s)
        factory_test.go:30: LOG: 22.30 paid using debit card
=== RUN   TestGetPaymentMethodNonExistent
--- PASS: TestGetPaymentMethodNonExistent (0.00s)
        factory_test.go:38: LOG: Payment method 20 not recognized
PASS
ok

Do you see the LOG: messages? They aren't errors; we just print some information that we receive when using the package under test. These messages are omitted unless you pass the -v flag to the test command:

$ go test -run=GetPaymentMethod .
ok

Abstract Factory – A factory of factories

After learning about the Factory design pattern, where we grouped a family of related objects (in our case, payment methods), one can quickly think: what if I group families of objects in a more structured hierarchy of families?

Description

The Abstract Factory design pattern is a new layer of grouping to achieve a bigger (and more complex) composite object, which is used through its interfaces. The idea behind grouping objects in families, and grouping families, is to have big factories that are interchangeable and can grow more easily. In the early stages of development, it is also easier to work with factories and abstract factories than to wait until all concrete implementations are done to start your code. Also, you won't write an Abstract Factory from the beginning unless you know that your object inventory for a particular field is going to be very large and could easily be grouped into families.

The objective

Grouping related families of objects is very convenient when the number of objects grows so much that creating a single point to get them all seems the only way to gain flexibility in runtime object creation. The following objectives of the Abstract Factory method must be clear to you:

Provide a new layer of encapsulation for Factory methods that return a common interface for all factories
Group common factories into a super Factory (also called a factory of factories)

Implementation

The implementation of every concrete factory is already done, for the sake of brevity. They are very similar to the Factory method, the only difference being that in the Factory method we don't use an instance of the factory, because we use the package functions directly. The implementation of the vehicle factory is as follows:

func GetVehicleFactory(f int) (VehicleFactory, error) {
    switch f {
    case CarFactoryType:
        return new(CarFactory), nil
    case MotorbikeFactoryType:
        return new(MotorbikeFactory), nil
    default:
        return nil, errors.New(fmt.Sprintf("Factory with id %d not recognized\n", f))
    }
}

Like in any factory, we switch between the factory possibilities to return the one that was demanded. As we have already implemented all the concrete vehicles, the tests can be run too:

go test -v -run=Factory -cover .
=== RUN   TestMotorbikeFactory
--- PASS: TestMotorbikeFactory (0.00s)
        vehicle_factory_test.go:16: Motorbike vehicle has 2 wheels
        vehicle_factory_test.go:22: Sport motorbike has type 1
=== RUN   TestCarFactory
--- PASS: TestCarFactory (0.00s)
        vehicle_factory_test.go:36: Car vehicle has 4 seats
        vehicle_factory_test.go:42: Luxury car has 4 doors.
PASS
coverage: 45.8% of statements
ok

All of them passed. Take a close look and note that we have used the -cover flag when running the tests to return a coverage percentage for the package: 45.8%. What this tells us is that 45.8% of the lines are covered by the tests we have written, but 54.2% is still not covered. This is because we haven't covered the cruise motorbike and the family car with tests. If you write those tests, the result should rise to around 70.8%.

Prototype design pattern

The last pattern we will see in this article is the Prototype pattern. Like all creational patterns, this too comes in handy when creating objects, and it is very common to see the Prototype pattern surrounded by more patterns.

Description

The aim of the Prototype pattern is to have an object, or a set of objects, that is already created at compilation time, but which you can clone as many times as you want at runtime. This is useful, for example, as a default template for a user who has just registered with your webpage, or a default pricing plan in some service. The key difference between this and the Builder pattern is that objects are cloned for the user instead of being built at runtime. You can also build a cache-like solution, storing information using a prototype.

Objective

Maintain a set of objects that will be cloned to create new instances
Free the CPU from complex object initialization, taking up more memory resources instead

We will start with the GetClone method. This method should return an item of the specified type:

type ShirtsCache struct{}

func (s *ShirtsCache) GetClone(m int) (ItemInfoGetter, error) {
    switch m {
    case White:
        newItem := *whitePrototype
        return &newItem, nil
    case Black:
        newItem := *blackPrototype
        return &newItem, nil
    case Blue:
        newItem := *bluePrototype
        return &newItem, nil
    default:
        return nil, errors.New("Shirt model not recognized")
    }
}

The Shirt structure also needs a GetInfo implementation to print the contents of the instances:

type ShirtColor byte

type Shirt struct {
    Price float32
    SKU   string
    Color ShirtColor
}

func (s *Shirt) GetInfo() string {
    return fmt.Sprintf("Shirt with SKU '%s' and Color id %d that costs %f\n", s.SKU, s.Color, s.Price)
}

Finally, let's run the tests to see that everything is now working:

go test -run=TestClone -v .
=== RUN   TestClone
--- PASS: TestClone (0.00s)
        prototype_test.go:41: LOG: Shirt with SKU 'abbcc' and Color id 1 that costs 15.000000
        prototype_test.go:42: LOG: Shirt with SKU 'empty' and Color id 1 that costs 15.000000
        prototype_test.go:44: LOG: The memory positions of the shirts are different 0xc42002c038 != 0xc42002c040
PASS
ok

In the log (remember to set the -v flag when running the tests), you can check that shirt1 and shirt2 have different SKUs. Also, we can see the memory positions of both objects. Take into account that the positions shown on your computer will probably be different.

Summary

We have seen the creational design patterns commonly used in the software industry. Their purpose is to abstract the user from the creation of objects, for handling complexity or for maintainability purposes.
Design patterns have been the foundation of thousands of applications and libraries since the nineties, and most of the software we use today has many of these creational patterns under the hood. Resources for Article: Further resources on this subject: Getting Started [article] Thinking Functionally [article] Auditing and E-discovery [article]

Tools in TypeScript

Packt
02 Jan 2017
14 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, Second Edition, you will learn how to build enterprise-ready, industrial-strength web applications using TypeScript and leading JavaScript frameworks. In this article, we will cover the following topics:

What is TypeScript?
The benefits of TypeScript

(For more resources related to this topic, see here.)

What is TypeScript?

TypeScript is both a language and a set of tools to generate JavaScript. It was designed by Anders Hejlsberg at Microsoft (the designer of C#) as an open source project to help developers write enterprise-scale JavaScript. TypeScript generates JavaScript – it's as simple as that. Instead of requiring a completely new runtime environment, TypeScript-generated JavaScript can reuse all of the existing JavaScript tools, frameworks, and wealth of libraries that are available for JavaScript. The TypeScript language and compiler, however, bring the development of JavaScript closer to a more traditional object-oriented experience.

ECMAScript

JavaScript as a language has been around for a long time, and is also governed by a language feature standard. The language defined in this standard is called ECMAScript, and each JavaScript interpreter must deliver functions and features that conform to this standard. The definition of this standard helped the growth of JavaScript and the web in general, and allowed websites to render correctly on many different browsers on many different operating systems. The ECMAScript standard was published in 1999 and is known as ECMA-262, third edition. With the popularity of the language, and the explosive growth of internet applications, the ECMAScript standard needed to be revised and updated. This process resulted in a draft specification for ECMAScript, called the fourth edition. Unfortunately, this draft suggested a complete overhaul of the language, and was not well received. Eventually, leaders from Yahoo, Google, and Microsoft tabled an alternate proposal which they called ECMAScript 3.1. This proposal was numbered 3.1, as it was a smaller feature set of the third edition, and sat between edition three and four of the standard. This proposal for language changes, tabled earlier, was eventually adopted as the fifth edition of the standard, and was called ECMAScript 5. The ECMAScript fourth edition was never published, but it was decided to merge the best features of both the fourth edition and the 3.1 feature set into a sixth edition named ECMAScript Harmony. The TypeScript compiler has a parameter that can switch between different versions of the ECMAScript standard. TypeScript currently supports ECMAScript 3, ECMAScript 5, and ECMAScript 6. When the compiler runs over your TypeScript, it will generate compile errors if the code you are attempting to compile is not valid for that standard. The team at Microsoft has committed to follow the ECMAScript standards in any new versions of the TypeScript compiler, so as new editions are adopted, the TypeScript language and compiler will follow suit.

The benefits of TypeScript

To give you a flavor of the benefits of TypeScript (and this is by no means the full list), let's have a very quick look at some of the things that TypeScript brings to the table:

A compilation step
Strong or static typing
Type definitions for popular JavaScript libraries
Encapsulation
Private and public member variable decorators

Compiling

One of the most frustrating things about JavaScript development is the lack of a compilation step.
JavaScript is an interpreted language, and therefore needs to be run in order to test that it is valid. Every JavaScript developer will tell horror stories of hours spent trying to find bugs in their code, only to find that they have missed a stray closing brace }, or a simple comma, or even a double quote " where there should have been a single quote '. Even worse, the real headaches arrive when you misspell a property name, or unwittingly reassign a global variable. TypeScript will compile your code, and generate compilation errors where it finds these sorts of syntax errors. This is obviously very useful, and can help to highlight errors before the JavaScript is run. In large projects, programmers will often need to do large code merges, and with today's tools doing automatic merges, it is surprising how often the compiler will pick up these types of errors. While tools to do this sort of syntax checking, like JSLint, have been around for years, it is obviously beneficial to have these tools integrated into your IDE. Using TypeScript in a continuous integration environment will also fail a build completely when compilation errors are found, further protecting your programmers against these types of bugs.

Strong typing

JavaScript is not strongly typed. It is a language that is very dynamic, as it allows objects to change their properties and behavior on the fly. As an example of this, consider the following code:

var test = "this is a string";
test = 1;
test = function(a, b) {
    return a + b;
}

On the first line of the preceding code, the variable test is bound to a string. It is then assigned a number, and finally is redefined to be a function that expects two parameters. Traditional object-oriented languages, however, will not allow the type of a variable to change, hence they are called strongly typed languages. While all of the preceding code is valid JavaScript and could be justified, it is quite easy to see how this could cause runtime errors during execution. Imagine that you were responsible for writing a library function to add two numbers, and then another developer inadvertently reassigned your function to subtract these numbers instead. These sorts of errors may be easy to spot in a few lines of code, but they become increasingly difficult to find and fix as your code base and your development team grow. Another feature of strong typing is that the IDE you are working in understands what type of variable you are working with, and can bring better autocomplete or Intellisense options to the fore.

TypeScript's syntactic sugar

TypeScript introduces a very simple syntax to check the type of an object at compile time. This syntax has been referred to as syntactic sugar, or more formally, type annotations. Consider the following TypeScript code:

var test: string = "this is a string";
test = 1;
test = function(a, b) {
    return a + b;
}

Note that on the first line of this code snippet, we have introduced a colon : and a string keyword between our variable and its assignment. This type annotation syntax means that we are setting the type of our variable to be of type string, and that any code that does not adhere to these rules will generate a compile error. Running the preceding code through the TypeScript compiler will generate two errors:

hello.ts(3,1): error TS2322: Type 'number' is not assignable to type 'string'.
hello.ts(4,1): error TS2322: Type '(a: any, b: any) => any' is not assignable to type 'string'.

The first error is fairly obvious.
We have specified that the variable test is a string, and therefore attempting to assign a number to it will generate a compile error. The second error is similar to the first, and is, in essence, saying that we cannot assign a function to a string. In this way, the TypeScript compiler introduces strong or static typing to your JavaScript code, giving you all of the benefits of a strongly typed language. TypeScript is therefore described as a superset of JavaScript.

Type definitions for popular JavaScript libraries

As we have seen, TypeScript has the ability to annotate JavaScript, and bring strong typing to the JavaScript development experience. But how do we strongly type existing JavaScript libraries? The answer is surprisingly simple: by creating a definition file. TypeScript uses files with a .d.ts extension as a sort of header file, similar to languages such as C++, to superimpose strong typing on existing JavaScript libraries. These definition files hold information that describes each available function and/or variable, along with their associated type annotations. Let's have a quick look at what a definition would look like. As an example, I have lifted a function from the popular Jasmine unit testing framework, called describe:

var describe = function(description, specDefinitions) {
    return jasmine.getEnv().describe(description, specDefinitions);
};

Note that this function has two parameters: description and specDefinitions. But JavaScript does not tell us what sort of variables these are. We would need to have a look at the Jasmine documentation to figure out how to call this function. If we head over to http://jasmine.github.io/2.0/introduction.html, we will see an example of how to use this function:

describe("A suite", function () {
    it("contains spec with an expectation", function () {
        expect(true).toBe(true);
    });
});

From the documentation, then, we can easily see that the first parameter is a string, and the second parameter is a function. But there is nothing in JavaScript that forces us to conform to this API. As mentioned before, we could easily call this function with two numbers, or inadvertently switch the parameters around, sending a function first and a string second. We will obviously start getting runtime errors if we do this, but TypeScript, using a definition file, can generate compile-time errors before we even attempt to run this code. Let's have a look at a piece of the jasmine.d.ts definition file:

declare function describe(
    description: string,
    specDefinitions: () => void
): void;

This is the TypeScript definition for the describe function. Firstly, declare function describe tells us that we can use a function called describe, but that the implementation of this function will be provided at runtime. Clearly, the description parameter is strongly typed to a string, and the specDefinitions parameter is strongly typed to be a function that returns void. TypeScript uses the parentheses () syntax to declare functions, and the arrow syntax to show the return type of the function. So () => void is a function that does not return anything. Finally, the describe function itself will return void.
If our code were to try and pass in a function as the first parameter, and a string as the second parameter (clearly breaking the definition of this function), as shown in the following example:

describe(() => { /* function body */}, "description");

TypeScript will generate the following error:

hello.ts(11,11): error TS2345: Argument of type '() => void' is not assignable to parameter of type 'string'.

This error is telling us that we are attempting to call the describe function with invalid parameters, and it clearly shows that TypeScript will generate errors if we attempt to use external JavaScript libraries incorrectly.

DefinitelyTyped

Soon after TypeScript was released, Boris Yankov started a GitHub repository to house definition files, called DefinitelyTyped (http://definitelytyped.org). This repository has now become the first port of call for integrating external libraries into TypeScript, and it currently holds definitions for over 1,600 JavaScript libraries.

Encapsulation

One of the fundamental principles of object-oriented programming is encapsulation: the ability to define data, as well as a set of functions that can operate on that data, into a single component. Most programming languages have the concept of a class for this purpose, providing a way to define a template for data and related functions. Let's first take a look at a simple TypeScript class definition:

class MyClass {
    add(x, y) {
        return x + y;
    }
}

var classInstance = new MyClass();
var result = classInstance.add(1,2);
console.log(`add(1,2) returns ${result}`);

This code is pretty simple to read and understand. We have created a class, named MyClass, with a simple add function. To use this class we simply create an instance of it, and call the add function with two arguments. JavaScript, unfortunately, does not have a class statement, but instead uses functions to reproduce the functionality of classes. Encapsulation through classes is accomplished by either using the prototype pattern, or by using the closure pattern. Understanding prototypes and the closure pattern, and using them correctly, is considered a fundamental skill when writing enterprise-scale JavaScript. A closure is essentially a function that refers to independent variables. This means that variables defined within a closure function remember the environment in which they were created. This provides JavaScript with a way to define local variables, and provide encapsulation. Writing the MyClass definition in the preceding code, using a closure in JavaScript, would look something like the following:

var MyClass = (function () {
    // the self-invoking function is the
    // environment that will be remembered
    // by the closure
    function MyClass() {
        // MyClass is the inner function,
        // the closure
    }
    MyClass.prototype.add = function (x, y) {
        return x + y;
    };
    return MyClass;
}());

var classInstance = new MyClass();
var result = classInstance.add(1, 2);
console.log("add(1,2) returns " + result);

We start with a variable called MyClass, and assign it to a function that is executed immediately (note the })(); syntax near the bottom of the closure definition). This syntax is a common way to write JavaScript in order to avoid leaking variables into the global namespace. We then define a new function named MyClass, and return this new function to the outer calling function. We then use the prototype keyword to inject a new function into the MyClass definition. This function is named add and takes two parameters, returning their sum.
The last few lines of the code show how to use this closure in JavaScript. Create an instance of the closure type, and then execute the add function. Running this code will log add(1,2) returns 3 to the console, as expected. Looking at the JavaScript code versus the TypeScript code, we can easily see how simple the TypeScript looks compared to the equivalent JavaScript. Remember how we mentioned that JavaScript programmers can easily misplace a brace {, or a bracket (? Have a look at the last line in the closure definition: })();. Getting one of these brackets or braces wrong can take hours of debugging to find.

Public and private accessors

A further object-oriented principle used in encapsulation is the concept of data hiding, that is, the ability to have public and private variables. Private variables are meant to be hidden from the user of a particular class, as these variables should only be used by the class itself. Inadvertently exposing these variables can easily cause runtime errors. Unfortunately, JavaScript does not have a native way of declaring variables private. While this functionality can be emulated using closures, a lot of JavaScript programmers simply use the underscore character _ to denote a private variable. At runtime though, if you know the name of a private variable, you can easily assign a value to it. Consider the following JavaScript code:

var MyClass = (function() {
    function MyClass() {
        this._count = 0;
    }
    MyClass.prototype.countUp = function() {
        this._count ++;
    }
    MyClass.prototype.getCountUp = function() {
        return this._count;
    }
    return MyClass;
}());

var test = new MyClass();
test._count = 17;
console.log("countUp : " + test.getCountUp());

The MyClass variable is actually a closure with a constructor function, a countUp function, and a getCountUp function. The variable _count is supposed to be a private member variable that is used only within the scope of the closure. Using the underscore naming convention gives the user of this class some indication that the variable is private, but JavaScript will still allow you to manipulate the variable _count. Take a look at the second-to-last line of the code snippet. We are explicitly setting the value of _count to 17, which is allowed by JavaScript, but not desired by the original creator of the class. The output of this code would be countUp : 17. TypeScript, however, introduces the public and private keywords that can be used on class member variables. Trying to access a class member variable that has been marked as private will generate a compile-time error. As an example of this, the JavaScript code above can be written in TypeScript, as follows:

class CountClass {
    private _count: number;
    constructor() {
        this._count = 0;
    }
    countUp() {
        this._count ++;
    }
    getCount() {
        return this._count;
    }
}

var countInstance = new CountClass();
countInstance._count = 17;

On the second line of our code snippet, we have declared a private member variable named _count. Again, we have a constructor, a countUp, and a getCount function. If we compile this file, the compiler will generate an error:

hello.ts(39,15): error TS2341: Property '_count' is private and only accessible within class 'CountClass'.

This error is generated because we are trying to access the private variable _count in the last line of the code. The TypeScript compiler, therefore, is helping us to adhere to public and private accessors by generating a compile error when we inadvertently break this rule.
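The fix is simply to interact with the counter through its public methods instead of touching _count directly. The following lines are not from the original article; they are just a minimal sketch of how the client code could be rewritten against the CountClass defined above:

var countInstance = new CountClass();
countInstance.countUp();  // mutate the counter through its public method
console.log("countUp : " + countInstance.getCount());  // prints "countUp : 1"

With this change, the private _count field remains an implementation detail of the class, which is exactly what the encapsulation principle asks for.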
Summary In this article, we took a quick look at what TypeScript is and what benefits it can bring to the JavaScript development experience. Resources for Article: Further resources on this subject: Introducing Object Oriented Programming with TypeScript [article] Understanding Patterns and Architectures in TypeScript [article] Writing SOLID JavaScript code with TypeScript [article]

Building Your First Odoo Application

Packt
02 Jan 2017
22 min read
In this article by Daniel Reis, the author of the book Odoo 10 Development Essentials, we will create our first Odoo application and learn the steps needed to make it available to Odoo and install it. (For more resources related to this topic, see here.) Inspired by the notable http://todomvc.com/ project, we will build a simple To-Do application. It should allow us to add new tasks, mark them as completed, and finally clear the task list of all the already completed tasks.

Understanding applications and modules

It's common to hear about Odoo modules and applications. But what exactly is the difference between them? Add-on modules are the building blocks for Odoo applications. A module can add new features to Odoo, or modify existing ones. It is a directory containing a manifest, or descriptor file, named __manifest__.py, plus the remaining files that implement its features. Applications are the way major features are added to Odoo. They provide the core elements for a functional area, such as Accounting or HR, based on which additional add-on modules modify or extend features. Because of this, they are highlighted in the Odoo Apps menu. If your module is complex, and adds new or major functionality to Odoo, you might consider creating it as an application. If your module just makes changes to existing functionality in Odoo, it is likely not an application. Whether a module is an application or not is defined in the manifest. Technically, it does not have any particular effect on how the add-on module behaves. It is only used for highlighting in the Apps list.

Creating the module basic skeleton

We should have the Odoo server at ~/odoo-dev/odoo/. To keep things tidy, we will create a new directory alongside it to host our custom modules, at ~/odoo-dev/custom-addons. Odoo includes a scaffold command to automatically create a new module directory, with a basic structure already in place. You can learn more about it with:

$ ~/odoo-dev/odoo/odoo-bin scaffold --help

You might want to keep this in mind when you start working on your next module, but we won't be using it right now, since we prefer to manually create all the structure for our module. An Odoo add-on module is a directory containing a __manifest__.py descriptor file. In previous versions, this descriptor file was named __openerp__.py. This name is still supported, but is deprecated. The module also needs to be Python-importable, so it must have an __init__.py file. The module's directory name is its technical name. We will use todo_app for it. The technical name must be a valid Python identifier: it should begin with a letter and can only contain letters, numbers, and the underscore character. The following commands create the module directory and an empty __init__.py file in it, ~/odoo-dev/custom-addons/todo_app/__init__.py. In case you would like to do that directly from the command line, this is what you would use:

$ mkdir ~/odoo-dev/custom-addons/todo_app
$ touch ~/odoo-dev/custom-addons/todo_app/__init__.py

Next, we need to create the descriptor file. It should contain only a Python dictionary with about a dozen possible attributes; of these, only the name attribute is required. A longer description attribute and the author attribute also have some visibility and are advised.
We should now add a __manifest__.py file alongside the __init__.py file with the following content:

{
    'name': 'To-Do Application',
    'description': 'Manage your personal To-Do tasks.',
    'author': 'Daniel Reis',
    'depends': ['base'],
    'application': True,
}

The depends attribute can have a list of other modules that are required. Odoo will have them automatically installed when this module is installed. It's not a mandatory attribute, but it's advised to always have it. If no particular dependencies are needed, we should depend on the core base module. You should be careful to ensure all dependencies are explicitly set here; otherwise, the module may fail to install in a clean database (due to missing dependencies) or have loading errors, if by chance the other required modules are loaded afterwards. For our application, we don't need any specific dependencies, so we depend on the base module only. To be concise, we chose to use very few descriptor keys, but in a real-world scenario, we recommend that you also use the additional keys, since they are relevant for the Odoo apps store:

summary: This is displayed as a subtitle for the module.
version: By default, this is 1.0. It should follow semantic versioning rules (see http://semver.org/ for details).
license: By default, this is LGPL-3.
website: This is a URL to find more information about the module. This can help people find more documentation or the issue tracker to file bugs and suggestions.
category: This is the functional category of the module, which defaults to Uncategorized. The list of existing categories can be found in the security groups form (Settings | User | Groups), in the Application field drop-down list.

These other descriptor keys are also available:

installable: It is True by default, but can be set to False to disable a module.
auto_install: If auto_install is set to True, this module will be automatically installed, provided all its dependencies are already installed. It is used for glue modules.

Since Odoo 8.0, instead of the description key, we can use a README.rst or README.md file in the module's top directory.

A word about licenses

Choosing a license for your work is very important, and you should consider carefully what is the best choice for you, and its implications. The most used licenses for Odoo modules are the GNU Lesser General Public License (LGPL) and the Affero General Public License (AGPL). The LGPL is more permissive and allows commercial derivative work, without the need to share the corresponding source code. The AGPL is a stronger open source license, and requires derivative work and service hosting to share their source code. Learn more about the GNU licenses at https://www.gnu.org/licenses/.

Adding to the add-ons path

Now that we have a minimalistic new module, we want to make it available to the Odoo instance. For that, we need to make sure the directory containing the module is in the add-ons path, and then update the Odoo module list. We will position ourselves in our work directory and start the server with the appropriate add-ons path configuration:

$ cd ~/odoo-dev
$ ./odoo/odoo-bin -d todo --addons-path="custom-addons,odoo/addons" --save

The --save option saves the options you used in a config file. This spares us from repeating them every time we restart the server: just run ./odoo-bin and the last saved options will be used. Look closely at the server log. It should have an INFO odoo: addons paths: [...] line. It should include our custom-addons directory.
Remember to also include any other add-ons directories you might be using. For instance, if you also have a ~/odoo-dev/extra directory containing additional modules to be used, you might want to include them also using the option: --addons-path="custom-addons,extra,odoo/addons" Now we need the Odoo instance to acknowledge the new module we just added. Installing the new module In the Apps top menu, select the Update Apps List option. This will update the module list, adding any modules that may have been added since the last update to the list. Remember that we need the developer mode enabled for this option to be visible. That is done in the Settings dashboard, in the link at the bottom right, below the Odoo version number information . Make sure your web client session is working with the right database. You can check that at the top right: the database name is shown in parenthesis, right after the user name. A way to enforce using the correct database is to start the server instance with the additional option --db-filter=^MYDB$. The Apps option shows us the list of available modules. By default it shows only application modules. Since we created an application module we don't need to remove that filter to see it. Type todo in the search and you should see our new module, ready to be installed. Now click on the module's Install button and we're ready! The Model layer Now that Odoo knows about our new module, let's start by adding a simple model to it. Models describe business objects, such as an opportunity, sales order, or partner (customer, supplier, and so on.). A model has a list of attributes and can also define its specific business. Models are implemented using a Python class derived from an Odoo template class. They translate directly to database objects, and Odoo automatically takes care of this when installing or upgrading the module. The mechanism responsible for this is Object Relational Model (ORM). Our module will be a very simple application to keep to-do tasks. These tasks will have a single text field for the description and a checkbox to mark them as complete. We should later add a button to clean the to-do list from the old completed tasks. Creating the data model The Odoo development guidelines state that the Python files for models should be placed inside a models subdirectory. For simplicity, we won't be following this here, so let's create a todo_model.py file in the main directory of the todo_app module. Add the following content to it: # -*- coding: utf-8 -*- from odoo import models, fields class TodoTask(models.Model): _name = 'todo.task' _description = 'To-do Task' name = fields.Char('Description', required=True) is_done = fields.Boolean('Done?') active = fields.Boolean('Active?', default=True) The first line is a special marker telling the Python interpreter that this file has UTF-8 so that it can expect and handle non-ASCII characters. We won't be using any, but it's a good practice to have it anyway. The second line is a Python import statement, making available the models and fields objects from the Odoo core. The third line declares our new model. It's a class derived from models.Model. The next line sets the _name attribute defining the identifier that will be used throughout Odoo to refer to this model. Note that the actual Python class name , TodoTask in this case, is meaningless to other Odoo modules. The _name value is what will be used as an identifier. Notice that this and the following lines are indented. 
If you're not familiar with Python, you should know that this is important: indentation defines a nested code block, so these four lines should all be equally indented. Then we have the _description model attribute. It is not mandatory, but it provides a user friendly name for the model records, that can be used for better user messages. The last three lines define the model's fields. It's worth noting that name and active are special field names. By default, Odoo will use the name field as the record's title when referencing it from other models. The active field is used to inactivate records, and by default, only active records will be shown. We will use it to clear away completed tasks without actually deleting them from the database. Right now, this file is not yet used by the module. We must tell Python to load it with the module in the __init__.py file. Let's edit it to add the following line: from . import todo_model That's it. For our Python code changes to take effect the server instance needs to be restarted (unless it was using the --dev mode). We won't see any menu option to access this new model, since we didn't add them yet. Still we can inspect the newly created model using the Technical menu. In the Settings top menu, go to Technical | Database Structure | Models, search for the todo.task model on the list and then click on it to see its definition: If everything goes right, it is confirmed that the model and fields were created. If you can't see them here, try a server restart with a module upgrade, as described before. We can also see some additional fields we didn't declare. These are reserved fields Odoo automatically adds to every new model. They are as follows: id: A unique, numeric identifier for each record in the model. create_date and create_uid: These specify when the record was created and who created it, respectively. write_date and write_uid: These confirm when the record was last modified and who modified it, respectively. __last_update: This is a helper that is not actually stored in the database. It is used for concurrency checks. The View layer The View layer describes the user interface. Views are defined using XML, which is used by the web client framework to generate data-aware HTML views. We have menu items that can activate the actions that can render views. For example, the Users menu item processes an action also called Users, that in turn renders a series of views. There are several view types available, such as the list and form views, and the filter options made available are also defined by particular type of view, the search view. The Odoo development guidelines state that the XML files defining the user interface should be placed inside a views/ subdirectory. Let's start creating the user interface for our To-Do application. Adding menu items Now that we have a model to store our data, we should make it available on the user interface. For that we should add a menu option to open the To-do Task model so that it can be used. Create the views/todo_menu.xml file to define a menu item and the action performed by it: <?xml version="1.0"?> <odoo> <!-- Action to open To-do Task list --> <act_window id="action_todo_task" name="To-do Task" res_model="todo.task" view_mode="tree,form" /> <!-- Menu item to open To-do Task list --> <menuitem id="menu_todo_task" name="Todos" action="action_todo_task" /> </odoo> The user interface, including menu options and actions, is stored in database tables. 
The XML file is a data file used to load those definitions into the database when the module is installed or upgraded. The preceding code is an Odoo data file, describing two records to add to Odoo: The <act_window> element defines a client-side window action that will open the todo.task model with the tree and form views enabled, in that order The <menuitem> defines a top menu item calling the action_todo_task action, which was defined before Both elements include an id attribute. This id , also called an XML ID, is very important: it is used to uniquely identify each data element inside the module, and can be used by other elements to reference it. In this case, the <menuitem> element needs to reference the action to process, and needs to make use of the <act_window> id for that. Our module does not know yet about the new XML data file. This is done by adding it to the data attribute in the __manifest__.py file. It holds the list of files to be loaded by the module. Add this attribute to the descriptor's dictionary: 'data': ['views/todo_menu.xml'], Now we need to upgrade the module again for these changes to take effect. Go to the Todos top menu and you should see our new menu option available: Even though we haven't defined our user interface view, clicking on the Todos menu will open an automatically generated form for our model, allowing us to add and edit records. Odoo is nice enough to automatically generate them so that we can start working with our model right away. Odoo supports several types of views, but the three most important ones are: tree (usually called list views), form, and search views. We'll add an example of each to our module. Creating the form view All views are stored in the database, in the ir.ui.view model. To add a view to a module, we declare a <record> element describing the view in an XML file, which is to be loaded into the database when the module is installed. Add this new views/todo_view.xml file to define our form view: <?xml version="1.0"?> <odoo> <record id="view_form_todo_task" model="ir.ui.view"> <field name="name">To-do Task Form</field> <field name="model">todo.task</field> <field name="arch" type="xml"> <form string="To-do Task"> <group> <field name="name"/> <field name="is_done"/> <field name="active" readonly="1"/> </group> </form> </field> </record> </odoo> Remember to add this new file to the data key in manifest file, otherwise our module won't know about it and it won't be loaded. This will add a record to the ir.ui.view model with the identifier view_form_todo_task. The view is for the todo.task model and is named To-do Task Form. The name is just for information; it does not have to be unique, but it should allow one to easily identify which record it refers to. In fact the name can be entirely omitted, in that case it will be automatically generated from the model name and the view type. The most important attribute is arch, and contains the view definition, highlighted in the XML code above. The <form> tag defines the view type, and in this case contains three fields. We also added an attribute to the active field to make it read-only. Adding action buttons Forms can have buttons to perform actions. These buttons are able to trigger workflow actions, run window actions—such as opening another form, or run Python functions defined in the model. They can be placed anywhere inside a form, but for document-style forms, the recommended place for them is the <header> section. 
For our application, we will add two buttons to run the methods of the todo.task model: <header> <button name="do_toggle_done" type="object" string="Toggle Done" class="oe_highlight" /> <button name="do_clear_done" type="object" string="Clear All Done" /> </header> The basic attributes of a button comprise the following: The string attribute that has the text to be displayed on the button The type attribute referring to the action it performs The name attribute referring to the identifier for that action The class attribute, which is an optional attribute to apply CSS styles, like in regular HTML The complete form view At this point, our todo.task form view should look like this: <form> <header> <button name="do_toggle_done" type="object" string="Toggle Done" class="oe_highlight" /> <button name="do_clear_done" type="object" string="Clear All Done" /> </header> <sheet> <group name="group_top"> <group name="group_left"> <field name="name"/> </group> <group name="group_right"> <field name="is_done"/> <field name="active" readonly="1" /> </group> </group> </sheet> </form> Remember that for the changes to be loaded to our Odoo database, a module upgrade is needed. To see the changes in the web client, the form needs to be reloaded: either click again on the menu option that opens it or reload the browser page (F5 in most browsers). The action buttons won't work yet, since we still need to add their business logic. The business logic layer Now we will add some logic to our buttons. This is done with Python code, using the methods in the model's Python class. Adding business logic We should edit the todo_model.py Python file to add to the class the methods called by the buttons. First we need to import the new API, so add it to the import statement at the top of the Python file: from odoo import models, fields, api The action of the Toggle Done button will be very simple: just toggle the Is Done? flag. For logic on records, use the @api.multi decorator. Here, self will represent a recordset, and we should then loop through each record. Inside the TodoTask class, add this: @api.multi def do_toggle_done(self): for task in self: task.is_done = not task.is_done return True The code loops through all the to-do task records, and for each one, modifies the is_done field, inverting its value. The method does not need to return anything, but we should have it to at least return a True value. The reason is that clients can use XML-RPC to call these methods, and this protocol does not support server functions returning just a None value. For the Clear All Done button, we want to go a little further. It should look for all active records that are done and make them inactive. Usually, form buttons are expected to act only on the selected record, but in this case, we will want it also act on records other than the current one: @api.model def do_clear_done(self): dones = self.search([('is_done', '=', True)]) dones.write({'active': False}) return True On methods decorated with @api.model, the self variable represents the model with no record in particular. We will build a dones recordset containing all the tasks that are marked as done. Then, we set on the active flag to False on them. The search method is an API method that returns the records that meet some conditions. These conditions are written in a domain, which is a list of triplets. The write method sets the values at once on all the elements of a recordset. The values to write are described using a dictionary. 
Using write here is more efficient than iterating through the recordset to assign the value to each record one by one.

Set up access security

You might have noticed that, upon loading, our module is getting a warning message in the server log: The model todo.task has no access rules, consider adding one. The message is pretty clear: our new model has no access rules, so it can't be used by anyone other than the admin super user. As a super user, the admin ignores data access rules, and that's why we were able to use the form without errors. But we must fix this before other users can use our model. Another issue yet to be addressed is that we want the to-do tasks to be private to each user. Odoo supports row-level access rules, which we will use to implement that.

Adding access control security

To get a picture of what information is needed to add access rules to a model, use the web client and go to Settings | Technical | Security | Access Controls List. Here we can see the ACL for some models. It indicates, per security group, what actions are allowed on records. This information has to be provided by the module, using a data file to load the lines into the ir.model.access model. We will add full access to the Employee group on the model. Employee is the basic access group nearly everyone belongs to. This is done using a CSV file named security/ir.model.access.csv. Let's add it with the following content:

id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
access_todo_task_group_user,todo.task.user,model_todo_task,base.group_user,1,1,1,1

The filename corresponds to the model to load the data into, and the first line of the file has the column names. These are the columns provided by the CSV file:

id: This is the record external identifier (also known as XML ID). It should be unique in our module.
name: This is a description title. It is only informative and it's best if it's kept unique. Official modules usually use a dot-separated string with the model name and the group. Following this convention, we used todo.task.user.
model_id: This is the external identifier for the model we are giving access to. Models have XML IDs automatically generated by the ORM: for todo.task, the identifier is model_todo_task.
group_id: This identifies the security group to give permissions to. The most important ones are provided by the base module. The Employee group is such a case, and has the identifier base.group_user.

The last four perm fields flag whether to grant read, write, create, or unlink (delete) access. We must not forget to add the reference to this new file in the __manifest__.py descriptor's data attribute. It should look like this:

'data': [
    'security/ir.model.access.csv',
    'views/todo_view.xml',
    'views/todo_menu.xml',
],

As before, upgrade the module for these additions to take effect. The warning message should be gone, and we can confirm that the permissions are OK by logging in with the user demo (password is also demo). If we run our tests now, they should only fail the test_record_rule test case.
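That remaining failure concerns the row-level record rule mentioned earlier, which makes each to-do task visible only to the user who created it. The rule itself is not shown in this excerpt, but as a rough sketch (the file name security/todo_access_rules.xml and the XML ID below are assumptions, not taken from the original text), such a rule could be declared as an ir.rule data record along these lines:

<?xml version="1.0"?>
<odoo>
  <data noupdate="1">
    <!-- Hypothetical record rule: each user only sees their own to-do tasks -->
    <record id="todo_task_per_user_rule" model="ir.rule">
      <field name="name">ToDo Tasks only for owner</field>
      <field name="model_id" ref="model_todo_task"/>
      <field name="domain_force">[('create_uid', '=', user.id)]</field>
      <field name="groups" eval="[(4, ref('base.group_user'))]"/>
    </record>
  </data>
</odoo>

Like the ACL file, this data file would also need to be listed in the manifest's data attribute before it gets loaded.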
When changing XML or CSV files, an upgrade is needed; also, when in doubt, do both: restart the server and upgrade the modules. Resources for Article: Further resources on this subject: Getting Started with Odoo Development [Article] Introduction to Odoo [Article] Web Server Development [Article]