
How-To Tutorials


Benefits and Components of Docker

Packt
04 Nov 2016
19 min read
This article is by Jaroslaw Krochmalski, author of Developing with Docker. Docker was originally created as an internal tool by a Platform as a Service company called dotCloud, and was released as open source in March 2013. Apart from the Docker Inc. team, which is the main sponsor, other big names contribute to the tool—Red Hat, IBM, Microsoft, Google, and Cisco Systems, to name a few.

Software development today needs to be agile and react quickly to change. We use methodologies such as Scrum, estimate our work in story points, and attend daily stand-ups. But what about preparing our software for shipment and deployment? Let's see how Docker fits into that scenario and can help us be agile.

In this article, we will cover the following topics:

- The basic idea behind Docker
- The difference between virtualization and containerization
- The benefits of using Docker
- The components available to install

We will begin with the basic idea behind this wonderful tool.

The basic idea

The basic idea behind Docker is to pack an application with all of its dependencies (binaries, libraries, configuration files, scripts, jars, and so on) into a single, standardized unit for software development and deployment. Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, and system libraries—anything you can install on a server. This guarantees that it will always run in the same way, no matter what environment it is deployed in.

With Docker, you can build a Node.js or Java project (but you are of course not limited to those two) without having to install Node.js or Java on your host machine. Once you're done with it, you can just destroy the Docker image and it's as though nothing ever happened.

Docker is not a programming language or a framework; think of it instead as a tool that helps solve common problems such as installing, distributing, and managing software. It allows programmers and DevOps engineers to build, ship, and run their code anywhere. You might suspect that Docker is a virtualization engine, but it's far from that, as we will explain in a moment.

Containerization versus virtualization

To fully understand what Docker really is, we first need to understand the difference between traditional virtualization and containerization. Let's compare the two technologies now.

Traditional virtualization

A traditional virtual machine, which represents hardware-level virtualization, is basically a complete operating system running on top of the host operating system. There are two types of virtualization hypervisors: Type 1 and Type 2. Type 1 hypervisors provide server virtualization on bare-metal hardware—there is no traditional end-user operating system. Type 2 hypervisors, on the other hand, are commonly used for desktop virtualization—you run the virtualization engine on top of your own operating system.

Many use cases benefit from virtualization—the biggest asset is that you can run many virtual machines with totally different operating systems on a single host. Virtual machines are fully isolated, and hence very secure. But nothing comes without a price, and there are many drawbacks: virtual machines contain all the features that an operating system needs to have, such as device drivers and core system libraries.
They are heavyweight, usually resource-hungry, and not so easy to set up—virtual machines require a full installation and need more computing resources to execute. To run an application on a virtual machine, the hypervisor first has to import the virtual machine and then power it up, and this takes time. Furthermore, performance gets substantially degraded. As a result, only a few virtual machines can be provisioned and made available on a single machine.

Containerization

Docker software runs in an isolated environment called a Docker container. A Docker container is not a virtual machine in the popular sense; it represents operating-system-level virtualization. While each virtual machine image runs on an independent guest OS, Docker images run within the same operating system kernel. A container has its own filesystem and environment variables; it is self-sufficient. Because containers run within the same kernel, they utilize fewer system resources, and the base container can be, and usually is, very lightweight. It's worth knowing that Docker containers are isolated not only from the underlying OS, but from each other as well.

There is no overhead related to a classic virtualization hypervisor and a guest operating system, which allows almost bare-metal, near-native performance. The boot time of a dockerized application is usually very short due to the low overhead of containers. It is also possible to roll out hundreds of application containers in seconds, reducing the time needed to provision your software.

As you can see, Docker is quite different from traditional virtualization engines. Be aware that containers cannot substitute for virtual machines in all use cases; a thoughtful evaluation is still required to determine what is best for your application. Both solutions have their advantages. On one hand, we have the fully isolated, secure virtual machine with average performance; on the other, we have containers that are missing some key features (such as total isolation) but offer high performance and can be provisioned swiftly. Let's see what other benefits you get when using Docker containerization.

Benefits of using Docker

When comparing Docker containers with traditional virtual machines, we mentioned some of their advantages. Let's summarize them now in more detail and add some more.

Speed and size

As we said before, the first visible benefit of using Docker is very satisfactory performance and short provisioning time. You can create or destroy containers quickly and easily. Docker shares only the kernel—nothing less, nothing more—and reuses the image layers on which a specific image is built. Because of that, multiple versions of an application running in containers will be very lightweight. The result is faster deployment, easier migration, and nimble boot times.

Reproducible and portable builds

Using Docker enables you to deploy ready-to-run software that is portable and extremely easy to distribute. Your containerized application simply runs within its container; there's no need for a traditional installation. The key advantage of a Docker image is that it is bundled with all the dependencies the containerized application needs to run, so there is no separate dependency-installation step at all. This eliminates problems such as software and library conflicts, or even driver compatibility issues.
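As a hedged illustration of this run-anywhere idea (the image name and port mapping below are arbitrary examples, not taken from the article), fetching and starting a ready-to-run image takes just two commands:

    docker pull nginx                # download the image from a registry once
    docker run -d -p 8080:80 nginx   # run it detached, the same way on any Docker host

The same image runs identically on a laptop, a CI server, or a cloud instance, which is exactly the portability described above.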
Because of Docker's reproducible build environment, it's particularly well suited for testing, especially in a continuous integration flow. You can quickly boot up identical environments to run the tests, and because the container images are identical each time, you can distribute the workload and run tests in parallel without a problem. Developers can run the same image on their machines that will later run in production, which again is a huge advantage in testing. The use of Docker containers speeds up continuous integration: there are no more endless build-test-deploy cycles, because Docker containers ensure that applications run identically in development, test, and production environments.

One of Docker's greatest features is portability. Docker containers can run anywhere: your local machine, a nearby or distant server, and private or public clouds. All major cloud computing providers, such as Amazon Web Services and Google's Compute Platform, have embraced Docker and now support it. Docker containers can run inside an Amazon EC2 instance or a Google Compute Engine instance, provided that the host operating system supports Docker. A container running on an Amazon EC2 instance can easily be transferred to some other environment, achieving the same consistency and functionality. Docker also works very well with various other IaaS (Infrastructure-as-a-Service) providers, such as Microsoft Azure, IBM SoftLayer, or OpenStack. This additional level of abstraction from your infrastructure layer is an indispensable feature: you can develop your software without worrying about the platform it will later run on. It's truly a write-once-run-everywhere solution.

Immutable and agile infrastructure

Maintaining a truly idempotent configuration management code base can be a tricky and time-consuming process; the code grows over time and becomes more and more troublesome. That's why the idea of an immutable infrastructure is becoming more and more popular, and containerization comes to the rescue. By using containers during the development and deployment of your applications, you can simplify the process: with a lightweight Docker server that needs almost no configuration management, you manage your applications simply by deploying and redeploying containers to the server. And again, because the containers are very lightweight, this takes only seconds.

As a starting point, you can download a prebuilt Docker image from the Docker Hub, which is a repository of ready-to-use images. There are many choices of web servers, runtime platforms, databases, messaging servers, and so on—a real gold mine of software you can use for free as a base foundation for your own project.

The immutability of Docker images results from the way they are created. Docker makes use of a special file called a Dockerfile. This file contains all the setup instructions describing how to create an image: must-have components, libraries, exposed shared directories, network configuration, and so on. An image can be very basic, containing nothing but the operating system foundations, or—something that is more common—a whole prebuilt technology stack that is ready to launch. You can create images by hand, but it can also be an automated process. Docker creates images in a layered fashion: every feature you include is added as another layer in the base image.
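A minimal Dockerfile sketch of such a layered build might look like the following; the base image, package, and jar name are illustrative assumptions, not taken from the article:

    FROM ubuntu:16.04                                        # base layer: OS foundations
    RUN apt-get update && apt-get install -y openjdk-8-jre   # one layer: a Java runtime
    COPY app.jar /opt/app/app.jar                            # another layer: the application itself
    CMD ["java", "-jar", "/opt/app/app.jar"]                 # command run when the container starts

Each instruction produces a layer, and layers already built are cached and shared between images.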
This layered approach is another serious speed boost compared to traditional virtualization techniques.

Tools and APIs

Of course, Docker is not just a Dockerfile processor and a runtime engine. It's a complete package with a wide selection of tools and APIs that are helpful in a developer's or DevOps engineer's daily work. First of all, there's the Docker Toolbox, an installer to quickly and easily set up a Docker environment on your own machine. Kitematic is a desktop developer environment for using Docker on Windows and Mac OS X. The Docker distribution also contains a whole set of command-line tools. Let's look at them now.

Tools overview

On Windows, depending on the Windows version you use, there are two choices: Docker for Windows if you are on Windows 10 or later, or Docker Toolbox for all earlier versions of Windows. The same applies to macOS. The newest offering is Docker for Mac, which runs as a native Mac application and uses xhyve to virtualize the Docker Engine environment and the Linux kernel. For earlier versions of macOS that don't meet the Docker for Mac requirements, you should pick Docker Toolbox for Mac. The idea behind Docker Toolbox and the native Docker applications is the same: to virtualize the Linux kernel and Docker Engine on top of your operating system. We will be using Docker Toolbox, as it is more universal; it runs on all Windows and macOS versions.

The installation package for Windows and macOS is wrapped in an executable called the Docker Toolbox. The package contains all the tools you need to begin working with Docker. There are of course tons of additional third-party utilities compatible with Docker, some of them very useful, but for now let's focus on the default toolset. Before we start the installation, let's look at the tools that the installer package contains, to better understand what changes will be made to your system.

Docker Engine and Docker Engine client

Docker is a client-server application. It consists of the daemon that does the important job: it builds and downloads images, starts and stops containers, and so on. The daemon exposes a REST API that specifies interfaces for interacting with it and is used for remote management. The Docker client is a command-line program used to manage Docker hosts running Linux containers; it communicates with the Docker server through the REST API wrapper. You interact with Docker by using the client to send commands to the server, such as docker run to run an image, docker ps to list running containers, docker images to list images, and so on.

Docker Engine works only on Linux. If you want to run Docker on Windows or macOS, or want to provision multiple Docker hosts on a network or in the cloud, you will need Docker Machine.

Docker Machine

Docker Machine is a fairly new command-line tool created by the Docker team to manage Docker servers you can deploy containers to. It deprecated the old way of installing Docker with the Boot2Docker utility. Docker Machine eliminates the need to create virtual machines manually and install Docker on them before starting Docker containers; it handles the provisioning and installation process for you behind the scenes. In other words, it's a quick way to get a new virtual machine provisioned and ready to run Docker containers. This is an indispensable tool when developing a PaaS (Platform as a Service) architecture.
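A hedged sketch of the typical Docker Machine workflow (the machine name dev is an arbitrary example):

    docker-machine create --driver=virtualbox dev   # provision a VM with Docker Engine installed
    docker-machine ls                               # list the machines Docker Machine manages
    eval "$(docker-machine env dev)"                # point the Docker client at the new machine
    docker ps                                       # Docker commands now target that machine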
Docker Machine not only creates a new VM with the Docker Engine installed in it, but also sets up the certificate files for authentication and then configures the Docker client to talk to it. For flexibility, Docker Machine introduces the concept of drivers. Using drivers, Docker is able to communicate with various virtualization software and cloud providers. In fact, when you install Docker Toolbox on Windows or macOS, the default VirtualBox driver is used, and the following command is executed behind the scenes:

    docker-machine create --driver=virtualbox default

Another available driver is amazonec2 for Amazon Web Services; it can be used to install Docker in Amazon's cloud—we will do so later in this article. There are a lot of drivers ready to be used, and more are coming all the time. The list of official drivers, with their documentation, is always available at https://docs.docker.com/machine/drivers. At the moment, the list contains the following drivers:

- Amazon Web Services
- Microsoft Azure
- Digital Ocean
- Exoscale
- Google Compute Engine
- Generic
- Microsoft Hyper-V
- OpenStack
- Rackspace
- IBM Softlayer
- Oracle VirtualBox
- VMware vCloud Air
- VMware Fusion
- VMware vSphere

Apart from these, there are also many third-party driver plugins freely available on sites such as GitHub. You can find additional drivers for different cloud providers and virtualization platforms, such as OVH Cloud or Parallels for Mac—you are not limited to Amazon's AWS or Oracle's VirtualBox. As you can see, the choice is very broad. If you cannot find a specific driver for your cloud provider, try looking for it on GitHub.

When installing Docker Toolbox on Windows or macOS, Docker Machine is selected by default; it's mandatory and currently the only way to run Docker on these operating systems. Installing Docker Machine is not obligatory on Linux—there is no need to virtualize the Linux kernel there. However, if you want to deal with cloud providers, or just want a common runtime environment portable between macOS, Windows, and Linux, you can install Docker Machine on Linux as well. We will describe that process later in this article. Machine is also used behind the scenes by the graphical tool Kitematic, which we will present in a while. After the installation process, Docker Machine is available as a command-line tool: docker-machine.

Kitematic

Kitematic is a software tool you can use to run containers through a plain yet robust graphical user interface. In 2015, Docker acquired the Kitematic team, hoping to attract many more developers and open up the containerization solution to a wider, more general public. Kitematic is now included by default when installing Docker Toolbox on macOS and Windows. You can use it to comfortably search for and fetch the images you need from the Docker Hub, and the tool can also be used to run your own app containers. Using the GUI, you can edit environment variables, map ports, configure volumes, study logs, and get command-line access to the containers. It is worth mentioning that you can seamlessly switch between the Kitematic GUI and the command-line interface to run and manage application containers. Kitematic is very convenient; however, if you want more control when dealing with containers, or just want to use scripting, the command line will be a better solution.
In fact, Kitematic allows you to switch back and forth between the Docker CLI and the graphical interface; any changes you make on the command line are directly reflected in Kitematic. The tool is simple to use. At the end of this article, when we test our setup on a Mac or Windows PC, we will be using the command-line interface to work with Docker.

Docker Compose

Compose is a tool executed from the command line as docker-compose; it replaces the old fig utility. It's used to define and run multicontainer Docker applications. Although it's very easy to imagine a multicontainer application (such as a web server in one container and a database in another), it's not mandatory—if your application fits in a single Docker container, there will be no use for docker-compose. In real life, however, it's very likely that your application will span multiple containers. With docker-compose, you use a compose file to configure your application's services so they can be run together in an isolated environment; then, using a single command, you create and start all the services from your configuration. When it comes to multicontainer applications, docker-compose is great for development and testing, as well as continuous integration workflows (a sketch of a compose file appears at the end of this tools overview).

Oracle VirtualBox

Oracle VM VirtualBox is a free and open source hypervisor for x86 computers from Oracle. It is installed by default with the Docker Toolbox. It supports the creation and management of virtual machines running Windows, Linux, BSD, OS/2, Solaris, and so on. In our case, Docker Machine, using the VirtualBox driver, uses VirtualBox to create and boot a tiny Linux distribution capable of running the Docker Engine. It's worth mentioning that you can also boot this tiny virtualized Linux straight from VirtualBox itself: every Docker Machine you have created using docker-machine or Kitematic is visible and available to boot in VirtualBox when you run it directly. You can start, stop, reset, change settings, and read logs in the same way as for other virtualized operating systems, and you can of course use VirtualBox on Windows or Mac for purposes other than Docker.

Git

Git is a distributed version control system that is widely used for software development and other version control tasks. It emphasizes speed, data integrity, and support for distributed, non-linear workflows. Docker Machine and the Docker client follow Git's pull/push model for fetching needed dependencies from the network. For example, if you decide to run a Docker image that is not present on your local machine, Docker fetches that image from the Docker Hub. Docker doesn't internally use Git for any kind of resource versioning; it does, however, rely on hashing to uniquely identify filesystem layers, which is very similar to what Git does, and it took initial inspiration from Git's notions of commits, pushes, and pulls. Git is also included in the Docker Toolbox installation package.

From a developer's perspective, there are also tools especially useful in a programmer's daily job, be it the IntelliJ IDEA Docker Integration Plugin for Java fans or Visual Studio 2015 Tools for Docker for those who prefer C#. They let you download and build Docker images, create and start containers, and carry out other related tasks straight from your favorite IDE.
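As promised above, here is a hedged sketch of what a two-service docker-compose.yml of the web-server-plus-database kind might look like; the service names and images are illustrative assumptions, not taken from the article:

    version: '2'
    services:
      web:
        image: nginx
        ports:
          - "8080:80"
        depends_on:
          - db
      db:
        image: postgres

Running docker-compose up in the directory containing this file creates and starts both containers together, matching the single-command workflow described earlier.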
Apart from the tools included in Docker's distribution package (Docker Toolbox for older versions of Windows, or Docker for Windows and Docker for Mac), there are hundreds of third-party tools, such as Kubernetes and Helios (for Docker orchestration), Prometheus (for monitoring statistics), or Swarm and Shipyard (for managing clusters). As Docker attracts more and more attention, new Docker-related tools pop up almost every week.

These are not the only extras available to you; Docker also provides a set of APIs that can be very handy. One of them is the Remote API for the management of images and containers. Using this API, you can distribute your images to the runtime Docker engine; a container can be shifted to a different machine that runs Docker and executed there without compatibility concerns. This may be especially useful when creating PaaS (Platform-as-a-Service) architectures. There's also the Stats API, which exposes live resource usage information (such as CPU, memory, network I/O, and block I/O) for your containers. This API endpoint can be used to create tools that show how your containers behave, for example, on a production system.

Summary

By now we understand the difference between virtualization and containerization and, I hope, can see the advantages of using the latter. We also know what components are available for us to install and use. Let's begin our journey into the world of containers and go straight to the action by installing the software.


Three.js - Materials and Texture

Packt
06 Feb 2015
11 min read
This article is by Jos Dirksen, author of the book Three.js Cookbook. Three.js offers a large number of different materials and supports many different types of textures, which provide a great way to create interesting effects and graphics. In this article, we'll show you recipes that allow you to get the most out of these components provided by Three.js.

Using HTML canvas as a texture

Most often when you use textures, you use static images. With Three.js, however, it is also possible to create interactive textures. In this recipe, we will show you how you can use an HTML5 canvas element as an input for your texture. Any change to this canvas is automatically reflected, after you inform Three.js about the change, in the texture used on the geometry.

Getting ready

For this recipe, we need an HTML5 canvas element that can be displayed as a texture. We could create one ourselves and add some output, but for this recipe we've chosen something else: a simple JavaScript library that outputs a clock to a canvas element (see the 04.03-use-html-canvas-as-texture.html example for the resulting mesh). The JavaScript used to render the clock was based on the code from this site: http://saturnboy.com/2013/10/html5-canvas-clock/. To include the code that renders the clock in our page, we need to add the following to the head element:

    <script src="../libs/clock.js"></script>

How to do it...

To use a canvas as a texture, we need to perform a couple of steps:

1. The first thing we need to do is create the canvas element:

    var canvas = document.createElement('canvas');
    canvas.width = 512;
    canvas.height = 512;

Here, we create an HTML canvas element programmatically and define a fixed width and height.

2. Now that we've got a canvas, we need to render on it the clock that we use as the input for this recipe. The library is very easy to use; all you have to do is pass in the canvas element we just created:

    clock(canvas);

3. At this point, we've got a canvas that renders and updates an image of a clock. What we need to do now is create a geometry and a material, and use this canvas element as a texture for the material:

    var cubeGeometry = new THREE.BoxGeometry(10, 10, 10);
    var cubeMaterial = new THREE.MeshLambertMaterial();
    cubeMaterial.map = new THREE.Texture(canvas);
    var cube = new THREE.Mesh(cubeGeometry, cubeMaterial);

To create a texture from a canvas element, all we need to do is create a new instance of THREE.Texture and pass in the canvas element we created in step 1. We assign this texture to the cubeMaterial.map property, and that's it. If you run the recipe at this step, you will see the clock rendered on the sides of the cube; however, the clock won't update itself.

4. We need to tell Three.js that the canvas element has been changed. We do this by adding the following to the rendering loop:

    cubeMaterial.map.needsUpdate = true;

This informs Three.js that our canvas texture has changed and needs to be updated the next time the scene is rendered.

With these four simple steps, you can easily create interactive textures and use everything you can create on a canvas element as a texture in Three.js.

How it works...

How this works is actually pretty simple. Three.js uses WebGL to render scenes and apply textures. WebGL has native support for using an HTML canvas element as a texture, so Three.js just passes the provided canvas element on to WebGL, where it is processed like any other texture.
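For orientation, here is a hedged sketch of where that needsUpdate line sits in a typical requestAnimationFrame render loop; the renderer, scene, and camera setup are assumed to exist already, and the rotation line is just an illustrative animation:

    function render() {
      requestAnimationFrame(render);        // schedule the next frame
      cube.rotation.y += 0.01;              // illustrative animation
      cubeMaterial.map.needsUpdate = true;  // flag the canvas texture as changed
      renderer.render(scene, camera);       // draw the frame
    }
    render();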
Making part of an object transparent

You can create a lot of interesting visualizations using the various materials available with Three.js. In this recipe, we'll look at how you can use them to make part of an object transparent. This will allow you to create complex-looking geometries with relative ease.

Getting ready

Before we dive into the required steps in Three.js, we first need the texture that we will use to make an object partially transparent. For this recipe, we will use a texture created in Photoshop. You don't have to use Photoshop; the only thing you need to keep in mind is that you use an image with a transparent background. Using this texture, in this recipe we'll show you how you can create the effect shown in example 04.08-make-part-of-object-transparent.html: only part of the sphere is visible, and you can look through the sphere to see its back side.

How to do it...

Let's look at the steps you need to take to accomplish this:

1. The first thing we do is create the geometry. For this recipe, we use THREE.SphereGeometry:

    var sphereGeometry = new THREE.SphereGeometry(6, 20, 20);

Just like in all the other recipes, you can use whatever geometry you want.

2. In the second step, we create the material:

    var mat = new THREE.MeshPhongMaterial();
    mat.map = new THREE.ImageUtils.loadTexture(
        "../assets/textures/partial-transparency.png");
    mat.transparent = true;
    mat.side = THREE.DoubleSide;
    mat.depthWrite = false;
    mat.color = new THREE.Color(0xff0000);

As you can see in this fragment, we create THREE.MeshPhongMaterial and load the texture we saw in the Getting ready section of this recipe. To render this correctly, we also need to set the side property to THREE.DoubleSide so that the inside of the sphere is also rendered, and we need to set the depthWrite property to false. This tells WebGL that we still want to test our vertices against the WebGL depth buffer, but we don't write to it. You often need to set this to false when working with more complex transparent objects or particles.

3. Finally, add the sphere to the scene:

    var sphere = new THREE.Mesh(sphereGeometry, mat);
    scene.add(sphere);

With these simple steps, you can create really interesting effects by just experimenting with textures and geometries.

There's more

With Three.js, it is possible to repeat textures (refer to the Setup repeating textures recipe). You can use this to create interesting-looking objects. The code required to set a texture to repeat is the following:

    var mat = new THREE.MeshPhongMaterial();
    mat.map = new THREE.ImageUtils.loadTexture(
        "../assets/textures/partial-transparency.png");
    mat.transparent = true;
    mat.map.wrapS = mat.map.wrapT = THREE.RepeatWrapping;
    mat.map.repeat.set(4, 4);
    mat.depthWrite = false;
    mat.color = new THREE.Color(0x00ff00);

By changing the mat.map.repeat.set values, you define how often the texture is repeated.

Using a cubemap to create reflective materials

With the approach Three.js uses to render scenes in real time, it is difficult and very computationally intensive to create reflective materials. Three.js, however, provides a way to cheat and approximate reflectivity. For this, Three.js uses cubemaps. In this recipe, we'll explain how to create cubemaps and use them to create reflective materials.

Getting ready

A cubemap is a set of six images that can be mapped to the inside of a cube.
They can be created from a panoramic picture. In Three.js, we map such a cubemap onto the inside of a cube or sphere and use that information to calculate reflections; example 04.10-use-reflections.html shows what this looks like when rendered in Three.js. The objects in the center of the scene reflect the environment they are in—an environment often called a skybox.

To get ready, the first thing we need to do is get a cubemap. If you search on the Internet, you can find some ready-to-use cubemaps, but it is also very easy to create one yourself. For this, go to http://gonchar.me/panorama/, where you can upload a panoramic picture and have it converted to a set of pictures you can use as a cubemap. Perform the following steps:

1. First, get a 360-degree panoramic picture. Once you have one, upload it to the http://gonchar.me/panorama/ website by clicking on the large OPEN button.
2. Once uploaded, the tool converts the panoramic picture to a cubemap.
3. When the conversion is done, you can download the various cubemap sides. The recipe in this book uses the naming convention provided by the Cube map sides option, so download them. You'll end up with six images with names such as right.png, left.png, top.png, bottom.png, front.png, and back.png.

Once you've got the sides of the cubemap, you're ready to perform the steps in the recipe.

How to do it...

To use the cubemap we created in the previous section and create a reflecting material, we need to perform a fair number of steps, but it isn't that complex:

1. The first thing you need to do is create an array from the cubemap images you downloaded:

    var urls = [
      '../assets/cubemap/flowers/right.png',
      '../assets/cubemap/flowers/left.png',
      '../assets/cubemap/flowers/top.png',
      '../assets/cubemap/flowers/bottom.png',
      '../assets/cubemap/flowers/front.png',
      '../assets/cubemap/flowers/back.png'
    ];

2. With this array, we can create a cubemap texture like this:

    var cubemap = THREE.ImageUtils.loadTextureCube(urls);
    cubemap.format = THREE.RGBFormat;

3. From this cubemap, we can use THREE.BoxGeometry and a custom THREE.ShaderMaterial object to create a skybox (the environment surrounding our meshes):

    var shader = THREE.ShaderLib["cube"];
    shader.uniforms["tCube"].value = cubemap;
    var material = new THREE.ShaderMaterial({
      fragmentShader: shader.fragmentShader,
      vertexShader: shader.vertexShader,
      uniforms: shader.uniforms,
      depthWrite: false,
      side: THREE.DoubleSide
    });

    // create the skybox
    var skybox = new THREE.Mesh(new THREE.BoxGeometry(10000, 10000, 10000), material);
    scene.add(skybox);

Three.js provides a custom shader (a piece of WebGL code) that we can use for this. As you can see in the code snippet, to use this WebGL code we need to define a THREE.ShaderMaterial object. With this material, we create a giant THREE.BoxGeometry object that we add to the scene.

4. Now that we've created the skybox, we can define the reflecting objects:

    var sphereGeometry = new THREE.SphereGeometry(4, 15, 15);
    var envMaterial = new THREE.MeshBasicMaterial({envMap: cubemap});
    var sphere = new THREE.Mesh(sphereGeometry, envMaterial);

As you can see, we also pass the cubemap we created as a property (envMap) to the material. This informs Three.js that this object is positioned inside a skybox defined by the images that make up the cubemap.
5. The last step is to add the object to the scene, and that's it:

    scene.add(sphere);

In the example at the beginning of this recipe, you saw three geometries. You can use this approach with all different types of geometries; Three.js will determine how to render the reflective area.

How it works...

Three.js itself doesn't really do that much to render the cubemap object; it relies on standard functionality provided by WebGL. In WebGL, there is a construct called samplerCube. With samplerCube, you can sample, based on a specific direction, the color that matches the cubemap object. Three.js uses this to determine the color value for each part of the geometry. The result is that on each mesh, you can see a reflection of the surrounding cubemap, using the WebGL textureCube function. In Three.js, this results in the following call (taken from the WebGL shader in GLSL):

    vec4 cubeColor = textureCube(tCube, vec3(-vReflect.x, vReflect.yz));

A more in-depth explanation of how this works can be found at http://codeflow.org/entries/2011/apr/18/advanced-webgl-part-3-irradiance-environment-map/#cubemap-lookup.

There's more...

In this recipe, we created the cubemap object by providing six separate images. There is, however, an alternative: if you've got a 360-degree panoramic image, you can use the following code to create a cubemap object directly from that image:

    var texture = THREE.ImageUtils.loadTexture('360-degrees.png', new THREE.UVMapping());

Normally when you create a cubemap object, you use the code shown in this recipe to map it to a skybox. This usually gives the best results but requires some extra code. You can also use THREE.SphereGeometry to create a skybox like this:

    var mesh = new THREE.Mesh(
      new THREE.SphereGeometry(500, 60, 40),
      new THREE.MeshBasicMaterial({ map: texture }));
    mesh.scale.x = -1;

This applies the texture to a sphere and, with mesh.scale, turns the sphere inside out.

Besides reflection, you can also use a cubemap object for refraction (think of light bending through water drops or glass objects). All you have to do to make a refractive material is load the cubemap object like this:

    var cubemap = THREE.ImageUtils.loadTextureCube(urls, new THREE.CubeRefractionMapping());

And define the material in the following way:

    var envMaterial = new THREE.MeshBasicMaterial({envMap: cubemap});
    envMaterial.refractionRatio = 0.95;

Summary

In this article, we learned about the different textures and materials supported by Three.js.


Implement Reinforcement learning using Markov Decision Process [Tutorial]

Fatema Patrawala
09 Jul 2018
11 min read
The Markov decision process, better known as MDP, is an approach in reinforcement learning for taking decisions in a gridworld environment. A gridworld environment consists of states in the form of grids. The MDP tries to capture a world in the form of a grid by dividing it into states, actions, models/transition models, and rewards. The solution to an MDP is called a policy, and the objective is to find the optimal policy for that MDP task. Thus, any reinforcement learning task composed of a set of states, actions, and rewards that follows the Markov property would be considered an MDP. In this tutorial, we will dig deep into MDPs, states, actions, rewards, policies, and how to solve them using Bellman equations. This article is a reinforcement learning tutorial taken from the book Reinforcement Learning with TensorFlow.

Markov decision processes

An MDP is defined as the collection of the following:

- States: S
- Actions: A(s), A
- Transition model: T(s,a,s') ~ P(s'|s,a)
- Rewards: R(s), R(s,a), R(s,a,s')
- Policy: \(\pi\), where \(\pi^*\) is the optimal policy

In the case of an MDP, the environment is fully observable; that is, whatever observation the agent makes at any point in time is enough to make an optimal decision. In the case of a partially observable environment, the agent needs a memory to store past observations in order to make the best possible decisions.

Let's try to break this down into different lego blocks to understand what the overall process means.

The Markov property

In short, as per the Markov property, in order to know the information of the near future (say, at time t+1), the present information at time t is what matters. Given a sequence \(s_0, s_1, \ldots, s_t\), the first order of Markov says \(P(s_{t+1} \mid s_t, s_{t-1}, \ldots, s_0) = P(s_{t+1} \mid s_t)\); that is, \(s_{t+1}\) depends only on \(s_t\). The second order of Markov says \(P(s_{t+1} \mid s_t, s_{t-1}, \ldots, s_0) = P(s_{t+1} \mid s_t, s_{t-1})\); that is, \(s_{t+1}\) depends only on \(s_t\) and \(s_{t-1}\). In our context, we will follow the first order of the Markov property from now on. Therefore, we can convert any process to a Markov property if the probability of the new state \(s_{t+1}\) depends only on the current state \(s_t\), such that the current state captures and remembers the property and knowledge from the past. Thus, as per the Markov property, the world (that is, the environment) is considered to be stationary; that is, the rules of the world are fixed.

The S state set

The S state set is a set of different states, represented as s, which constitute the environment. States are the feature representation of the data obtained from the environment. Thus, any input from the agent's sensors can play an important role in state formation. State spaces can be either discrete or continuous. The agent starts from the start state and has to reach the goal state along the most optimized path, without ending up in bad states (such as the red-colored state described below).

Consider a gridworld with 12 discrete states, where the green-colored grid is the goal state, red is the state to avoid, and black is a wall that you'll bounce back from if you hit it head on. The states can be represented as 1, 2, ....., 12, or by coordinates (1,1), (1,2), ..... (3,4).

Actions

The actions are the things an agent can perform or execute in a particular state. In other words, actions are the sets of things an agent is allowed to do in the given environment. Like states, actions can also be either discrete or continuous.
Consider the same gridworld example, having 12 discrete states and 4 discrete actions (UP, DOWN, RIGHT, and LEFT). The action space here is a discrete set, \(a \in A\), where A = {UP, DOWN, RIGHT, LEFT}. Actions can also be treated as a function of state, that is, a = A(s), where, depending on the state, it is decided which actions are possible.

Transition model

The transition model T(s, a, s') is a function of three variables—the current state (s), the action (a), and the new state (s')—and defines the rules of the game in the environment. It gives the probability P(s'|s, a), that is, the probability of landing in the new state s' given that the agent takes action a in state s. The transition model plays a crucial role in a stochastic world, unlike a deterministic world, where the probability of any landing state other than the determined one is zero.

Let's consider the same environment, with actions \(a \in A\) where A = {UP, DOWN, RIGHT, LEFT}, and compare the deterministic and stochastic cases:

- Deterministic environment: In a deterministic environment, if you take a certain action, say UP, you will certainly perform that action, with probability 1.
- Stochastic environment: In a stochastic environment, if you take the same action, say UP, there will be a certain probability, say 0.8, of actually performing the given action, and a probability of 0.1 each of performing an action perpendicular to it (either LEFT or RIGHT). Here, for the s state and the UP action, the transition model gives T(s, UP, s') = P(s'|s, UP) = 0.8.

Since T(s,a,s') ~ P(s'|s,a), the probability of the new state depends only on the current state and action, and on none of the past states; thus, the transition model follows the first-order Markov property. We can also say that our universe is a stochastic environment, since the universe is composed of atoms that are in different states defined by position and velocity; actions performed by each atom change their states and cause changes in the universe.

Rewards

The reward of a state quantifies the usefulness of entering that state. There are three different forms of representing the reward, namely R(s), R(s, a), and R(s, a, s'), but they are all equivalent. For a particular environment, domain knowledge plays an important role in the assignment of rewards for different states, as even minor changes in the reward matter when searching for the optimal solution to an MDP problem.

There are two approaches to rewarding our agent for taking a certain action:

- Credit assignment problem: We look at the past and check which actions led to the present reward, that is, which action gets the credit.
- Delayed rewards: In contrast, in the present state, we check which action to take that will lead us to potential rewards.

Delayed rewards form the idea of foresight planning; this concept is used to calculate the expected reward for different states. We will discuss this in the later sections.

Policy

Until now, we have covered the blocks that make up an MDP problem—states, actions, transition models, and rewards—and now comes the solution. The policy is the solution to an MDP problem: a function that takes the state as input and outputs the action to be taken. Therefore, the policy is a command that the agent has to obey.
\(\pi^*\) is called the optimal policy: among all policies, the optimal policy is the one that maximizes the amount of reward received, or expected to be received, over a lifetime. For an MDP, there's no end of the lifetime, so you have to decide the end time. Thus, the policy is nothing but a guide telling which action to take for a given state. It is not a plan, but it uncovers the underlying plan of the environment by returning the action to take for each state.

The Bellman equations

Since the optimal policy is the policy that maximizes the expected rewards,

    \(\pi^* = \arg\max_{\pi} E\left[\sum_{t \ge 0} \gamma^t R(s_t) \mid \pi\right]\)

where the expectation is over the rewards obtained from the sequence of states the agent observes if it follows the \(\pi\) policy; \(\arg\max_{\pi}\) thus outputs the policy that has the highest expected reward.

Similarly, we can also calculate the utility of a policy for a state: if we are at state s, given a \(\pi\) policy, the utility of the \(\pi\) policy for the s state, \(U^{\pi}(s)\), is the expected reward from that state onward:

    \(U^{\pi}(s) = E\left[\sum_{t \ge 0} \gamma^t R(s_t) \mid \pi, s_0 = s\right]\)

The immediate reward of the state, R(s), is different from the utility U(s) of the state (that is, the utility of the optimal policy of the state) because of the concept of delayed rewards. From now on, the utility of a state will refer to the utility of the optimal policy of that state. Moreover, the optimal policy can also be regarded as the policy that maximizes the expected utility. Therefore,

    \(\pi^*(s) = \arg\max_a \sum_{s'} T(s,a,s')\, U(s')\)

where T(s,a,s') is the transition probability, P(s'|s,a), and U(s') is the utility of the new landing state after action a is taken in state s. The sum \(\sum_{s'} T(s,a,s')\, U(s')\) runs over all possible new-state outcomes for a particular action; whichever action gives the maximum value of this sum is considered part of the optimal policy, and thereby the utility of the s state is given by the following Bellman equation:

    \(U(s) = R(s) + \gamma \max_a \sum_{s'} T(s,a,s')\, U(s')\)

where R(s) is the immediate reward and \(\gamma \max_a \sum_{s'} T(s,a,s')\, U(s')\) is the reward from the future—the discounted utilities of the states the agent can reach from the given state s if action a is taken.

Solving the Bellman equation to find policies

Say we have n states in the given environment. Looking at the Bellman equation, we have n equations in n unknowns, but the \(\max_a\) makes the system non-linear, so we cannot solve it as a set of linear equations. Therefore, in order to solve it:

- Start with an arbitrary utility.
- Update the utilities based on the neighborhood until convergence; that is, update the utility of each state using the Bellman equation, based on the utilities of the landing states reachable from it.

Iterating this multiple times converges to the true values of the states. This process of iterating to convergence towards the true values is called value iteration. For the terminal states, where the game ends, the utility equals the immediate reward the agent receives on entering the terminal state.

Let's try to understand this by working through an example.

An example of value iteration using the Bellman equation

Consider the following environment and the given information: A, C, and X are the names of some states. The green-colored state is the goal state, G, with a reward of +1. The red-colored state is the bad state, B, with a reward of -1; try to prevent your agent from entering this state. Thus, the green and red states are the terminal states; enter either and the game is over.
If the agent encounters the green state, that is, the goal state, the agent wins, while if it enters the red state, the agent loses the game. The remaining given information (the exact values were lost in extraction; they are reconstructed here from the arithmetic that follows): the discount factor is \(\gamma = 0.5\); the reward for all states except the G and B states is R(s) = -0.04; and the utility at the first time step is 0, except for the G and B states. The transition probability T(s,a,s') equals 0.8 if going in the desired direction, and 0.1 each for the two directions perpendicular to it. For example, if the action is UP, then with 0.8 probability the agent goes UP, but with 0.1 probability it goes RIGHT and with 0.1 LEFT.

Questions:

- Find \(U_1(X)\), the utility of the X state at time step 1, that is, after the agent goes through one iteration.
- Similarly, find \(U_2(X)\).

Solution:

R(X) = -0.04. For each action from X, sum T(s,a,s') x U(s') over the reachable states:

    Action a | s' | T(s,a,s') | U0(s') | T x U
    RIGHT    | G  | 0.8       | +1     | 0.8 x 1 = 0.8
    RIGHT    | C  | 0.1       | 0      | 0.1 x 0 = 0
    RIGHT    | X  | 0.1       | 0      | 0.1 x 0 = 0

Thus, for action a = RIGHT, the sum is 0.8.

    Action a | s' | T(s,a,s') | U0(s') | T x U
    DOWN     | C  | 0.8       | 0      | 0.8 x 0 = 0
    DOWN     | G  | 0.1       | +1     | 0.1 x 1 = 0.1
    DOWN     | A  | 0.1       | 0      | 0.1 x 0 = 0

Thus, for action a = DOWN, the sum is 0.1.

    Action a | s' | T(s,a,s') | U0(s') | T x U
    UP       | X  | 0.8       | 0      | 0.8 x 0 = 0
    UP       | G  | 0.1       | +1     | 0.1 x 1 = 0.1
    UP       | A  | 0.1       | 0      | 0.1 x 0 = 0

Thus, for action a = UP, the sum is 0.1.

    Action a | s' | T(s,a,s') | U0(s') | T x U
    LEFT     | A  | 0.8       | 0      | 0.8 x 0 = 0
    LEFT     | X  | 0.1       | 0      | 0.1 x 0 = 0
    LEFT     | C  | 0.1       | 0      | 0.1 x 0 = 0

Thus, for action a = LEFT, the sum is 0.

Therefore, among all actions, \(\max_a \sum_{s'} T(s,a,s')\, U_0(s') = 0.8\), for a = RIGHT. Therefore,

    \(U_1(X) = R(X) + \gamma \times 0.8 = -0.04 + 0.5 \times 0.8 = 0.36\)

Similarly, calculating the other states gives \(U_1(A) = -0.04\) and \(U_1(C) = -0.04\).

Since \(U_1(X) = 0.36\), \(U_1(A) = -0.04\), \(U_1(C) = -0.04\), and R(X) = -0.04:

    Action a | s' | T(s,a,s') | U1(s') | T x U1
    RIGHT    | G  | 0.8       | +1     | 0.8 x 1 = 0.8
    RIGHT    | C  | 0.1       | -0.04  | 0.1 x -0.04 = -0.004
    RIGHT    | X  | 0.1       | 0.36   | 0.1 x 0.36 = 0.036

Thus, for action a = RIGHT, the sum is 0.832.

    Action a | s' | T(s,a,s') | U1(s') | T x U1
    DOWN     | C  | 0.8       | -0.04  | 0.8 x -0.04 = -0.032
    DOWN     | G  | 0.1       | +1     | 0.1 x 1 = 0.1
    DOWN     | A  | 0.1       | -0.04  | 0.1 x -0.04 = -0.004

Thus, for action a = DOWN, the sum is 0.064.

    Action a | s' | T(s,a,s') | U1(s') | T x U1
    UP       | X  | 0.8       | 0.36   | 0.8 x 0.36 = 0.288
    UP       | G  | 0.1       | +1     | 0.1 x 1 = 0.1
    UP       | A  | 0.1       | -0.04  | 0.1 x -0.04 = -0.004

Thus, for action a = UP, the sum is 0.384.

    Action a | s' | T(s,a,s') | U1(s') | T x U1
    LEFT     | A  | 0.8       | -0.04  | 0.8 x -0.04 = -0.032
    LEFT     | X  | 0.1       | 0.36   | 0.1 x 0.36 = 0.036
    LEFT     | C  | 0.1       | -0.04  | 0.1 x -0.04 = -0.004

Thus, for action a = LEFT, the sum is 0.

Therefore, among all actions, the maximum is 0.832, for a = RIGHT. Therefore,

    \(U_2(X) = R(X) + \gamma \times 0.832 = -0.04 + 0.5 \times 0.832 = 0.376\)

Therefore, the answers to the preceding questions are \(U_1(X) = 0.36\) and \(U_2(X) = 0.376\).

Policy iteration

The process of obtaining optimal utility by iterating over the policy, and updating the policy itself instead of the value until the policy converges to the optimum, is called policy iteration. The process is as follows:

- Start with a random policy, \(\pi_0\).
- For the given policy \(\pi_t\) at iteration step t, calculate the utilities using \(U^{\pi_t}(s) = R(s) + \gamma \sum_{s'} T(s, \pi_t(s), s')\, U^{\pi_t}(s')\).
- Improve the policy by \(\pi_{t+1}(s) = \arg\max_a \sum_{s'} T(s,a,s')\, U^{\pi_t}(s')\).

This ends this reinforcement learning tutorial. To implement state-of-the-art reinforcement learning algorithms from scratch, see the book Reinforcement Learning with TensorFlow.
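To complement the hand calculation above, here is a minimal Python sketch of value iteration on a gridworld of this kind. The grid layout, coordinates, and helper names are illustrative assumptions, not code from the book; only the update rule U(s) = R(s) + gamma * max_a sum T(s,a,s')U(s') and the 0.8/0.1/0.1 transition model come from the text.

    # Minimal value-iteration sketch; the layout and names below are assumptions.
    GAMMA = 0.5          # discount factor used in the worked example
    STEP_REWARD = -0.04  # reward for every non-terminal state
    ACTIONS = {'UP': (-1, 0), 'DOWN': (1, 0), 'LEFT': (0, -1), 'RIGHT': (0, 1)}
    PERPENDICULAR = {'UP': ('LEFT', 'RIGHT'), 'DOWN': ('LEFT', 'RIGHT'),
                     'LEFT': ('UP', 'DOWN'), 'RIGHT': ('UP', 'DOWN')}

    # 3x4 grid: 'G' goal (+1), 'B' bad (-1), '#' wall, '.' ordinary state.
    GRID = ["...G",
            ".#.B",
            "...."]
    TERMINALS = {'G': 1.0, 'B': -1.0}

    def states():
        return [(r, c) for r in range(len(GRID)) for c in range(len(GRID[0]))
                if GRID[r][c] != '#']

    def move(s, a):
        """Deterministic move; bounce back off walls and grid edges."""
        r, c = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
        if 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] != '#':
            return (r, c)
        return s

    def transitions(s, a):
        """T(s, a, s'): 0.8 in the desired direction, 0.1 to each side."""
        side1, side2 = PERPENDICULAR[a]
        return [(move(s, a), 0.8), (move(s, side1), 0.1), (move(s, side2), 0.1)]

    def value_iteration(iterations=50):
        # Terminal utilities equal their immediate rewards; all others start at 0.
        U = {s: TERMINALS.get(GRID[s[0]][s[1]], 0.0) for s in states()}
        for _ in range(iterations):
            new_U = {}
            for s in states():
                cell = GRID[s[0]][s[1]]
                if cell in TERMINALS:
                    new_U[s] = TERMINALS[cell]
                    continue
                best = max(sum(p * U[s2] for s2, p in transitions(s, a))
                           for a in ACTIONS)
                new_U[s] = STEP_REWARD + GAMMA * best  # Bellman update
            U = new_U
        return U

    print(value_iteration())

Running this repeats the per-state update performed by hand above until the utilities stop changing, which is exactly the convergence that value iteration relies on.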


How to run Lambda functions on AWS Greengrass

Vijin Boricha
27 Apr 2018
7 min read
AWS Greengrass is a form of edge computing service that extends the cloud's functionality to your IoT devices by allowing data collection and analysis closer to its point of origin. This is accomplished by executing AWS Lambda functions locally on the IoT device itself, while still using the cloud for management and analytics. Today, we will learn how to leverage AWS Greengrass to run simple Lambda functions on an IoT device.

How does this help a business? Well, to start with, using AWS Greengrass you are now able to respond to locally generated events in near real time. With Greengrass, you can program your IoT devices to locally process and filter data and only transmit the important chunks back to AWS for analysis. This also has a direct impact on costs, as well as on the amount of data transmitted back to the cloud.

Here are the core components of AWS Greengrass:

- Greengrass Core (GGC) software: The Greengrass Core software is a packaged module that consists of a runtime allowing local execution of Lambda functions. It also contains an internal message broker and a deployment agent that periodically notifies the AWS Greengrass service about the device's configuration, state, available updates, and so on. The software also ensures that the connection between the device and the IoT service is secured with the help of keys and certificates.
- Greengrass groups: A Greengrass group is a collection of Greengrass Core settings and definitions that are used to manage one or more Greengrass-backed IoT devices. A group comprises a few other components, namely:
  - Greengrass group definition: A collection of information about your Greengrass group.
  - Device definition: A collection of IoT devices that are part of a Greengrass group.
  - Greengrass group settings: Connection and configuration information, along with the IAM roles required for interacting with other AWS services.
  - Greengrass Core: The IoT device itself.
  - Lambda functions: A list of Lambda functions that can be deployed to the Greengrass Core.
  - Subscriptions: A combination of a message source, a message target, and an MQTT topic over which to transmit the messages. The source and target can each be the IoT service, a Lambda function, or even the IoT device itself.
- Greengrass Core SDK: Greengrass also provides an SDK that you can use to write and run Lambda functions on Greengrass Core devices. The SDK currently supports Java 8, Python 2.7, and Node.js 6.10.

With this key information in mind, let's go ahead and deploy our very own Greengrass Core on an IoT device.

Running Lambda functions on AWS Greengrass

With the Greengrass Core software up and running on your IoT device, we can now go ahead and run a simple Lambda function on it! For this particular section, we will be leveraging an AWS Lambda blueprint that prints a simple Hello World message.

1. To get started, we first need to create our Lambda function. From the AWS Management Console, filter for the Lambda service using the Filter option or, alternatively, go to https://console.aws.amazon.com/lambda/home. Ensure that the Lambda function is launched in the same region as the AWS Greengrass deployment; in this case, we are using the US-East-1 (N. Virginia) region.
2. On the AWS Lambda console landing page, select the Create function option to get started.
3. Since we are going to leverage an existing function blueprint for this use case, select the Blueprints option provided on the Create function page.
4. Use the filter to find a blueprint with the name greengrass-hello-world. Two templates currently match this name: one function is based on Python, the other on Node.js. For this particular section, select the greengrass-hello-world Python function and click on Configure to proceed.
5. Fill out the required details for the new function, such as a Name, followed by a valid Role. Here, select the Create new role from template option, provide a suitable Role name, and, from the Policy templates drop-down list, select the AWS IoT Button Permissions role. Once completed, click on Create function to complete the function's creation process.
6. Before associating this function with your AWS Greengrass group, you also need to publish a version of it. Select the Publish new version option from the Actions tab, provide a suitable Version description, and click on Publish. Your function is now ready for AWS Greengrass.
7. Now, head back to the AWS IoT dashboard and select the newly deployed Greengrass group from the Groups option on the navigation pane. From the Greengrass group page, select the Lambdas option from the navigation pane, followed by the Add Lambda option.
8. On the Add a Lambda to your Greengrass group page, you can choose either to Create a new Lambda function or to Use an existing Lambda function. Since we have already created our function, select the Use existing function option. On the next page, select your Greengrass Lambda function and click Next to proceed. Finally, select the version of the deployed function and click on Finish.
9. To finish things off, we need to create a new subscription between the Lambda function (source) and the AWS IoT service (destination). Select the Subscriptions option from the same Greengrass group page and click on Add Subscription to proceed.
10. On the Select your source and target page, select the newly deployed Lambda function as the source, followed by the IoT Cloud as the target, and click Next. You can also provide an optional topic filter to filter messages published on the messaging queue; in this case, we provide a simple hello/world filter. Click on Finish to complete the subscription configuration.
11. With all the pieces in place, it's now time to deploy our Lambda function to the Greengrass Core. To do so, select the Deployments option and, from the Actions drop-down list, select the Deploy option. The deployment takes a few seconds to complete; verify its status by viewing the Status column, which should show Successfully completed.
12. With the function now deployed, test the setup by using the MQTT client provided by AWS IoT, as done before. Remember to enter the same hello/world topic name in the subscription topic field and click on Publish to topic. If all goes well, you should receive a custom Hello World message from the Greengrass Core.

This was just a high-level view of what you can achieve with Greengrass and Lambda. You can leverage Lambda to perform all kinds of preprocessing of data on your IoT device itself, saving a tremendous amount of time as well as costs.
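For readers curious about what gets deployed, the greengrass-hello-world Python blueprint looks roughly like the following sketch; it is reproduced from memory, so treat the exact details as approximate rather than authoritative:

    # Approximate sketch of the greengrass-hello-world Python blueprint.
    import platform
    from threading import Timer

    import greengrasssdk

    # Client for publishing messages back to AWS IoT from the Core.
    client = greengrasssdk.client('iot-data')

    def greengrass_hello_world_run():
        # Publish on the same topic used in the subscription above.
        client.publish(
            topic='hello/world',
            payload='Hello world! Sent from Greengrass Core running on '
                    'platform: {}'.format(platform.platform()))
        # Schedule the next message five seconds from now.
        Timer(5, greengrass_hello_world_run).start()

    greengrass_hello_world_run()

    def function_handler(event, context):
        # This is a long-lived function; the handler itself is unused here.
        return

This is why the MQTT test above succeeds: the function runs continuously on the Core and publishes to hello/world, the topic our subscription routes to the IoT Cloud.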
We leveraged AWS Greengrass and Lambda to develop a cost-effective and speedy solution. Stay tuned for our next post, where we will look at ways to effectively monitor IoT devices. You read an excerpt from the book AWS Administration - The Definitive Guide - Second Edition, written by Yohan Wadia. Whether you are a seasoned system admin or a rookie, this book will help you learn all the skills you need to work with the AWS cloud.


Working with the Vue-router plugin for SPAs

Pravin Dhandre
07 Jun 2018
5 min read
Single-Page Applications (SPAs) are web applications that load a single HTML page and update that page dynamically based on the user's interaction with the app. SPAs use AJAX and HTML5 to create fluid and responsive web applications without constant page reloads. In this tutorial, we will show a step-by-step approach to installing the extremely powerful Vue-router plugin to build Single-Page Applications. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.

Similar to how you add Vue and Vuex to your applications, you can either include the library directly from unpkg, or head to https://unpkg.com/Vue-router and download a local copy for yourself. Add the JavaScript to a new HTML document, along with Vue and your application's JavaScript, and create an application container element, your view, as well. In the following example, the Vue-router JavaScript file has been saved as router.js.

Initialize VueRouter in your JavaScript application

We are now ready to add VueRouter and utilize its power. Before we do that, however, we need to create some very simple components which we can load and display based on the URL. As we are going to be loading the components with the router, we don't need to register them with Vue.component; instead, we create JavaScript objects with the same properties as a Vue component.

For this first exercise, we are going to create two pages: Home and About. Found on most websites, these should help give you context as to what is loading where and when. Create two templates in your HTML page for us to use. Don't forget to encapsulate all your content in one "root" element (represented in the sketch below by wrapping div tags), and ensure you declare the templates before your application JavaScript is loaded. We've created a Home page template with the id of homepage, and an About page, containing some placeholder text from lorem ipsum, with the id of about. Next, create two components in your JavaScript which reference these two templates.

The next step is to give the router a placeholder to render the components in the view. This is done by using a custom <router-view> HTML element. Using this element gives you control over where your content will render. It allows us to have a header and footer right in the app view, without needing to deal with messy templates or include the components themselves. Add a header, main, and footer element to your app. Give yourself a logo in the header and credits in the footer; in the main HTML element, place the router-view placeholder. Everything in the app view is optional, except the router-view, but it gives you an idea of how the router HTML element can be implemented into a site structure.

The next stage is to initialize the Vue-router and instruct Vue to use it. Create a new instance of VueRouter and add it to the Vue instance, similar to how we added Vuex in the previous section. We now need to tell the router about our routes (or paths), and which component it should load when it encounters each one. Create an object inside the Vue-router instance with a key of routes and an array as the value. This array needs to include an object for each route. Each route object contains a path and component key. The path is a string of the URL that you want to load the component on. Vue-router serves up components on a first-come-first-served basis. (A consolidated sketch of all of these pieces follows.)
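The original code listings were not preserved in this excerpt, so here is a minimal, self-contained sketch that pulls all of the above together (element IDs and component names are illustrative):

<!-- index.html -->
<div id="app">
  <header><h1>My Logo</h1></header>
  <main>
    <!-- The router renders the matched component here -->
    <router-view></router-view>
  </main>
  <footer><p>Site credits</p></footer>
</div>

<!-- Templates must be declared before the application JavaScript loads -->
<script type="text/x-template" id="homepage">
  <div><h2>Home</h2><p>Welcome home!</p></div>
</script>
<script type="text/x-template" id="about">
  <div><h2>About</h2><p>Lorem ipsum dolor sit amet.</p></div>
</script>

<script src="vue.js"></script>
<script src="router.js"></script>
<script>
  // Plain objects with the same properties as Vue components
  const Home = { template: '#homepage' };
  const About = { template: '#about' };

  // Tell the router which component to load for each path
  const router = new VueRouter({
    routes: [
      { path: '/', component: Home },
      { path: '/about', component: About }
    ]
  });

  // Add the router to the Vue instance
  new Vue({
    el: '#app',
    router
  });
</script>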
For example, if there are several routes with the same path, the first one encountered is used. Ensure each route has a leading slash; this tells the router it is a root page and not a sub-page. Save and view your app in the browser. You should be presented with the content of the Home template component. If you observe the URL, you will notice that on page load a hash and forward slash (#/) are appended to the path. This is the router creating a method for browsing the components and utilizing the address bar. If you change this to the path of your second route, #/about, you will see the contents of the About component.

Vue-router is also able to use the JavaScript history API to create prettier URLs. For example, yourdomain.com/index.html#about would become yourdomain.com/about. This is activated by adding mode: 'history' to your VueRouter instance:

const router = new VueRouter({
  mode: 'history',
  routes: [
    // ...
  ]
});

With this, you should now be familiar with Vue-router and how to initialize it to create new routes and paths for the different web pages on your website. Do check out the book Vue.js 2.x by Example to start developing building blocks for a successful e-commerce website.


DataCamp reckons with its #MeToo movement; CEO steps down from his role indefinitely

Fatema Patrawala
25 Apr 2019
7 min read
The data science community is reeling after the data science learning startup DataCamp penned a blog post acknowledging that an unnamed company executive made "uninvited physical contact" with one of its employees. DataCamp, which operates an e-learning platform where aspiring data scientists can take courses in coding and data analysis, is a startup valued at $184 million and has raised over $30 million in funding.

The company disclosed in a blog post published on 4th April that the incident occurred at an "informal employee gathering" at a bar in October 2017. The unnamed DataCamp executive had "danced inappropriately and made uninvited physical contact" with the employee on the dance floor, the post read. The company didn't name the executive involved, but called his behavior on the dance floor "entirely inappropriate" and "inconsistent" with employee expectations and policies.

When Business Insider reached out to OS Keyes, one of the course instructors familiar with the matter, Keyes said that the executive in question is DataCamp's co-founder and CEO Jonathan Cornelissen. Yesterday, Motherboard also reported that the company did not adequately address the sexual misconduct, and instructors at DataCamp have begun boycotting the service and asking the company to delete their courses following the allegations.

What actually happened and how did DataCamp respond?

On April 4, DataCamp shared a statement on its blog titled "a note to our community." In it, the startup addressed the accusations against one of the company's executives: "In October 2017, at an informal employee gathering at a bar after a week-long company offsite, one of DataCamp's executives danced inappropriately and made uninvited physical contact with another employee while on the dance floor."

DataCamp had the complaint reviewed by a "third party not involved in DataCamp's day-to-day business," and said it took several "corrective actions," including "extensive sensitivity training, personal coaching, and a strong warning that the company will not tolerate any such behavior in the future."

DataCamp only posted its blog a day after more than 100 DataCamp instructors signed a letter and sent it to DataCamp executives. "We are unable to cooperate with continued silence and lack of transparency on this issue," the letter said. "The situation has not been acknowledged adequately to the data science community, leading to harmful rumors and uncertainty." But as instructors read DataCamp's statement following the letter, many found the actions taken to be insufficient.

https://twitter.com/hugobowne/status/1120733436346605568
https://twitter.com/NickSolomon10/status/1120837738004140038

Motherboard reported the case in detail, drawing on an interview with Julia Silge, a data scientist who co-authored the letter to DataCamp. Silge says that going public with their demands for accountability was a last resort. She remembered seeing the victim of the assault start working at DataCamp and then leave abruptly, which raised "red flags," though she did not reach out to her. Silge later heard about the incident from a mutual friend and began to raise the issue with people inside DataCamp. "There were various responses from the rank and file. It seemed like after a few months of that there was not a lot of change, so I escalated a little bit," she said.
DataCamp eventually responded to Silge by saying, "I think you have misconceptions about what happened," and mentioned that "there was alcohol involved" to explain the executive's behavior. DataCamp further insisted, "We also heard over and over again, 'This has been thoroughly handled.'" But according to Silge and other instructors who have spoken out, DataCamp hasn't properly handled the situation and has tried to sweep it under the rug.

Silge also created a private Slack group to coordinate efforts to confront the issue. She and the group got onto a group video conference with DataCamp, which was put into "listen-only" mode for everyone except DataCamp, meaning the other participants could not speak and were effectively silenced. "It felt like 30 minutes of the DataCamp leadership saying what they wanted to say to us," Silge said. "The content of it was largely them saying how much they valued diversity and inclusion, which is hard to find credible given the particular ways DataCamp has acted over the past."

Following that meeting, instructors began to boycott DataCamp more openly, with one instructor refusing to make necessary upgrades to her course until DataCamp addressed the situation. Silge and two other instructors eventually drafted and sent the letter, at first to the small group involved in the accountability efforts, then to almost every DataCamp instructor. All told, the letter received more than 100 signatures (of about 200 total instructors).

A DataCamp spokesperson responded, "When we became aware of this matter, we conducted a thorough investigation and took actions we believe were necessary and appropriate. However, recent inquiries have made us aware of mischaracterizations of what occurred and we felt it necessary to make a public statement. As a matter of policy, we do not disclose details on matters like this, to protect the privacy of the individuals involved." The company added, "We do not retaliate against employees, contractors or instructors or other members of our community, under any circumstances, for reporting concerns about behavior or conduct."

DataCamp's response was not only inadequate but also technically evasive: contractor Noam Ross pointed out in his blog post that DataCamp had published the blog with a "no-index" tag, meaning it would not show up in aggregated searches such as Google results. Knowingly adding this tag represents DataCamp's continued lack of public accountability.

OS Keyes told Business Insider that, at this point, the best course of action for DataCamp is an outright change in leadership. "The investors need to get together and fire the [executive], and follow that by publicly explaining why, apologising, compensating the victim and instituting a much more rigorous set of work expectations," Keyes said.

#Rstats and other data science communities and DataCamp instructors take action

One of the contractors, Ines Montani, put it this way: "I was pretty disappointed, appalled and frustrated by DataCamp's reaction and non-action, especially as more and more details came out about how they essentially tried to sweep this under the rug for almost two years." Due to their contracts, many instructors cannot take down their DataCamp courses.
Instead of removing the courses, many contractors for DataCamp, including Montani, took to Twitter after DataCamp published the blog, urging students to boycott the very courses they had designed.

https://twitter.com/noamross/status/1116667602741485571
https://twitter.com/daniellequinn88/status/1117860833499832321
https://twitter.com/_tetration_/status/1118987968293875714

By boycotting their own courses, the instructors are putting financial pressure on the company. They also want the executive responsible for the misbehaviour to be held accountable for his actions, the victim to be compensated, and those who were fired for complaining to be compensated as well; this pressure may ultimately undercut DataCamp's bottom line. Influential open source communities, including RStudio, SatRdays, and R-Ladies, have cut all ties with DataCamp to show their disappointment with the lack of serious accountability.

CEO steps down "indefinitely" from his role and accepts his mistakes

Today, Jonathan Cornelissen accepted his mistake and wrote a public apology for his inappropriate behaviour. He writes, "I want to apologize to a former employee, our employees, and our community. I have failed you twice. First in my behavior and second in my failure to speak clearly and unequivocally to you in a timely manner. I am sorry." He has also stepped down from his position as the company's CEO indefinitely, pending a complete review of the company's environment and culture.

While this is a step in the right direction, the apology unfortunately comes very late and is seen by many in the data science community as a PR move to appease the backlash from instructors and the wider community.

https://twitter.com/mrsnoms/status/1121235830381645824

Exploring Token Generation Strategies

Saeed Dehqan
28 Aug 2023
8 min read
Introduction

This article discusses different methods for generating sequences of tokens using language models, specifically in the context of predicting the next token in a sequence. It explains various techniques for selecting the next token based on the predicted probability distribution over possible tokens.

Language models predict the next token based on the n previous tokens, and models try to extract as much information from those n previous tokens as they can. Transformer models aggregate information from all n previous tokens: tokens in a sequence communicate with one another and exchange their information. At the end of this communication process, each token is context-aware, and we use it to predict its own next token. Each token separately goes through some linear/non-linear layers, and the output is unnormalized logits. Then, we apply softmax on the logits to convert them into probability distributions, so that each token has its own probability distribution over its next token.

Exploring Methods for Token Selection

When we have the probability distribution over tokens, it's time to pick one token as the next token. There are four methods for selecting a suitable token from the probability distribution:

●    Greedy or naive method: Simply select the token that has the highest probability. This is a deterministic method.
●    Beam search: Receives a parameter named beam size and, based on it, runs the model multiple times to find a suitable sentence, not just a single token. This is a deterministic method.
●    Top-k sampling: Select the top k most probable tokens, shut off the other tokens (set their probability to -inf), and sample from the top k tokens. This is a sampling method.
●    Nucleus sampling: Also select the most probable tokens and shut off the rest, with the difference that the selection of most probable tokens is dynamic, not just a fixed k.

Greedy method

This is a simple and fast method that needs only one prediction: just select the most probable token as the next token. Greedy decoding can be efficient on arithmetic tasks, but it tends to get stuck in a loop, repeating tokens one after another. It also kills the diversity of the model by favoring tokens that occur frequently in the training dataset. Here's the code that converts unnormalized logits (simply the output of the network) into a probability distribution and selects the most probable next token:

probs = F.softmax(logits, dim=-1)
next_token = probs.argmax()

Beam search

Beam search produces better results but is slower, because it runs the model multiple times to build n sequences, where n is the beam size. This method selects the top n tokens, adds them to the current sequence, and runs the model on the resulting sequences to predict the next token, continuing this process until the end of the sequence. It is computationally expensive, but higher quality. Based on this search (with a beam size of two), the algorithm returns two sequences. Then, how do we select the final sequence? We sum up the loss for all predictions and select the sequence with the lowest loss. (A sketch of this procedure follows.)
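The original post illustrates beam search with a figure rather than code. As a minimal hedged sketch, assuming a model(tokens) callable that returns a tensor of log-probabilities over the vocabulary for the next token, beam search could look like this:

import torch

def beam_search(model, tokens, beam_size=2, steps=10):
    # Each beam is a (token list, cumulative log-probability) pair
    beams = [(tokens, 0.0)]
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            log_probs = model(seq)  # assumed shape: (vocab_size,)
            top_lp, top_idx = torch.topk(log_probs, beam_size)
            for lp, idx in zip(top_lp, top_idx):
                candidates.append((seq + [idx.item()], score + lp.item()))
        # Keep only the beam_size sequences with the highest total
        # log-probability (highest log-probability == lowest loss)
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams[0][0]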
Simple sampling

We can select tokens randomly based on their probability: the higher the probability, the higher the chance of being selected. We can achieve this by using the multinomial method:

logits = logits[:, -1, :]
probs = F.softmax(logits, dim=-1)
next_idx = torch.multinomial(probs, num_samples=1)

This is part of the model we implemented in the "transformer building blocks" blog, and the code can be found here. The torch.multinomial method receives the probability distribution and selects n samples. Here's an example:

In [1]: import torch
In [2]: probs = torch.tensor([0.3, 0.6, 0.1])
In [3]: torch.multinomial(probs, num_samples=1)
Out[3]: tensor([1])
In [4]: torch.multinomial(probs, num_samples=1)
Out[4]: tensor([0])
In [5]: torch.multinomial(probs, num_samples=1)
Out[5]: tensor([1])
In [6]: torch.multinomial(probs, num_samples=1)
Out[6]: tensor([0])
In [7]: torch.multinomial(probs, num_samples=1)
Out[7]: tensor([1])
In [8]: torch.multinomial(probs, num_samples=1)
Out[8]: tensor([1])

We ran the method six times on probs, and as you can see it selects 0.6 four times and 0.3 twice, because 0.6 is higher than 0.3.

Top-k sampling

If we want to improve on the previous sampling method, we need to limit the sampling space, and that is what top-k sampling does. K is a parameter: top-k sampling selects the top k tokens from the probability distribution and samples from those k tokens. Here is an example of top-k sampling:

In [1]: import torch
In [2]: logit = torch.randn(10)
In [3]: logit
Out[3]: tensor([-1.1147, 0.5769, 0.3831, -0.5841, 1.7528, -0.7718, -0.4438, 0.6529, 0.1500, 1.2592])
In [4]: topk_values, topk_indices = torch.topk(logit, 3)
In [5]: topk_values
Out[5]: tensor([1.7528, 1.2592, 0.6529])
In [6]: logit[logit < topk_values[-1]] = float('-inf')
In [7]: logit
Out[7]: tensor([ -inf, -inf, -inf, -inf, 1.7528, -inf, -inf, 0.6529, -inf, 1.2592])
In [8]: probs = logit.softmax(0)
In [9]: probs
Out[9]: tensor([0.0000, 0.0000, 0.0000, 0.0000, 0.5146, 0.0000, 0.0000, 0.1713, 0.0000, 0.3141])
In [10]: torch.multinomial(probs, num_samples=1)
Out[10]: tensor([9])
In [11]: torch.multinomial(probs, num_samples=1)
Out[11]: tensor([4])
In [12]: torch.multinomial(probs, num_samples=1)
Out[12]: tensor([9])

Here is what the example does:

●    We first create a fake logit with torch.randn. Suppose logit is the raw output of a network.
●    We use torch.topk to select the top 3 values from logit. torch.topk returns the top 3 values along with their indices, sorted from top to bottom.
●    We use advanced indexing to select the logit values that are lower than the last of the top 3 values. When we say logit < topk_values[-1], we mean all the numbers in logit that are lower than topk_values[-1] (0.6529).
●    After selecting those numbers, we replace their values with float('-inf'), a negative infinite number.
●    After the replacement, we run softmax over the logit to convert it into probabilities.
●    Now, we use torch.multinomial to sample from the probs.

Nucleus sampling

Nucleus sampling is like top-k sampling, but with a dynamic selection of top tokens instead of a fixed k. The dynamic selection is better when we are unsure of a suitable k for top-k sampling. Nucleus sampling has a hyperparameter named p; say it is 0.9. The method walks down the tokens in descending order of probability and adds up their probabilities; when we reach a cumulative sum of p, we stop. What is the cumulative sum? Here's an example:

In [1]: import torch
In [2]: logit = torch.randn(10)
In [3]: probs = logit.softmax(0)
In [4]: probs
Out[4]: tensor([0.0652, 0.0330, 0.0609, 0.0436, 0.2365, 0.1738, 0.0651, 0.0692, 0.0495, 0.2031])
In [5]: [probs[:x+1].sum() for x in range(probs.size(0))]
Out[5]: [tensor(0.0652), tensor(0.0983), tensor(0.1592), tensor(0.2028), tensor(0.4394), tensor(0.6131), tensor(0.6782), tensor(0.7474), tensor(0.7969), tensor(1.)]

I hope you understand how the cumulative sum works from the code: we just add up the n previous probability values.
We can also use torch.cumsum and get the same result:

In [9]: torch.cumsum(probs, dim=0)
Out[9]: tensor([0.0652, 0.0983, 0.1592, 0.2028, 0.4394, 0.6131, 0.6782, 0.7474, 0.7969, 1.0000])

Okay. Here's nucleus sampling from scratch:

In [1]: import torch
In [2]: logit = torch.randn(10)
In [3]: probs = logit.softmax(0)
In [4]: probs
Out[4]: tensor([0.7492, 0.0100, 0.0332, 0.0078, 0.0191, 0.0370, 0.0444, 0.0553, 0.0135, 0.0305])
In [5]: sprobs, indices = torch.sort(probs, dim=0, descending=True)
In [6]: sprobs
Out[6]: tensor([0.7492, 0.0553, 0.0444, 0.0370, 0.0332, 0.0305, 0.0191, 0.0135, 0.0100, 0.0078])
In [7]: cs_probs = torch.cumsum(sprobs, dim=0)
In [8]: cs_probs
Out[8]: tensor([0.7492, 0.8045, 0.8489, 0.8860, 0.9192, 0.9497, 0.9687, 0.9822, 0.9922, 1.0000])
In [9]: selected_tokens = cs_probs < 0.9
In [10]: selected_tokens
Out[10]: tensor([ True, True, True, True, False, False, False, False, False, False])
In [11]: probs[indices[selected_tokens]]
Out[11]: tensor([0.7492, 0.0553, 0.0444, 0.0370])
In [12]: probs = probs[indices[selected_tokens]]
In [13]: torch.multinomial(probs, num_samples=1)
Out[13]: tensor([0])

Step by step:

●    Convert the logit to probabilities and sort them in descending order so that we can select from top to bottom.
●    Calculate the cumulative sum.
●    Using advanced indexing, filter out the values beyond the cutoff.
●    Then, sample from a limited and better space.

Please note that you can use a combination of top-k and nucleus sampling: it is like selecting k tokens and then doing nucleus sampling on those k tokens. You can also combine top-k, nucleus sampling, and beam search. (A sketch of the first combination follows.)
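The original article does not include code for this combination; as a hedged sketch (the function and parameter names are ours), first keep the top k logits, then apply the nucleus cutoff within them:

import torch

def top_k_nucleus_sample(logits, k=5, p=0.9):
    # Top-k step: keep only the k largest logits (returned sorted descending)
    topk_values, topk_indices = torch.topk(logits, k)
    probs = torch.softmax(topk_values, dim=0)
    # Nucleus step: keep the smallest prefix whose cumulative sum stays below p
    keep = torch.cumsum(probs, dim=0) < p
    keep[0] = True  # always keep at least the most probable token
    kept_probs = probs[keep] / probs[keep].sum()  # renormalize
    # Sample among the kept tokens and map back to the original vocabulary index
    choice = torch.multinomial(kept_probs, num_samples=1)
    return topk_indices[keep][choice]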
Conclusion

Understanding these methods is crucial for anyone working with language models, natural language processing, or text generation tasks. These techniques play a significant role in generating coherent and diverse sequences of text. Depending on the specific use case and desired outcomes, readers can choose the most appropriate method to employ. Overall, this knowledge can contribute to improving the quality of generated text and enhancing the capabilities of language models.

Author Bio

Saeed Dehqan trains language models from scratch. Currently, his work is centered around language models for text generation, and he possesses a strong understanding of the underlying concepts of neural networks. He is proficient in using optimizers such as genetic algorithms to fine-tune network hyperparameters and has experience with neural architecture search (NAS) using reinforcement learning (RL). He implements models starting from data gathering through to monitoring and deployment on mobile, web, cloud, and more.


Learning Dependency Injection (DI)

Packt
08 Mar 2018
15 min read
In this article by Sherwin John Calleja Tragura, author of the book Spring 5.0 Cookbook, we will learn about the implementation of the Spring container using XML and JavaConfig, and also about managing beans in an XML-based container. In this article, you will learn how to:

- Implement the Spring container using XML
- Implement the Spring container using JavaConfig
- Manage the beans in an XML-based container

(For more resources related to this topic, see here.)

Implementing the Spring container using XML

Let us begin with the creation of the Spring Web Project using the Maven plugin of our STS Eclipse 8.3. This web project will implement our first Spring 5.0 container using the XML-based technique. This is the most conventional but robust way of creating the Spring container. The container is where the objects are created, managed, wired together with their dependencies, and monitored from their initialization up to their destruction. This recipe will mainly highlight how to create an XML-based Spring container.

Getting ready

Create a Maven project ready for development using the STS Eclipse 8.3. Be sure to have installed the correct JRE. Let us name the project ch02-xml.

How to do it…

After creating the project, certain Maven errors will be encountered. Bug fix the Maven issues of our ch02-xml project in order to use the XML-based Spring 5.0 container by performing the following steps:

1. Open pom.xml of the project and add the following properties, which contain the Spring build version and Servlet container to utilize:

<properties>
   <spring.version>5.0.0.BUILD-SNAPSHOT</spring.version>
   <servlet.api.version>3.1.0</servlet.api.version>
</properties>

2. Add the following Spring 5 dependencies inside pom.xml. These dependencies are essential in providing us with the interfaces and classes to build our Spring container:

<dependencies>
   <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${spring.version}</version>
   </dependency>
   <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>${spring.version}</version>
   </dependency>
   <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-beans</artifactId>
      <version>${spring.version}</version>
   </dependency>
</dependencies>

3. It is required to add the following repository, from which the Spring 5.0 dependencies in Step 2 will be downloaded:

<repositories>
   <repository>
      <id>spring-snapshots</id>
      <name>Spring Snapshots</name>
      <url>https://repo.spring.io/libs-snapshot</url>
      <snapshots>
         <enabled>true</enabled>
      </snapshots>
   </repository>
</repositories>

4. Then add the Maven plugin for deployment, but be sure to recognize web.xml as the deployment descriptor. This can be done by enabling <failOnMissingWebXml> or just deleting the <configuration> tag, as follows:

<plugin>
   <artifactId>maven-war-plugin</artifactId>
   <version>2.3</version>
</plugin>

5. Follow the Tomcat Maven plugin for deployment, as explained in Chapter 1.
6. After the Maven configuration details, check if there is a WEB-INF folder inside src/main/webapp. If there is none, create one. This is mandatory for this project, since we will be using a deployment descriptor (web.xml).
7. Inside the WEB-INF folder, create a deployment descriptor or drop a web.xml template inside the src/main/webapp/WEB-INF directory.
8. Then, create an XML-based Spring container named ch02-beans.xml inside the ch02-xml/src/main/java/ directory.
The configuration file must contain the following namespaces and tags:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:context="http://www.springframework.org/schema/context"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans.xsd
      http://www.springframework.org/schema/context
      http://www.springframework.org/schema/context/spring-context.xsd">
</beans>

You can generate this file using the STS Eclipse wizard (Ctrl+N) under the Spring | Spring Bean Configuration File option.

9. Save all the files. Clean and build the Maven project. Do not deploy yet, because this is just a standalone project at the moment.

How it works…

This project just imported three major Spring 5.0 libraries, namely Spring-Core, Spring-Beans, and Spring-Context, because the major classes and interfaces used to create the container are found in these libraries. This shows that Spring, unlike other frameworks, does not need an entire load of libraries just to set up the initial platform. Spring may be perceived as a huge enterprise framework nowadays, but internally it is still lightweight.

The basic container that manages objects in Spring is provided by the org.springframework.beans.factory.BeanFactory interface, which can only be found in the Spring-Beans module. Once additional features are needed, such as message resource handling, AOP capabilities, application-specific contexts, and listener implementation, the sub-interface of BeanFactory, namely the org.springframework.context.ApplicationContext interface, is used instead. This ApplicationContext, found in the Spring-Context module, provides an enterprise-specific container for all applications, because it encompasses a larger scope of Spring components than the BeanFactory interface.

The container we created, ch02-beans.xml, an ApplicationContext, is an XML-based configuration that contains XSD schemas from the three main libraries imported. These schemas have tag libraries and bean properties, which are essential in managing the whole framework. But beware of runtime errors once libraries are removed from the dependencies, because using these tags is equivalent to using the libraries per se. The final Spring Maven project directory structure must look like this.

Implementing the Spring container using JavaConfig

Another option for implementing the Spring 5.0 container is through the use of Spring JavaConfig. This technique uses pure Java classes to configure the framework's container. It eliminates the use of bulky and tedious XML metadata and also provides a type-safe and refactoring-free approach to configuring entities or collections of objects in the container. This recipe will showcase how to create the container using JavaConfig in a web.xml-less approach.

Getting ready

Create another Maven project and name it ch02-jc. This STS Eclipse project will use a Java class approach, including for its deployment descriptor.

How to do it…

1. To get rid of the usual Maven bugs, immediately open the pom.xml of ch02-jc and add <properties>, <dependencies>, and <repositories> equivalent to what was added in the Implementing the Spring container using XML recipe.
2. Next, get rid of web.xml. Since the Servlet 3.0 specification, servlet containers can support projects without web.xml. This is done by implementing the handler interface called org.springframework.web.WebApplicationInitializer to programmatically configure the ServletContext.
3. Create a SpringWebinitializer class and override its onStartup() method, without any implementation yet:

public class SpringWebinitializer implements WebApplicationInitializer {

   @Override
   public void onStartup(ServletContext container) throws ServletException {
   }
}

4. The lines in the previous step will generate some errors until you add the following Maven dependency:

<dependency>
   <groupId>org.springframework</groupId>
   <artifactId>spring-web</artifactId>
   <version>${spring.version}</version>
</dependency>

5. In pom.xml, disable <failOnMissingWebXml>.
6. After the Maven details, create a class named BeanConfig, the ApplicationContext definition, bearing the annotation @Configuration at the top of it. The class must be inside the org.packt.starter.ioc.context package and must be an empty class at the moment:

@Configuration
public class BeanConfig {
}

7. Save all the files, then clean and build the Maven project.

How it works…

The Maven project ch02-jc makes use of both JavaConfig and ServletContainerInitializer, meaning there will be no XML configuration from the servlet to the Spring 5.0 container. The BeanConfig class is the ApplicationContext of the project. It bears the annotation @Configuration, indicating that the class is used by JavaConfig as a source of bean definitions. This is handy compared to creating an XML-based configuration with lots of metadata.

On the other hand, ch02-jc implements org.springframework.web.WebApplicationInitializer, which is a handler of org.springframework.web.SpringServletContainerInitializer, the framework's implementation class for the servlet's ServletContainerInitializer. The SpringServletContainerInitializer is notified by WebApplicationInitializer during the execution of its onStartup(ServletContext) with regard to the programmatic registration of filters, servlets, and listeners provided by the ServletContext. Eventually, the servlet container will acknowledge the status reported by SpringServletContainerInitializer, thus eliminating the use of web.xml.

On Maven's side, the plugin for deployment must be notified that the project will not use web.xml. This is done by setting <failOnMissingWebXml> to false inside its <configuration> tag. The final Spring Web Project directory structure must look like the following structure. (A sketch of a typical onStartup() implementation follows.)
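The recipe deliberately leaves onStartup() empty at this stage. For reference, here is a hedged sketch of what a minimal implementation might look like once BeanConfig is ready; it assumes spring-web is on the classpath and uses a ContextLoaderListener to bootstrap the container in place of web.xml:

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

public class SpringWebinitializer implements WebApplicationInitializer {

   @Override
   public void onStartup(ServletContext container) throws ServletException {
      // Build the ApplicationContext from the @Configuration class
      AnnotationConfigWebApplicationContext context =
            new AnnotationConfigWebApplicationContext();
      context.register(BeanConfig.class);

      // Register the context with the servlet container, replacing web.xml
      container.addListener(new ContextLoaderListener(context));
   }
}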
Managing the beans in an XML-based container

Frameworks become popular because of the principles behind their architecture. Each framework is built from different design patterns that manage the creation and behavior of the objects it manages. This recipe will detail how Spring 5.0 manages the objects of applications and how it shares a set of methods and functions across the platform.

Getting ready

The two Maven projects previously created will be utilized to illustrate how Spring 5.0 loads objects into heap memory. We will also be utilizing the ApplicationContext rather than the BeanFactory container, in preparation for the next recipes involving more Spring components.

How to do it…

With our ch02-xml project, let us demonstrate how Spring loads objects using the XML-based ApplicationContext container:

1. Create a package layer, org.packt.starter.ioc.model, for our model classes. Our model classes will be typical Plain Old Java Objects (POJOs), which the Spring 5.0 architecture is known for.
2. Inside the newly created package, create the classes Employee and Department, which contain the following blueprints:

public class Employee {

   private String firstName;
   private String lastName;
   private Date birthdate;
   private Integer age;
   private Double salary;
   private String position;
   private Department dept;

   public Employee() {
      System.out.println("an employee is created.");
   }

   public Employee(String firstName, String lastName, Date birthdate, Integer age, Double salary, String position, Department dept) {
      this.firstName = firstName;
      this.lastName = lastName;
      this.birthdate = birthdate;
      this.age = age;
      this.salary = salary;
      this.position = position;
      this.dept = dept;
      System.out.println("an employee is created.");
   }

   // getters and setters
}

public class Department {

   private Integer deptNo;
   private String deptName;

   public Department() {
      System.out.println("a department is created.");
   }

   // getters and setters
}

3. Afterwards, open the ApplicationContext ch02-beans.xml. Register our first set of Employee and Department beans using the <bean> tag, as follows:

<bean id="empRec1" class="org.packt.starter.ioc.model.Employee" />
<bean id="dept1" class="org.packt.starter.ioc.model.Department" />

4. The beans in Step 3 contain private instance variables with zero and null default values. To update them, our classes have mutators or setter methods that can be used to avoid the NullPointerException that always happens when we immediately use empty objects. In Spring, calling these setters is tantamount to injecting data into the <bean>, as in the way the following objects are created:

<bean id="empRec2" class="org.packt.starter.ioc.model.Employee">
   <property name="firstName"><value>Juan</value></property>
   <property name="lastName"><value>Luna</value></property>
   <property name="age"><value>70</value></property>
   <property name="birthdate"><value>October 28, 1945</value></property>
   <property name="position"><value>historian</value></property>
   <property name="salary"><value>150000</value></property>
   <property name="dept"><ref bean="dept2"/></property>
</bean>

<bean id="dept2" class="org.packt.starter.ioc.model.Department">
   <property name="deptNo"><value>13456</value></property>
   <property name="deptName"><value>History Department</value></property>
</bean>

5. A <property> tag is equivalent to a setter definition, accepting an actual value or an object reference. The name attribute defines the name of the setter minus the set prefix, with the remainder converted to camel-case notation. The value attribute or the <value> tag both pertain to supported Spring-type values (for example, int, double, float, Boolean, String). The ref attribute or <ref> tag provides a reference to another loaded <bean> in the container. Another way of writing the bean object empRec2 is through the use of the ref and value attributes, as follows:

<bean id="empRec3" class="org.packt.starter.ioc.model.Employee">
   <property name="firstName" value="Jose"/>
   <property name="lastName" value="Rizal"/>
   <property name="age" value="101"/>
   <property name="birthdate" value="June 19, 1950"/>
   <property name="position" value="scriber"/>
   <property name="salary" value="90000"/>
   <property name="dept" ref="dept3"/>
</bean>

<bean id="dept3" class="org.packt.starter.ioc.model.Department">
   <property name="deptNo" value="56748"/>
   <property name="deptName" value="Communication Department" />
</bean>

6. Another way of updating the private instance variables of the model objects is to make use of the constructors.
Actual Spring data and object references can be injected into the beans through the metadata:

<bean id="empRec5" class="org.packt.starter.ioc.model.Employee">
   <constructor-arg><value>Poly</value></constructor-arg>
   <constructor-arg><value>Mabini</value></constructor-arg>
   <constructor-arg><value>August 10, 1948</value></constructor-arg>
   <constructor-arg><value>67</value></constructor-arg>
   <constructor-arg><value>45000</value></constructor-arg>
   <constructor-arg><value>Linguist</value></constructor-arg>
   <constructor-arg><ref bean="dept3"></ref></constructor-arg>
</bean>

7. After all the modifications, save ch02-beans.xml. Create a TestBeans class inside the src/test/java directory. This class will load the XML configuration resource into the ApplicationContext container through org.springframework.context.support.ClassPathXmlApplicationContext and fetch all the objects created through its getBean() method:

public class TestBeans {

   public static void main(String args[]) {
      ApplicationContext context = new ClassPathXmlApplicationContext("ch02-beans.xml");
      System.out.println("application context loaded.");

      System.out.println("****The empRec1 bean****");
      Employee empRec1 = (Employee) context.getBean("empRec1");

      System.out.println("****The empRec2 bean****");
      Employee empRec2 = (Employee) context.getBean("empRec2");
      Department dept2 = empRec2.getDept();
      System.out.println("First Name: " + empRec2.getFirstName());
      System.out.println("Last Name: " + empRec2.getLastName());
      System.out.println("Birthdate: " + empRec2.getBirthdate());
      System.out.println("Salary: " + empRec2.getSalary());
      System.out.println("Dept. Name: " + dept2.getDeptName());

      System.out.println("****The empRec5 bean****");
      Employee empRec5 = context.getBean("empRec5", Employee.class);
      Department dept3 = empRec5.getDept();
      System.out.println("First Name: " + empRec5.getFirstName());
      System.out.println("Last Name: " + empRec5.getLastName());
      System.out.println("Dept. Name: " + dept3.getDeptName());
   }
}

The expected output after running the main() thread will be:

an employee is created.
an employee is created.
a department is created.
an employee is created.
a department is created.
an employee is created.
a department is created.
application context loaded.
*********The empRec1 bean ***************
*********The empRec2 bean ***************
First Name: Juan
Last Name: Luna
Birthdate: Sun Oct 28 00:00:00 CST 1945
Salary: 150000.0
Dept. Name: History Department
*********The empRec5 bean ***************
First Name: Poly
Last Name: Mabini
Dept. Name: Communication Department

How it works…

The principle behind creating <bean> objects in the container is called the Inversion of Control (IoC) design pattern. In order to use the objects, their dependencies, and their behavior, these must be placed within the framework itself. After registering them in the container, Spring takes care of their instantiation and their availability to other objects. A developer can just "fetch" them when they need to include them in their software modules, as shown in the following diagram.

The IoC design pattern can be likened to the Hollywood Principle ("Don't call us, we'll call you!"), a popular line in most object-oriented programming languages. The framework does not care whether the developer needs the objects or not, because the lifespan of the objects lies in the framework's hands.
In the case of setting new values or updating the values of an object's private variables, IoC has an implementation that can be used for "injecting" new actual values or object references into beans; it is popularly known as the Dependency Injection (DI) design pattern. This principle exposes a bean's properties to the public through its setter methods or its constructors. Injecting Spring values and object references into the setter method signatures using the <property> tag, without knowing the implementation, is called the Method Injection type of DI. On the other hand, if we create a bean with initialized values injected into its constructor through <constructor-arg>, it is known as Constructor Injection.

To create the ApplicationContext container, we need to instantiate ClassPathXmlApplicationContext or FileSystemXmlApplicationContext, depending on the location of the XML definition file. Since the file is found in ch02-xml/src/main/java/, the ClassPathXmlApplicationContext implementation is the best option. This proves that the ApplicationContext is an object too, bearing all that XML metadata. It has several overloaded getBean() methods used to fetch all the objects loaded within it.

Summary

In this article, we went over how to create an XML-based Spring container, how to create the container using JavaConfig in a web.xml-less approach, and how Spring 5.0 manages the objects of applications and shares a set of methods and functions across the platform.


It's Black Friday: But what's the business (and developer) cost of downtime?

Richard Gall
23 Nov 2018
4 min read
Black Friday is back and, as you've probably already noticed, with a considerable vengeance. According to Adobe Analytics data, online spending is predicted to hit $3.7 billion over this holiday season in the U.S., up from $2.9 billion in 2017. But while consumers clamour for deals and businesses reap the rewards, it's important to remember the largely hidden plane of software engineering labour underneath it all. Without this army of developers, consumers would most likely be hitting their devices in frustration, while business leaders would be missing tough revenue targets. So, as we head into Black Friday, let's pour one out for all those engineers on call, trying their best to keep eCommerce sites on their feet.

Here's to the software engineers keeping things running on Black Friday

Of course, the pain that hits on days like Black Friday and Cyber Monday can be minimised with smart planning and effective decision making long before the sales begin. However, for engineering teams that are under-resourced and lacking the right tools, that is simply impossible. This leaves software engineers treading water, knowing they will be sinking once the big days come around.

It doesn't have to be like this. With smarter leadership and, indeed, more respect for the intensive work engineers put in to make websites and apps actually work, revenue-driving platforms can become more secure, resilient, and stable.

Chaos engineering platform Gremlin publishes the 'true cost of downtime'

This is the central argument of the chaos engineering platform Gremlin, which we've covered a number of times this year. To coincide with Black Friday, the team has put together what it believes is the 'true cost of downtime'. On the one hand this is a good marketing hook for their chaos engineering platform but, cynicism aside, it's also a good explanation of why the principles of chaos engineering can be so valuable from both a business and a developer perspective.

Estimating the annual revenue of some of the biggest companies in the world, Gremlin has created an interactive table that shows what the cost of downtime for each of those businesses would be, for the length of time you are on the page. For 20 minutes of downtime, Amazon.com would have lost a staggering $4.4 million. For Walgreens it's more than $80,000. (The back-of-the-envelope arithmetic behind such a table is sketched at the end of this section.)

Gremlin provides some context to all this, saying: "Enterprise commerce businesses typically rely on a complex microservices architecture, from fulfillment, to website security, ability to scale with holiday traffic, and payment processing - there is a lot that can go wrong and impact revenue, damage customer trust, and consume engineering time. If an ecommerce site isn't 100% online and performant, it's losing revenue."

"The holiday season is especially demanding for SREs working in ecommerce. Even the most skilled engineering teams can struggle to keep up with the demands of peak holiday traffic (i.e. Black Friday and Cyber Monday). Just going down for a few seconds can mean thousands in lost revenue, but for some sites, downtime can be exponentially more expensive."

For Gremlin, chaos engineering is clearly the answer to many of the problems that days like Black Friday pose. While it might not work for every single organization, it's nevertheless true that failing to pay attention to the value of your applications and websites at an hour-by-hour level could be incredibly damaging.
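As promised above, here is the back-of-the-envelope arithmetic behind a downtime-cost table. This is a naive sketch (the revenue figure is illustrative, and real calculators such as Gremlin's may weight peak-hour traffic differently):

def downtime_cost(annual_revenue, minutes_down):
    # Naive model: assume revenue is spread evenly across the year
    revenue_per_minute = annual_revenue / (365 * 24 * 60)
    return revenue_per_minute * minutes_down

# Illustrative only: a retailer with $100 billion in annual revenue
print(downtime_cost(100_000_000_000, 20))  # roughly $3.8 million for 20 minutes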
With outages hitting Facebook, WhatsApp, and Instagram earlier this week, these problems aren't hidden away; they're in full view of the public. What does remain hidden, however, is the work and stress that goes into tackling these issues and ensuring things are working as they should. Perhaps it's time to start learning the lessons of Black Friday: business revenues will be that little bit healthier, but engineers will also be that little bit happier.


Swift's Core Libraries

Packt
13 Oct 2016
26 min read
In this article, Jon Hoffman, the author of the book Mastering Swift 3, talks about how excited he was when Apple announced that they were going to release a version of Swift for Linux, so that he could use Swift for his Linux and embedded development as well as his Mac OS and iOS development. When Apple first released Swift 2.2 for Linux, he was very excited, but also a little disappointed, because he could not read/write files, access network services, or use Grand Central Dispatch (GCD, or libdispatch) on Linux as he could on Apple's platforms. With the release of Swift 3, Apple has corrected this with the release of Swift's core libraries.

(For more resources related to this topic, see here.)

In this article, you will learn about the following topics:

- What the Swift Core Libraries are
- How to use Apple's URL loading system
- How to use the Formatter classes
- How to use the FileManager class

The Swift Core Libraries are written to provide a rich set of APIs that are consistent across the various platforms that Swift supports. By using these libraries, developers are able to write code that is portable to all platforms Swift supports. These libraries provide a higher level of functionality than the Swift standard library, in a number of areas, such as:

- Networking
- Unit testing
- Scheduling and execution of work (libdispatch)
- Property lists, JSON parsing, and XML parsing
- Support for dates, times, and calendar calculations
- Abstraction of OS-specific behavior
- Interaction with the file system
- User preferences

We are unable to cover all of the Core Libraries in this single article; however, we will look at some of the more useful ones, starting with Apple's URL loading system, which is used for network development.

Apple's URL loading system

Apple's URL loading system is a framework of classes available to interact with URLs. We can use these classes to communicate with services that use standard Internet protocols. The classes that we will be using in this section to connect to and retrieve information from REST services are as follows:

- URLSession: The main session object.
- URLSessionConfiguration: Used to configure the behavior of the URLSession object.
- URLSessionTask: A base class that handles the data being retrieved from the URL. Apple provides three concrete subclasses of the URLSessionTask class.
- URL: An object that represents the URL to connect to.
- URLRequest: Contains information about the request that we are making and is used by the URLSessionTask service to make the request.
- HTTPURLResponse: Contains the response to our request.

Now, let's look at each of these classes in a little more depth, so that we have a basic understanding of what each does.

URLSession

A URLSession object provides an API for interacting with various protocols, such as HTTP and HTTPS. The session object, which is an instance of URLSession, manages this interaction. These session objects are highly configurable, which allows us to control how our requests are made and how we handle the data that is returned. Like most networking APIs, URLSession is asynchronous. This means that we have to provide a way to return the response from the service back to the code that needs it. The most popular way to return the results from a session is to pass a completion handler block (closure) to the session.
This completion handler is then called when the service successfully responds or when we receive an error. All of the examples in this article use completion handlers to process the data returned from the services.

URLSessionConfiguration

The URLSessionConfiguration class defines the behavior and policies to use when connecting to a URL with a URLSession object. When using the URLSession object, we usually create a URLSessionConfiguration instance first, because an instance of this class is required when we create an instance of the URLSession class. The URLSessionConfiguration class defines three session types:

- Default session configuration: Manages upload and download tasks with the default configuration.
- Ephemeral session configuration: Behaves similarly to the default session configuration, except that it does not cache anything to disk.
- Background session configuration: Allows uploads and downloads to be performed even when the app is running in the background.

It is important to note that we should configure the URLSessionConfiguration object appropriately before we use it to create an instance of the URLSession class. When the session object is created, it creates a copy of the configuration object that we provided. Any changes made to the configuration object once the session object has been created are ignored by the session. If we need to make changes to the configuration, we must create another instance of the URLSession class. (A short sketch of creating each configuration type appears at the end of this overview.)

URLSessionTask

The URLSession service uses an instance of the URLSessionTask class to make the call to the service that we are connecting to. The URLSessionTask class is a base class, and Apple provides three concrete subclasses that we can use:

- URLSessionDataTask: Returns the response, in memory, directly to the application as one or more Data objects. This is the task we generally use most often.
- URLSessionDownloadTask: Writes the response directly to a temporary file.
- URLSessionUploadTask: Used for making requests that require a request body, such as a POST or PUT request.

It is important to note that a task will not send the request to the service until we call the resume() method.

URL

The URL object represents the URL that we are going to connect to. The URL class is not limited to URLs that represent remote servers; it can also be used to represent a local file on disk. In this article, we will use the URL class exclusively to represent the URL of the remote service that we are connecting to.

URLRequest

We use the URLRequest class to encapsulate our URL and the request properties. It is important to understand that the URLRequest class is used to encapsulate the necessary information to make our request, but it does not make the actual request. To make the request, we use instances of the URLSession and URLSessionTask classes.

HTTPURLResponse

The HTTPURLResponse class is a subclass of the URLResponse class, which encapsulates the metadata associated with the response to a URL request. The HTTPURLResponse class provides methods for accessing specific information associated with an HTTP response; specifically, it allows us to access the HTTP header fields and the response status codes.

We briefly covered a number of classes in this section, and it may not be clear how they all fit together; however, once you see the examples a little further on in this article, it will become much clearer.
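Before moving on, here is the promised quick sketch of creating each of the three session configuration types described above (the background identifier string is an illustrative value):

// Default: disk-backed caches, cookies, and credential storage
let defaultConfiguration = URLSessionConfiguration.default

// Ephemeral: like default, but nothing is cached to disk
let ephemeralConfiguration = URLSessionConfiguration.ephemeral

// Background: transfers can continue while the app is in the background;
// the identifier lets the system reassociate the session with our app
let backgroundConfiguration = URLSessionConfiguration.background(withIdentifier: "com.example.transfers")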
Before we go into our examples, let's take a quick look at the type of service that we will be connecting to.

REST web services

REST has become one of the most important technologies for stateless communication between devices. Due to the lightweight and stateless nature of REST-based services, its importance is likely to continue to grow as more devices are connected to the Internet. REST is an architectural style for designing networked applications. The idea behind REST is that, instead of using complex mechanisms such as SOAP or CORBA to communicate between devices, we use simple HTTP requests. While, in theory, REST is not dependent on the Internet protocols, it is almost always implemented using them. Therefore, when we access REST services, we are almost always interacting with web servers in the same way that our web browsers interact with them.

REST web services use the HTTP POST, GET, PUT, or DELETE methods. If we think about a standard Create, Read, Update, Delete (CRUD) application, we would use a POST request to create or update data, a GET request to read data, and a DELETE request to delete data. When we type a URL into our browser's address bar and hit Enter, we are generally making a GET request to the server, asking it to send us the web page associated with that URL. When we fill out a web form and click the submit button, we are generally making a POST request to the server, with the parameters from the web form included in the body of the POST request.

Now, let's look at how to make an HTTP GET request using Apple's networking API.

Making an HTTP GET request

In this example, we will make a GET request to Apple's iTunes search API to get a list of items related to the search term "Jimmy Buffett". Since we are retrieving data from the service, by REST standards we should use a GET request to retrieve the data.

While the REST standard is to use GET requests to retrieve data from a service, there is nothing stopping the developer of a web service from using a GET request to create or update a data object. Using a GET request in this manner is not recommended, but be aware that there are services out there that do not adhere to the REST standards.

The following code makes a request to Apple's iTunes search API and then prints the results to the console:

public typealias DataFromURLCompletionClosure = (URLResponse?, Data?) -> Void

public func sendGetRequest(_ handler: @escaping DataFromURLCompletionClosure) {
    let sessionConfiguration = URLSessionConfiguration.default

    let urlString = "https://itunes.apple.com/search?term=jimmy+buffett"
    if let encodeString = urlString.addingPercentEncoding(withAllowedCharacters: CharacterSet.urlQueryAllowed),
       let url = URL(string: encodeString) {

        var request = URLRequest(url: url)
        request.httpMethod = "GET"

        let urlSession = URLSession(configuration: sessionConfiguration, delegate: nil, delegateQueue: nil)

        let sessionTask = urlSession.dataTask(with: request) { (data, response, error) in
            handler(response, data)
        }
        sessionTask.resume()
    }
}

We start off by creating a type alias named DataFromURLCompletionClosure, which will be used for both the GET and POST examples of this article. (If you are not familiar with using a typealias to define a closure type, refer back to the discussion of closures earlier in the book.) We then create a function named sendGetRequest(), which will be used to make the GET request to Apple's iTunes API.
This function accepts one argument named handler, which is a closure that conforms to the DataFromURLCompletionClosure type. The handler closure will be used to return the results of the request.

Starting with Swift 3, closure arguments to functions are non-escaping by default, which means that, by default, a closure argument cannot escape the function body. A closure is considered to escape a function when that closure, passed as an argument to the function, is called after the function returns. Since our closure will be called after the function returns, we use the @escaping attribute before the parameter type to indicate that it is allowed to escape.

Within our sendGetRequest() method, we begin by creating an instance of the URLSessionConfiguration class using the default settings. If we need to, we can modify the session configuration properties after we create it, but in this example, the default configuration is what we want.

After we create our session configuration, we create the URL string. This is the URL of the service we are connecting to. With a GET request, we put our parameters in the URL itself. In this specific example, https://itunes.apple.com/search is the URL of the web service. We then follow the web service URL with a question mark (?), which indicates that the rest of the URL string consists of parameters for the web service. The parameters take the form of key/value pairs, where each key is separated from its value by an equals sign (=). In our example, the key is term and the value is jimmy+buffett.

Next, we run the URL string that we just created through the addingPercentEncoding() method to make sure our URL string is encoded properly. We use the CharacterSet.urlQueryAllowed character set with this method to ensure we have a valid URL string.

Next, we use the URL string that we just built to create a URL instance named url. Since we are making a GET request, this URL instance represents both the location of the web service and the parameters that we are sending to it.

We create an instance of the URLRequest class using the URL instance that we just created. In this example, we set the httpMethod property; however, we can also set other properties, such as the timeout interval, or add items to our HTTP header.

Now, we use the sessionConfiguration constant that we created at the beginning of the sendGetRequest() function to create an instance of the URLSession class. The URLSession class provides the API that we will use to connect to Apple's iTunes search API. In this example, we use the dataTask(with:) method of the URLSession instance to return an instance of the URLSessionDataTask type named sessionTask. The sessionTask instance is what makes the request to the iTunes search API. When we receive the response from the service, we use the handler callback to return both the URLResponse object and the Data object. The URLResponse contains information about the response, and the Data instance contains the body of the response.

Finally, we call the resume() method of the URLSessionDataTask instance to make the request to the web service. Remember, as we mentioned earlier, a URLSessionTask instance will not send the request to the service until we call the resume() method.
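Since the handler receives an optional URLResponse, we can downcast it to HTTPURLResponse inside the closure to inspect HTTP-specific metadata, such as the status code. The following is a small, hedged sketch of such a handler; it is not part of the original example, just an illustration of the downcast:

let statusCheckClosure: DataFromURLCompletionClosure = { response, _ in
    // HTTP-specific metadata lives on HTTPURLResponse, so we downcast
    if let httpResponse = response as? HTTPURLResponse {
        print("Status code: \(httpResponse.statusCode)")
        // Header fields are exposed as a dictionary
        if let contentType = httpResponse.allHeaderFields["Content-Type"] {
            print("Content-Type: \(contentType)")
        }
    } else {
        print("Not an HTTP response")
    }
}

A status code of 200 generally indicates success, while codes in the 4xx and 5xx ranges indicate client and server errors respectively. Now, let's look at how we would call the sendGetRequest() function.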
The first thing we need to do is to create a closure that will be passed to the sendGetRequest() function and called when the response from the web service is received. In this example, we will simply print the response to the console. Since the response is in the JSON format, we could use the JSONSerialization class, which we will see later in this article, to parse the response; however, since this section is on networking, we will simply print the response to the console. Here is the code:

let printResultsClosure: DataFromURLCompletionClosure = {
    if let data = $1 {
        let sString = String(data: data, encoding: .utf8)
        print(sString ?? "Unable to decode response")
    } else {
        print("Data is nil")
    }
}

We define this closure, named printResultsClosure, to be an instance of the DataFromURLCompletionClosure type. Within the closure, we unwrap the second parameter ($1) and assign its value to a constant named data. If the parameter is not nil, we convert the data constant to an instance of the String type, which is then printed to the console; since String(data:encoding:) returns an optional, we supply a fallback message with the nil-coalescing operator. Now, let's call the sendGetRequest() method with the following code:

let aConnect = HttpConnect()
aConnect.sendGetRequest(printResultsClosure)

This code creates an instance of the HttpConnect class and then calls the sendGetRequest() method, passing the printResultsClosure closure as its only parameter. If we run this code while we are connected to the Internet, we will receive a JSON response that contains a list of items related to Jimmy Buffett on iTunes.

Now that we have seen how to make a simple HTTP GET request, let's look at how we would make an HTTP POST request to a web service.

Making an HTTP POST request

Since Apple's iTunes APIs use GET requests to retrieve data, in this section we will use the free http://httpbin.org service to show you how to make a POST request. The POST service that http://httpbin.org provides can be found at http://httpbin.org/post. This service echoes back the parameters that it receives so that we can verify that our request was made properly.

When we make a POST request, we generally have some data that we want to send or post to the server. This data takes the form of key/value pairs. The pairs are separated by an ampersand (&) symbol, and each key is separated from its value by an equals sign (=). As an example, let's say that we want to submit the following data to our service:

firstname: Jon
lastname: Hoffman
age: 47 years

The body of the POST request would take the following format:

firstname=Jon&lastname=Hoffman&age=47

Once we have the data in the proper format, we will use the data(using:) method to properly encode the POST body, as shown in the sendPostRequest() function further down. Since the data going to the server is in the key/value format, the most appropriate way to store this data, prior to sending it to the service, is with a Dictionary object. With this in mind, we need to create a method that takes a Dictionary object and returns a String object that can be used for the POST request. The following code does that:

private func dictionaryToQueryString(_ dict: [String : String]) -> String {
    var parts = [String]()
    for (key, value) in dict {
        let part: String = key + "=" + value
        parts.append(part)
    }
    return parts.joined(separator: "&")
}

This function loops through each key/value pair of the Dictionary object and creates a String object that contains the key and the value separated by an equals sign (=).
We then use the joined(separator:) function to join each item in the array, separated by the specified string. In our case, we want to separate each part with the ampersand symbol (&). We then return this newly created string to the code that called it.

Now, let's create the sendPostRequest() function that will send the POST request to the http://httpbin.org post service. We will see a lot of similarities between this sendPostRequest() function and the sendGetRequest() function that we showed you in the Making an HTTP GET request section. Let's take a look at the following code:

public func sendPostRequest(
    _ handler: @escaping DataFromURLCompletionClosure) {

    let sessionConfiguration = URLSessionConfiguration.default

    let urlString = "http://httpbin.org/post"
    if let encodeString = urlString.addingPercentEncoding(
           withAllowedCharacters: CharacterSet.urlQueryAllowed),
       let url = URL(string: encodeString) {

        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        let params = dictionaryToQueryString(["One": "1 and 1", "Two": "2 and 2"])
        request.httpBody = params.data(
            using: String.Encoding.utf8, allowLossyConversion: true)

        let urlSession = URLSession(
            configuration: sessionConfiguration,
            delegate: nil,
            delegateQueue: nil)

        let sessionTask = urlSession.dataTask(with: request) {
            (data, response, error) in
            handler(response, data)
        }
        sessionTask.resume()
    }
}

This code is very similar to the sendGetRequest() function that we saw earlier in this section. The two main differences are that the httpMethod of the URLRequest is set to POST rather than GET, and how we set the parameters: in this function, we set the httpBody property of the URLRequest instance to the parameters we are submitting.

Now that we have seen how to use the URL Loading System, let's look at how we can use Formatters.

Formatter

Formatter is an abstract class that declares an interface for an object that creates, converts, or validates a human-readable form of some data. The types that subclass the Formatter class are generally used when we want to take a particular object, like an instance of the Date class, and present its value in a form that the user of our application can understand. Apple has provided several concrete implementations of the Formatter class, and in this section we will look at two of them. It is important to remember that the formatters Apple provides use the proper format for the default locale of the device the application is running on.

We will start off by looking at the DateFormatter type.

DateFormatter

The DateFormatter class is a subclass of the Formatter abstract class that can be used to convert a Date object into a human-readable string. It can also be used to convert a String representation of a date into a Date object. We will look at both of these use cases in this section.

Let's begin by seeing how we can convert a Date object into a human-readable string. The DateFormatter type has five predefined styles that we can use when we are converting a Date object to a human-readable string. The following chart shows what the five styles look like for an en-US locale.
DateFormatter Style   Date                        Time
.none                 No Format                   No Format
.short                12/25/16                    6:00 AM
.medium               Dec 25, 2016                6:00:00 AM
.long                 December 25, 2016           6:00:00 AM EST
.full                 Sunday, December 25, 2016   6:00:00 AM Eastern Standard Time

The following code shows how we would use the predefined DateFormatter styles:

let now = Date()
let formatter = DateFormatter()
formatter.dateStyle = .short
formatter.timeStyle = .medium
let dateStr = formatter.string(from: now)

We use the string(from:) method to convert the now date to a human-readable string. In this example, for the en-US locale, the dateStr constant would contain text similar to 8/19/16, 6:40:02 PM.

There are numerous times when the predefined styles do not meet our needs. For those times, we can define our own styles using a custom format string. This string is a series of characters that the DateFormatter type knows are stand-ins for the values we want to show. The DateFormatter instance will replace these stand-ins with the appropriate values. The following table shows some of the formatting values that we can use to format our Date objects:

Stand-in   Format Description                  Example output
yy         Two-digit year                      16, 14, 04
yyyy       Four-digit year                     2016, 2014, 2004
MM         Two-digit month                     06, 08, 12
MMM        Three-letter month                  Jul, Aug, Dec
MMMM       Full month name                     July, August
dd         Two-digit day                       10, 11, 30
EEE        Three-letter day                    Mon, Sat, Sun
EEEE       Full day                            Monday, Sunday
a          Period of day                       AM, PM
hh         Two-digit hour                      02, 03, 04
HH         Two-digit hour, 24-hour clock       11, 14, 16
mm         Two-digit minute                    30, 35, 45
ss         Two-digit second                    30, 35, 45

The following code shows how we would use these custom formatters:

let now = Date()
let formatter2 = DateFormatter()
formatter2.dateFormat = "yyyy-MM-dd HH:mm:ss"
let dateStr2 = formatter2.string(from: now)

Note that we use the lowercase yyyy stand-in for the calendar year; the uppercase YYYY stand-in represents the week-based year, which can produce surprising results around the new year. We use the string(from:) method to convert the now date to a human-readable string. In this example, for the en-US locale, the dateStr2 constant would contain text similar to 2016-08-19 19:03:23.

Now let's look at how we would take a formatted date string and convert it into a Date object. We will use the same format string that we used in our last example:

formatter2.dateFormat = "yyyy-MM-dd HH:mm:ss"
let dateStr3 = "2016-08-19 16:32:02"
let date = formatter2.date(from: dateStr3)

In this example, we took the human-readable date string and converted it to a Date object using the date(from:) method. If the format of the human-readable date string does not match the format specified in the DateFormatter instance, the conversion will fail and return nil.

Now let's take a look at the NumberFormatter type.

NumberFormatter

The NumberFormatter class is a subclass of the Formatter abstract class that can be used to convert a number into a human-readable string with a specified format. This formatter is especially useful when we want to display a currency string, since it will convert the number to the proper currency for the proper locale. Let's begin by looking at how we would convert a number into a currency string:

let formatter1 = NumberFormatter()
formatter1.numberStyle = .currency
let num1 = formatter1.string(from: 23.99)

In the previous code, we define our number style to be .currency, which tells our formatter that we want to convert our number to a currency string. We then use the string(from:) method to convert the number to a string. In this example, for the en-US locale, the num1 constant would contain the string "$23.99".
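Because NumberFormatter honors the device locale, the same value renders differently for different users. Here is a small, hedged sketch showing how we could pin explicit locales for demonstration purposes (the locale identifiers are just examples):

let eurFormatter = NumberFormatter()
eurFormatter.numberStyle = .currency
eurFormatter.locale = Locale(identifier: "de_DE")
print(eurFormatter.string(from: 23.99) ?? "") // prints 23,99 €

let usdFormatter = NumberFormatter()
usdFormatter.numberStyle = .currency
usdFormatter.locale = Locale(identifier: "en_US")
print(usdFormatter.string(from: 23.99) ?? "") // prints $23.99

Now let's see how we would round a number using the NumberFormatter type.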
The following code will round the number to two decimal places:

let formatter2 = NumberFormatter()
formatter2.numberStyle = .decimal
formatter2.maximumFractionDigits = 2
let num2 = formatter2.string(from: 23.2357)

In this example, we set the numberStyle property to the .decimal style, and we also define the maximum number of decimal digits to be two using the maximumFractionDigits property. We use the string(from:) method to convert the number to a string. In this example, the num2 constant will contain the string "23.24".

Now let's see how we can spell out a number using the NumberFormatter type. The following code takes a number and spells it out:

let formatter3 = NumberFormatter()
formatter3.numberStyle = .spellOut
let num3 = formatter3.string(from: 2015)

In this example, we set the numberStyle property to the .spellOut style, which spells out the number. For this example, the num3 constant will contain the string "two thousand fifteen".

Now let's look at how we can manipulate the file system using the FileManager class.

FileManager

The file system is a complex topic, and how we manipulate it within our applications is generally specific to the operating system that our application is running on. This can be an issue when we are trying to port code from one operating system to another. Apple has addressed this issue by putting the FileManager object in the core libraries. The FileManager object lets us examine and make changes to the file system in a uniform manner across all operating systems that Swift supports.

The FileManager class provides us with a shared instance that we can use. This instance should be suitable for most of our file system related tasks. We can access this shared instance using the default property. When we use the FileManager object, we can provide paths as either instances of the URL or String types. In this section, all of our paths will be String types for consistency.

Let's start off by seeing how we can list the contents of a directory using the FileManager object. The following code shows how to do this:

let fileManager = FileManager.default
do {
    let path = "/Users/jonhoffman/"
    let dirContents = try fileManager.contentsOfDirectory(atPath: path)
    for item in dirContents {
        print(item)
    }
} catch let error {
    print("Failed reading contents of directory: \(error)")
}

We start off by getting the shared instance of the FileManager object using the default property. We will use this same shared instance for all of the examples in this section rather than redefining it for each example.

Next, we define our path and use it with the contentsOfDirectory(atPath:) method. This method returns an array of String types that contains the names of the items in the path. We use a for loop to list these items.

Next, let's look at how we would create a directory using the file manager. The following code shows how to do this:

do {
    let path = "/Users/jonhoffman/masteringswift/test/dir"
    try fileManager.createDirectory(atPath: path,
        withIntermediateDirectories: true)
} catch let error {
    print("Failed creating directory, \(error)")
}

In this example, we use the createDirectory(atPath: withIntermediateDirectories:) method to create the directory. When we set the withIntermediateDirectories parameter to true, this method will create any parent directories that are missing. When this parameter is set to false, the operation will fail if any parent directories are missing.
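Before copying, moving, or deleting items, it is often useful to check whether a path exists at all and whether it points to a file or a directory. The following is a small, hedged sketch using the same shared FileManager instance as above; the path is hypothetical:

var isDir: ObjCBool = false
let checkPath = "/Users/jonhoffman/masteringswift" // hypothetical path
if fileManager.fileExists(atPath: checkPath, isDirectory: &isDir) {
    // isDir is set by the call to indicate whether the path is a directory
    print(isDir.boolValue ? "Path is a directory" : "Path is a file")
} else {
    print("Nothing exists at \(checkPath)")
}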
Now let's look at how we would copy an item from one location to another:

do {
    let pathOrig = "/Users/jonhoffman/masteringswift/"
    let pathNew = "/Users/jonhoffman/masteringswift2/"
    try fileManager.copyItem(atPath: pathOrig, toPath: pathNew)
} catch let error {
    print("Failed copying directory, \(error)")
}

In this example, we use the copyItem(atPath: toPath:) method to copy one item to another location. This method can be used to copy either directories or files. If it is used for a directory, the entire path structure below the directory specified by the path is copied.

The file manager also has a method that lets us move an item rather than copying it. Let's see how to do this:

do {
    let pathOrig = "/Users/jonhoffman/masteringswift2/"
    let pathNew = "/Users/jonhoffman/masteringswift3/"
    try fileManager.moveItem(atPath: pathOrig, toPath: pathNew)
} catch let error {
    print("Failed moving directory, \(error)")
}

To move an item, we use the moveItem(atPath: toPath:) method. Just like the copy example we just saw, the move method can be used for either files or directories. If the path specifies a directory, the entire directory structure below that path will be moved.

Now let's see how we can delete an item from the file system:

do {
    let path = "/Users/jonhoffman/masteringswift/"
    try fileManager.removeItem(atPath: path)
} catch let error {
    print("Failed removing directory, \(error)")
}

In this example, we use the removeItem(atPath:) method to remove the item from the file system. A word of warning: once you delete something, it is gone, and there is no getting it back.

Next, let's look at how we can read permissions for items in our file system. For this, we will need to create a file named test.file, which our path will point to:

let path = "/Users/jonhoffman/masteringswift3/test.file"
if fileManager.fileExists(atPath: path) {
    let isReadable = fileManager.isReadableFile(atPath: path)
    let isWriteable = fileManager.isWritableFile(atPath: path)
    let isExecutable = fileManager.isExecutableFile(atPath: path)
    let isDeleteable = fileManager.isDeletableFile(atPath: path)
    print("can read \(isReadable)")
    print("can write \(isWriteable)")
    print("can execute \(isExecutable)")
    print("can delete \(isDeleteable)")
}

In this example, we use four different methods to read the file system permissions for the file system item. These methods are:

isReadableFile(atPath:): true if the file is readable
isWritableFile(atPath:): true if the file is writable
isExecutableFile(atPath:): true if the file is executable
isDeletableFile(atPath:): true if the file is deletable

If we want to read or write text files, we can use methods provided by the String type rather than the FileManager type. Even though these examples do not use the FileManager class, we wanted to show how to read and write text files here. Let's see how we would write some text to a file:

let filePath = "/Users/jonhoffman/Documents/test.file"
let outString = "Write this text to the file"
do {
    try outString.write(toFile: filePath, atomically: true,
        encoding: .utf8)
} catch let error {
    print("Failed writing to path: \(error)")
}

In this example, we start off by defining our path as a String type, just as we did in the previous examples. We then create another instance of the String type that contains the text we want to put in our file. To write the text to the file, we use the write(toFile: atomically: encoding:) method of the String instance. This method will create the file if needed and write the contents of the String instance to the file.
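If the payload is binary rather than text, the same pattern works with the Data type. Here is a minimal, hedged sketch; the file path and byte values are hypothetical:

let binaryPath = "/Users/jonhoffman/Documents/test.bin" // hypothetical path
let payload = Data([0x00, 0x01, 0x02]) // arbitrary example bytes
do {
    // .atomic writes to a temporary file first, then renames it into place
    try payload.write(to: URL(fileURLWithPath: binaryPath), options: .atomic)
} catch let error {
    print("Failed writing data: \(error)")
}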
It is just as easy to read a text file using the String type. The following example shows how to do this:

let filePath = "/Users/jonhoffman/Documents/test.file"
var inString = ""
do {
    inString = try String(contentsOfFile: filePath)
} catch let error {
    print("Failed reading from path: \(error)")
}
print("Text Read: \(inString)")

In this example, we use the contentsOfFile: initializer to create an instance of the String type that contains the contents of the file specified by the file path.

Now that we have seen how to use the FileManager type, let's look at how we would serialize and deserialize JSON documents using the JSONSerialization type.

Summary

In this article, we looked at some of the libraries that make up the Swift Core Libraries. While these libraries are only a small portion of the Swift Core Libraries, they are arguably some of the most useful. To explore the other libraries, you can refer to Apple's GitHub page: https://github.com/apple/swift-corelibs-foundation.

Resources for Article:

Further resources on this subject:
Flappy Swift [article]
Network Development with Swift [article]
Your First Swift 2 Project [article]
An unpatched vulnerability in NSA’s Ghidra allows a remote attacker to compromise exposed systems
Savia Lobo
01 Oct 2019
3 min read
On September 28, the National Security Agency revealed a vulnerability in Ghidra, its free, open-source software reverse-engineering tool. The NSA released the Ghidra toolkit at the RSA security conference in San Francisco on March 6 this year.

The vulnerability, tracked as CVE-2019-16941, allows a remote attacker to compromise exposed systems, according to a NIST National Vulnerability Database description. The vulnerability is rated as medium severity and currently does not have a fix available.

The NSA tweeted on its official account, "A flaw currently exists within Ghidra versions through 9.0.4. The conditions needed to exploit this flaw are rare and a patch is currently being worked. This flaw is not a serious issue as long as you don't accept XML files from an untrusted source."

According to the bug description, the flaw manifests itself "when [Ghidra] experimental mode is enabled." This "allows arbitrary code execution if the Read XML Files feature of Bit Patterns Explorer is used with a modified XML document," the description further reads.

Researchers add that since the feature is experimental to begin with, it is already an area where bugs and vulnerabilities are to be expected. They also contend that, despite descriptions of how the bug can be exploited, it cannot be triggered remotely, Threatpost reports.

Ghidra, a disassembler written in Java, breaks down executable files into assembly code that can then be analyzed. By deconstructing malicious code and malware, cybersecurity professionals can gain a better understanding of potential vulnerabilities in their networks and systems. The NSA used it internally for years, and recently decided to open-source it.

This is not the first bug found in Ghidra. In March, a proof-of-concept was released showing how an XML external entity (XXE) vulnerability (rated serious) can be exploited to attack Ghidra project users (version 9.0 and below). In July, researchers found an additional path-retrieval bug (CVE-2019-13623), also rated high severity. That bug, similar to CVE-2019-16941, impacts ghidra.app.plugin.core.archive and allows an attacker to achieve arbitrary code execution on vulnerable systems, Threatpost reports.

Researchers said they are unaware that this most recent bug (CVE-2019-16941) has been exploited in the wild. To know more about this news in detail, read the bug description.

A Cargo vulnerability in Rust 1.25 and prior makes it ignore the package key and download a wrong dependency
10 times ethical hackers spotted a software vulnerability and averted a crisis
A zero-day pre-auth vulnerability is currently being exploited in vBulletin, reports an anonymous researcher
W3C (World Wide Web Consortium) declares WebAssembly 1.0 as an official web standard
Sugandha Lahoti
09 Dec 2019
3 min read
Last Thursday, the World Wide Web Consortium declared WebAssembly 1.0 an official W3C Recommendation. With this announcement, WebAssembly becomes the fourth language to run natively in browsers, following HTML, CSS, and JavaScript.

"The arrival of WebAssembly expands the range of applications that can be achieved by simply using Open Web Platform technologies. In a world where machine learning and Artificial Intelligence become more and more common, it is important to enable high-performance applications on the Web, without compromising the safety of the users," declared Philippe Le Hégaret, W3C Project Lead, in the official press release.

WebAssembly has been the talk of the town for providing a safe, portable, low-level code format designed for efficient execution and compact representation. According to the W3C consortium, WebAssembly enables the Web platform to perform more efficient execution of computationally intensive algorithms, which in turn makes it practical to deliver whole new classes of user experience on the Web and elsewhere. Because it is a platform-independent execution environment, it can also be used on any other computer platform.

W3C has published three WebAssembly specifications as W3C Recommendations:

WebAssembly Core Specification: Defines a low-level virtual machine that closely mimics the functionality of the many microprocessors upon which it is run.
WebAssembly Web API: Defines a Promise-based interface for requesting and executing a .wasm resource.
WebAssembly JavaScript Interface: Provides a JavaScript API for invoking and passing parameters to WebAssembly functions.

W3C is also working on a range of features for future versions of the standard. These include:

Threading: Threads provide the benefits of shared-memory multi-threading and atomic memory accesses.
Fixed-width SIMD: Vector operations that execute loops in parallel.
Reference types: Allow WebAssembly code to directly reference host objects.
Tail calls: Enable calling functions without using extra stack space.
ECMAScript module integration: Interact with JavaScript by loading WebAssembly executables as ES6 modules.

There are many other longer-term projects that W3C is working on, many of them aimed at improving the usability and availability of WebAssembly, for example, garbage collection, debugging interfaces, and the WebAssembly System Interface (WASI).

In other news, Mozilla recently partnered with Fastly, Intel, and Red Hat to form the Bytecode Alliance, to build a secure-by-default future for WebAssembly and to take it beyond the browser.

Introducing Woz, a Progressive WebAssembly Application (PWA + WebAssembly) generator written entirely in Rust
You can now use WebAssembly from .NET with Wasmtime!
4 predictions by Richard Feldman on the future of the web: TypeScript, WebAssembly, and more
MySQL Data Transfer using SQL Server Integration Services (SSIS)
Packt Editorial Staff
12 Aug 2009
8 min read
There are a large number of posts on various difficulties experienced while transferring data from MySQL using Microsoft SQL Server Integration Services. While the transfer of data from MySQL to Microsoft SQL Server 2008 is not fraught with any blocking issues, the transfer of data from SQL Server 2008 to MySQL has presented various problems, and some workarounds have been suggested. In this article by Dr. Jay Krishnaswamy, data transfer to MySQL using SQL Server Integration Services will be described. If you are new to SQL Server Integration Services (SSIS), you may want to read the book by the same author, Beginners Guide to SQL Server Integration Services Using Visual Studio 2005, published by Packt.

Connectivity with MySQL

For data interchange with MySQL, there are two options, one of which can be accessed in the connection wizards of SQL Server Integration Services, assuming you have installed the programs. The other can be used to set up an ODBC DSN, as described further down. The two connection options are:

MySQL Connector/ODBC 5.1
Connector/Net 5.2 (newer versions 6.0 & 6.1)

In this article, we will be using the ODBC connector for MySQL, which can be downloaded from the MySQL site. The connector will be used to create an ODBC DSN.

Transferring a table from SQL Server 2008 to MySQL

We will transfer a table in the TestNorthwind database on SQL Server 2008 (Enterprise & Evaluation) to a MySQL server database. The MySQL database we are using is described in the article on Exporting data from MS Access 2003 to MySQL. In another article, MySQL Linked Server on SQL Server 2008, creating an ODBC DSN for MySQL was described. We will be using the DSN created in that article.

Creating an Integration Services project in Visual Studio 2008

Start the Visual Studio 2008 program from its shortcut. Click File | New | Project... to open the New Project window and select an Integration Services template from the Business Intelligence projects, providing a suitable name. The project folder will have a file called Package.dtsx, which can be renamed with a custom name.

Add and configure an ADO.NET Source

The project's package designer will be open, displaying the Control Flow tab. Drag and drop a Data Flow Task on to the control flow tabbed page. Next, click the Data Flow tab in the designer to display the Data Flow page. Read the instructions on this page. Drag and drop an ADO.NET Source from the Data Flow Sources items in the Toolbox.

It is assumed that you can set up a connection manager to the resident SQL Server 2008 on your machine. The next figure shows the configured connection manager to the SQL Server 2008. The table (PrincetonTemp) that will be transferred is in the TestNorthwind database. The authentication is Windows, and a .NET provider is used to access the data. You may also test the connection by clicking the Test Connection button. If the connection shown above is correctly configured, the test should indicate a successful connection.

Right click the ADO.NET source and, from the drop-down, click Edit. The ADO.NET Source Editor gets displayed. As mentioned earlier, you should be able to access the table and view objects on the database, as shown in the next figure. We have chosen to transfer a simple table, PrincetonTemp, from the TestNorthwind database on SQL Server 2008. It has only a couple of columns, as shown in the Columns page of the ADO.NET Source Editor. The default for the Error page setting has been assumed; that is, if there is an error or truncation of data, the task will fail.
Add an ADO.NET destination and port the data from the source

Drag and drop an ADO.NET Destination item from under the Data Flow Destinations items in the Toolbox on to the data flow page of the designer. There are two ways to arrange for the data to flow from source to destination. The easy way is to just drag the green dangling line from the source with your mouse and let go on the ADO.NET destination. A solid line will connect the source and the destination as shown.

(For more resources on Microsoft, see here.)

Configure a connection manager to connect to MySQL

In the Connection Managers pane under the package designer, right click to display a pop-up menu which allows you to make a new connection. When you agree to make a new ADO.NET connection, the Configure ADO.NET Connection Manager window shows up; click the New... button on this page. The connection manager's page gets displayed as shown.

In the Providers drop-down, you will see a number of providers. There are two providers that you can use: ODBC through the connector, and the MySQL Data Provider. Click on the Odbc Data Provider. As mentioned previously, we will be using the System DSN MySQL_Link, created earlier for the other article, which is shown in the drop-down list of available ODBC DSNs. Provide the user ID and password, then click the Test Connection button. If all the information is correct, you should get a success message as shown. Close out of the message as well as the Configure ADO.NET Connection Manager windows.

Right click the ADO.NET Destination to display its editor window. In the drop-down for the connection manager, you should be able to pick the connection manager you created in the previous step (MySQL_Link.root) as shown. Click on the New... button to create a Table or View. You will get a warning message about the mapping to SSIS not being known, as shown. Click OK. The create table window gets displayed as shown. Notice that the table displays all the columns that the source is sending out. If you were to click OK, you would get an error that the syntax is not correct, as shown. Modify the table as shown to change the destination table name (your choice) and the data type.

CREATE TABLE From2k8(
    "Id" INT,
    "Month" VARCHAR(10),
    "Temperature" DOUBLE PRECISION,
    "RecordHigh" DOUBLE PRECISION
)

Click OK. Again you get the same error regarding the syntax not being correct. Modify the CREATE TABLE statement further, as shown.

CREATE TABLE From2k8 (
    Id INT,
    Month VARCHAR(10),
    Temperature DOUBLE PRECISION,
    RecordHigh DOUBLE PRECISION
)

Click OK after the above modification. The table gets added to the ADO.NET Destination Editor as shown. Click on Mappings on the left side of the ADO.NET Destination Editor. The column mappings page gets displayed as shown. We accept the default settings for the Error Output page. Click OK.

Build the project and execute the package by right clicking the package and choosing Execute Package. The program runs and processes the package, and ends up being unsuccessful with the error message in the Progress tab of the project as shown (only the relevant message is shown here):

.... .....
[SSIS.Pipeline] Information: Execute phase is beginning.
[ADO NET Destination 1 [165]] Error: An exception has occurred during data insertion, the message returned from the provider is: ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.30-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"Id", "Month", "Temperature", "RecordHigh") VALUES (1, 'Jan ', 4.000000000' at line 1

[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "ADO NET Destination 1" (165) failed with error code 0xC020844B while processing input "ADO NET Destination Input" (168). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.

[SSIS.Pipeline] Information: Post Execute phase is beginning.
......
....
Task Data Flow Task failed
....

Start the MySQL server and log in to it. Run the following commands as shown in the next figure.

Setting the mode to 'ANSI' makes the syntax more standard-like, since MySQL can cater to clients using other SQL modes; this is why the above error is returned even though the syntax itself appears correct. In fact, a CREATE statement that creates a table directly on the MySQL command line returned an error when SSIS was used to create the same table.

After running the above statements, build the BI project and execute the package. This time the execution will be successful, and you can query the MySQL server as in the following:

Summary

The article described, step by step, how to transfer a table from SQL Server 2008 to MySQL using ODBC connectivity. For a successful transfer, the data type differences between SQL Server 2008 and the MySQL version in use must be properly taken into consideration, and the SQL_MODE property of the MySQL server must be set correctly.

Further resources on this subject:
Easy guide to understand WCF in Visual Studio 2008 SP1 and Visual Studio 2010 Express
MySQL Linked Server on SQL Server 2008
Displaying MySQL data on an ASP.NET Web Page
Exporting data from MS Access 2003 to MySQL
Transferring Data from MS Access 2003 to SQL Server 2008
Aquarium Monitor
Packt
27 Jan 2016
16 min read
In this article by Rodolfo Giometti, author of the book BeagleBone Home Automation Blueprints, we'll see how to build an aquarium monitor where we'll be able to record all the environmental data and control the life of our beloved fish from a web panel.

(For more resources related to this topic, see here.)

By using specific sensors, you'll learn how to monitor your aquarium with the possibility of setting alarms, logging the aquarium data (water temperature), and performing actions such as cooling the water and feeding the fish. Simply speaking, we're going to implement a simple aquarium web monitor with real-time live video, some alarms in case of malfunctioning, and simple temperature data logging that allows us to monitor the system from a standard PC as well as from a smartphone or tablet, without using any specific mobile app, just the standard on-board browser.

The basics of functioning

This aquarium monitor is a good (even if very simple) example of how a web monitoring system can be implemented, giving the reader some basic ideas about how a moderately complex system works and how we can interact with it in order to modify some system settings, display some alarms in case of malfunctioning, and plot logged data on a PC, smartphone, or tablet.

We have a periodic task that collects the data and then decides what to do. However, this time, we have a user interface (the web panel) to manage, and a video stream to be redirected into a web page. Note also that in this project, we need an additional power supply in order to power and manage 12V devices (like a water pump, a lamp, and a cooler) with the BeagleBone Black, which is powered at 5V instead.

Note that I'm not going to test this prototype on a real aquarium (since I don't have one), but by using a normal tea cup filled with water! So you should consider this project for educational purposes only, even if, with some enhancements, it could be used on a real aquarium too!

Setting up the hardware

About the hardware, there are at least two major issues to be pointed out. First of all, the power supply: we have two different voltages to handle, due to the fact that the water pump, the lamp, and the cooler are 12V powered, while the other devices are 5V/3.3V powered. So, we have to use a dual-output power source (or two different power sources) to power up our prototype. The second issue is about using proper interface circuitry between the 12V world and the 5V one, so as not to damage the BeagleBone Black or other devices. Let me remark that a single GPIO of the BeagleBone Black can manage a voltage of 3.3V, so we need proper circuitry to manage a 12V device.

Setting up the 12V devices

As just stated, these devices need special attention and a dedicated 12V power line which, of course, cannot be the one we wish to use to supply the BeagleBone Black. On my prototype, I used a 12V power supply that can deliver a current of up to 1A. These characteristics should be enough to manage a single water pump, a lamp, and a cooler.

Once you have a proper power supply, we can move on to the circuitry used to manage the 12V devices. Since all of them are simple on/off devices, we can use relays to control them.
I used the device shown in the following image, where we have 8 relays:

The devices can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/5v-relays-array

Then, the schematic to connect a single 12V device is shown in the following diagram:

Simply speaking, for each device, we can turn the power supply on and off simply by toggling a specific GPIO of our BeagleBone Black. Note that each relay of the array board can be driven in direct or inverse logic by simply choosing the right connections, as reported on the board itself; that is, we can decide that putting the GPIO into a logic 0 state activates the relay (turning on the attached device), while putting the GPIO into a logic 1 state deactivates the relay (turning off the attached device). By using the following logic, when the LED of a relay is turned on, the corresponding device is powered on.

The BeagleBone Black's GPIOs and the pins of the relays array I used with the 12V devices are reported in the following table:

Pin              Relays Array pin   12V Device
P8.10 - GPIO66   3                  Lamp
P8.9 - GPIO69    2                  Cooler
P8.12 - GPIO68   1                  Pump
P9.1 - GND       GND
P9.5 - 5V        Vcc

To test the functionality of each GPIO line, we can use the following command to set up the GPIO as an output line in the high state:

root@arm:~# ./bin/gpio_set.sh 68 out 1

Note that the off state of the relay is 1, while the on state is 0. Then, we can turn the relay on and off by just writing 0 and 1 into the /sys/class/gpio/gpio68/value file, as follows:

root@arm:~# echo 0 > /sys/class/gpio/gpio68/value
root@arm:~# echo 1 > /sys/class/gpio/gpio68/value

Setting up the webcam

The webcam I'm using in my prototype is a normal UVC-based webcam, but you can safely use another one which is supported by the mjpg-streamer tool. See the mjpg-streamer project's home site for further information at http://sourceforge.net/projects/mjpg-streamer/.

Once connected to the BeagleBone Black USB host port, I get the following kernel activities:

usb 1-1: New USB device found, idVendor=045e, idProduct=0766
usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-1: Product: Microsoft LifeCam VX-800
usb 1-1: Manufacturer: Microsoft
...
uvcvideo 1-1:1.0: usb_probe_interface
uvcvideo 1-1:1.0: usb_probe_interface - got id
uvcvideo: Found UVC 1.00 device Microsoft LifeCam VX-800 (045e:0766)

Now, a new driver called uvcvideo is loaded into the kernel:

root@beaglebone:~# lsmod
Module Size Used by
snd_usb_audio 95766 0
snd_hwdep 4818 1 snd_usb_audio
snd_usbmidi_lib 14457 1 snd_usb_audio
uvcvideo 53354 0
videobuf2_vmalloc 2418 1 uvcvideo
...

OK, now, to have a streaming server, we need to download the mjpg-streamer source code and compile it.
We can do everything within the BeagleBone Black itself with the following command:

root@beaglebone:~# svn checkout svn://svn.code.sf.net/p/mjpg-streamer/code/ mjpg-streamer-code

The svn command is part of the subversion package and can be installed by using the following command:

root@beaglebone:~# aptitude install subversion

After the download is finished, we can compile and install the code by using the following command line:

root@beaglebone:~# cd mjpg-streamer-code/mjpg-streamer/ && make && make install

If no errors are reported, you should now be able to execute the new command as follows, where we ask for the help message:

root@beaglebone:~# mjpg_streamer --help
-----------------------------------------------------------------------
Usage: mjpg_streamer
  -i | --input "<input-plugin.so> [parameters]"
  -o | --output "<output-plugin.so> [parameters]"
  [-h | --help ]........: display this help
  [-v | --version ].....: display version information
  [-b | --background]...: fork to the background, daemon mode
...

If you get an error like the following:

make[1]: Entering directory `/root/mjpg-streamer-code/mjpg-streamer/plugins/input_testpicture'
convert pictures/960x720_1.jpg -resize 640x480! pictures/640x480_1.jpg
/bin/sh: 1: convert: not found
make[1]: *** [pictures/640x480_1.jpg] Error 127

...it means that your system is missing the convert tool. You can install it by using the usual aptitude command:

root@beaglebone:~# aptitude install imagemagick

OK, now we are ready to test the webcam. Just run the following command line and then point a web browser to the address http://192.168.32.46:8080/?action=stream (where you should replace my IP address 192.168.32.46 with your BeagleBone Black's one) in order to get the live video from your webcam:

root@beaglebone:~# LD_LIBRARY_PATH=/usr/local/lib/ mjpg_streamer -i "input_uvc.so -y -f 10 -r QVGA" -o "output_http.so -w /var/www/"

Note that you can use the USB ethernet address 192.168.7.2 too if you're not using the BeagleBone Black's Ethernet port.

If everything works well, you should get something as shown in the following screenshot:

If you get an error as follows:

bind: Address already in use

...it means that some other process is holding the 8080 port, and most probably it's occupied by the Bone101 service. To disable it, you can use the following commands:

root@BeagleBone:~# systemctl stop bonescript.socket
root@BeagleBone:~# systemctl disable bonescript.socket
rm '/etc/systemd/system/sockets.target.wants/bonescript.socket'

Or, you can simply use another port, maybe port 8090, with the following command line:

root@beaglebone:~# LD_LIBRARY_PATH=/usr/local/lib/ mjpg_streamer -i "input_uvc.so -y -f 10 -r QVGA" -o "output_http.so -p 8090 -w /var/www/"

Connecting the temperature sensor

The temperature sensor used in my prototype is the one shown in the following screenshot:
The electrical connection must be done according to the following table, keeping in mind that the sensor has three colored connection cables:

Pin                 Cable color
P9.4 - Vcc          Red
P8.11 - GPIO1_13    White
P9.2 - GND          Black

Interested readers can follow this URL for more information about how 1-Wire works: http://en.wikipedia.org/wiki/1-Wire

Keep in mind that, since our 1-wire controller is implemented in software, we have to add a pull-up resistor of 4.7 kΩ between the red and white cables in order to make it work!

Once all connections are in place, we can enable the 1-wire controller on the P8.11 pin of the BeagleBone Black's expansion connector. The following snippet shows the relevant code, where we enable the w1-gpio driver and assign the proper GPIO to it:

fragment@1 {
    target = <&ocp>;
    __overlay__ {
        #address-cells = <1>;
        #size-cells = <0>;
        status = "okay";

        /* Setup the pins */
        pinctrl-names = "default";
        pinctrl-0 = <&bb_w1_pins>;

        /* Define the new one-wire master as based on w1-gpio
         * and using GPIO1_13 */
        onewire@0 {
            compatible = "w1-gpio";
            gpios = <&gpio2 13 0>;
        };
    };
};

To enable it, we must use the dtc program to compile it as follows:

root@beaglebone:~# dtc -O dtb -o /lib/firmware/BB-W1-GPIO-00A0.dtbo -b 0 -@ BB-W1-GPIO-00A0.dts

Then, we have to load it into the kernel with the following command:

root@beaglebone:~# echo BB-W1-GPIO > /sys/devices/bone_capemgr.9/slots

If everything works well, we should see a new 1-wire device under the /sys/bus/w1/devices/ directory, as follows:

root@beaglebone:~# ls /sys/bus/w1/devices/
28-000004b541e9 w1_bus_master1

The new temperature sensor is represented by the directory named 28-000004b541e9. To read the current temperature, we can use the cat command on the w1_slave file as follows:

root@beaglebone:~# cat /sys/bus/w1/devices/28-000004b541e9/w1_slave
d8 01 00 04 1f ff 08 10 1c : crc=1c YES
d8 01 00 04 1f ff 08 10 1c t=29500

Note that your sensor will have a different ID, so on your system you'll get a different path name of the form /sys/bus/w1/devices/28-NNNNNNNNNNNN/w1_slave.

In the preceding example, the current temperature is t=29500, which is expressed in millidegrees Celsius (m°C), so it's equivalent to 29.5 °C.

The reader can take a look at the book BeagleBone Essentials, published by Packt Publishing and written by the author of this book, to get more information regarding the management of 1-wire devices on the BeagleBone Black.

Connecting the feeder

The fish feeder is a device that can release some feed by moving a motor. Its functioning is represented in the following diagram:

In the closed position, the motor is in the horizontal position, so the feed cannot fall down, while in the open position, the motor is in the vertical position, so the feed can fall down.

I have no real fish feeder, but looking at the above functioning, we can simulate it by using the servo motor shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/nano-servo-motor. The datasheet of this device is available at http://hitecrcd.com/files/Servomanual.pdf.

This device can be controlled in position, and it can rotate by 90 degrees with a proper PWM input signal. In fact, reading the datasheet, we discover that the servo can be managed by using a periodic square waveform with a period (T) of 20 ms and a high state time (t_high) between 0.9 ms and 2.1 ms, with 1.5 ms as (more or less) the center.
So, we can consider the motor in the open position when t_high = 1 ms and in the closed position when t_high = 2 ms (of course, these values should be carefully calibrated once the feeder is really built up!)

Let's connect the servo as described by the following table:

Pin            Cable color
P9.3 - Vcc     Red
P9.22 - PWM    Yellow
P9.1 - GND     Black

Interested readers can find more details about PWM at https://en.wikipedia.org/wiki/Pulse-width_modulation.

To test the connections, we have to enable one PWM generator of the BeagleBone Black. To respect the preceding connections, we need the one which has its output line on pin P9.22 of the expansion connectors. To do so, we can use the following commands:

root@beaglebone:~# echo am33xx_pwm > /sys/devices/bone_capemgr.9/slots
root@beaglebone:~# echo bone_pwm_P9_22 > /sys/devices/bone_capemgr.9/slots

Then, in the /sys/devices/ocp.3 directory, we should find a new entry related to the newly enabled PWM device, as follows:

root@beaglebone:~# ls -d /sys/devices/ocp.3/pwm_*
/sys/devices/ocp.3/pwm_test_P9_22.12

Looking at the /sys/devices/ocp.3/pwm_test_P9_22.12 directory, we see the files we can use to manage our new PWM device:

root@beaglebone:~# ls /sys/devices/ocp.3/pwm_test_P9_22.12/
driver duty modalias period polarity power run subsystem uevent

As we can deduce from the preceding file names, we have to properly set up the values in the files named polarity, period, and duty. For instance, the center position of the servo can be achieved by using the following commands:

root@beaglebone:~# echo 0 > /sys/devices/ocp.3/pwm_test_P9_22.12/polarity
root@beaglebone:~# echo 20000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/period
root@beaglebone:~# echo 1500000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

The polarity is set to 0 to invert it, while the values written in the other files are time values expressed in nanoseconds: a period of 20 ms and a duty cycle of 1.5 ms, as requested by the datasheet.

Now, to move the gear totally clockwise, we can use the following command:

root@beaglebone:~# echo 2100000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

On the other hand, the following command moves it totally anticlockwise:

root@beaglebone:~# echo 900000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

So, by using the following command sequence, we can open and then close (with a delay of 1 second) the gate of the feeder:

echo 1000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty
sleep 1
echo 2000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

Note that by simply modifying the delay, we can control how much feed falls down when the feeder is activated.

The water sensor

The water sensor I used is shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/water_sensor

This is a really simple device that implements what is shown in the following screenshot, where the resistor (R) has been added to limit the current when the water closes the circuit:

When a single drop of water touches two or more teeth of the comb in the schematic, the circuit is closed and the output voltage (Vout) drops from Vcc to 0V. So, if we wish to check the water level in our aquarium, that is, if we wish to check for a water leakage, we can place the aquarium in a sort of saucer and put this device into it: if a leak occurs, the water is collected by the saucer, and the output voltage from the sensor moves from Vcc to GND.
The connections used for this device are shown in the following table:

Pin              Cable color
P9.3 - 3.3V      Red
P8.16 - GPIO67   Yellow
P9.1 - GND       Black

To test the connections, we have to define GPIO67 as an input line with the following command:

root@beaglebone:~# ../bin/gpio_set.sh 67 in

Then, we can try to read the GPIO status while the sensor is in the water and when it is not, by using the following two commands:

root@beaglebone:~# cat /sys/class/gpio/gpio67/value
0
root@beaglebone:~# cat /sys/class/gpio/gpio67/value
1

The final picture

The following screenshot shows the prototype I built to implement this project and to test the software. As you can see, the aquarium has been replaced by a cup of water!

Note that we have two external power supplies: the usual one at 5V for the BeagleBone Black, and the other one with an output voltage of 12V for the other devices (you can see its connector in the upper right corner, to the right of the webcam).

Summary

In this article, we've discovered how to interface our BeagleBone Black to several devices with different power supply voltages, and how to manage a 1-wire device and a PWM one.

Resources for Article:

Further resources on this subject:
Building robots that can walk [article]
Getting Your Own Video and Feeds [article]
Home Security by BeagleBone [article]
Docker 19.03 introduces an experimental rootless Docker mode that helps mitigate vulnerabilities by hardening the Docker daemon
Savia Lobo
29 Jul 2019
3 min read
Tõnis Tiigi, a software engineer at Docker and a maintainer of the Moby/Docker engine, explained in a recent post on Medium how, with the Docker 19.03 release, users can now run the Docker daemon as a non-root user. He explains that the Docker engine provides functionality that is often tightly coupled to the Linux kernel. For instance, creating namespaces in Linux requires privileged capabilities, because a core component of container isolation is based on Linux namespaces. "Historically Docker daemon has always needed to be started by the root user", Tiigi explains.

Docker is looking forward to changing this notion by introducing rootless support. With the help of Moby and BuildKit maintainer Akihiro Suda, "we added rootless support to BuildKit image builder in 2018 and from February 2019 the same rootless support was merged to Moby upstream and is available for all Docker users to try in experimental mode", Tiigi mentions. The rootless mode will help reduce the security footprint of the daemon and expose Docker capabilities to systems where users cannot gain root privileges.

Rootless Docker and its benefits

As the name suggests, rootless mode in Docker allows a user to run the Docker daemon, including the containers, as a non-root user on the host. The benefit of this is that even if the daemon gets compromised, the attacker will not be able to gain root access to the host.

Akihiro Suda, in his presentation "Hardening Docker daemon with Rootless mode", explains that rootless mode does not entirely fix vulnerabilities and misconfigurations, but it can mitigate attacks. With rootless mode, an attacker would not be able to access files owned by other users, modify firmware and kernel with undetectable malware, or perform ARP spoofing.

A few caveats to the rootless Docker mode

Docker engineers say the rootless mode cannot be considered a replacement for the complete suite of Docker engine features. Some limitations of the rootless mode include:

cgroups resource controls, apparmor security profiles, checkpoint/restore, overlay networks, etc. do not work in rootless mode.
Exposing ports from containers currently requires a manual socat helper process.
Only Ubuntu-based distros support overlay filesystems in rootless mode.
Rootless mode is currently only provided for nightly builds that may not be as stable as you are used to.

As a lot of Linux features that Docker needs require privileged capabilities, the rootless mode takes advantage of user namespaces. "User namespaces map a range of user ID-s so that the root user in the inner namespace maps to an unprivileged range in the parent namespace. A fresh process in user namespace also picks up a full set of process capabilities", Tiigi explains.

Source: Medium

In the recent release of the experimental rootless mode on GitHub, the engineers mention that rootless mode allows running dockerd as an unprivileged user, using user_namespaces(7), mount_namespaces(7), and network_namespaces(7). Users need to run dockerd-rootless.sh instead of dockerd:

$ dockerd-rootless.sh --experimental

As rootless mode is experimental, users always need to run dockerd-rootless.sh with --experimental.

To know more about the Docker rootless mode in detail, read Tõnis Tiigi's Medium post. You can also check out Akihiro Suda's presentation Hardening Docker daemon with Rootless mode.

Docker announces collaboration with Microsoft's .NET at DockerCon 2019
Announcing Docker Enterprise 3.0 Public Beta!
Are Debian and Docker slowly losing popularity?