How-To Tutorials


Unity 3.x Scripting-Character Controller versus Rigidbody

Packt
22 Jun 2012
16 min read
Creating a controllable character

There are two ways to create a controllable character in Unity: with the Character Controller component or with a physical Rigidbody. Both have their pros and cons, and the choice between them is usually driven by the needs of the project. For instance, if we want to create a basic role-playing game where a character is expected to walk, fight, run, and interact with treasure chests, we would recommend the Character Controller component. The character will not be affected by physical forces, and the Character Controller component lets it walk up slopes and stairs without any extra code. Sounds amazing, doesn't it? There is one caveat: the Character Controller component becomes useless if we decide to make our character non-humanoid. If our character is a dragon, a spaceship, a ball, or a piece of gum, the Character Controller won't know what to do with it; it is not programmed for those entities and their behavior. So, if we want our character to swing across a pit with his whip and dodge traps by rolling over his shoulder, the Character Controller component will cause us many problems. In this article, we will create a character that is heavily affected by physical forces, so we will build a custom character controller on top of a Rigidbody.

Custom Character Controller

In this section, we will write a script that takes care of basic character manipulation. It will register the player's input and translate it into movement. We will talk about vectors and vector arithmetic, try out raycasting, make a character obey our controls, see different ways to register input, describe the purpose of the FixedUpdate function, and learn to control a Rigidbody. We shall start by teaching our character to walk in all directions, but before we start coding, there is a bit of theory behind character movement that we need to know.

Most game engines, if not all, use vectors to control the movement of objects. A vector represents a direction and a magnitude, and vectors are usually used to define an object's position (specifically, its pivot point) in 3D space. A vector is a structure that consists of three variables: X, Y, and Z. In Unity, this structure is called Vector3.

Knowing an object's vector is not enough to make it move. The length of a vector is known as its magnitude. In physics, speed is a pure scalar: a magnitude with no direction. To give an object a direction, we use a vector, and a greater magnitude means a greater speed. By controlling a vector's direction and magnitude, we can change our direction or increase our speed at any time. Vectors are very important to understand if we want to create any movement in a game. Through the examples in this article, we will explain some basic vector manipulations and describe their influence on the character. It is recommended that you study additional material about vectors so you can tailor a character controller to your game's needs.
Setting up the project

To start this section, we need an example scene. Perform the following steps:

1. Select the Chapter 2 folder from the book assets, and click on the Unity_chapter2 scene inside the custom_scene folder.
2. In the Custom scripts folder, create a new JavaScript file. Call it CH_Controller (we will reference this script in the future, so try to remember its name if you choose a different one).
3. In the Hierarchy view, click on the object called robot. Move the mouse over the Scene view and press F; the camera will focus on a funny-looking character that we will teach to walk, run, jump, and behave like a character from a video game.

Creating movement

The following is the theory of what needs to be done to make a character move:

1. Register the player's input.
2. Store the information in a vector variable.
3. Use it to move the character.

Sounds like a simple task, doesn't it? However, when it comes to moving a player-controlled character, there are a lot of things to keep in mind, such as vector manipulation, registering input from the user, raycasting, Character Controller component manipulation, and so on. All these things are simple on their own, but when it comes to putting them together, they can cause a few problems. To make sure that none of them catches us by surprise, we will go through each of them step by step.

Manipulating the character vector

By receiving input from the player, we will be able to manipulate character movement. The following is the list of actions that we need to perform in Unity:

1. Open the CH_Controller script.
2. Declare the public variables Speed and MoveDirection of types float and Vector3, respectively. Speed is self-explanatory: it determines the speed at which our character moves. MoveDirection is a vector that holds the direction in which our character is moving.
3. Declare a new function called Movement. It will check the player's horizontal and vertical input, and we will use that information to apply movement to the character.

An example of the code is as follows:

    public var Speed : float = 5.0;
    public var MoveDirection : Vector3 = Vector3.zero;

    function Movement () {
        if (Input.GetAxis("Horizontal") || Input.GetAxis("Vertical"))
            MoveDirection = Vector3(Input.GetAxisRaw("Horizontal"), MoveDirection.y, Input.GetAxisRaw("Vertical"));
        this.transform.Translate(MoveDirection);
    }

Registering input from the user

In order to move the character, we need to register input from the user. To do that, we will use the Input.GetAxis function. It registers input from the keyboard and joystick and returns values from -1 to 1. Input.GetAxis can only register input that has been defined, which is done by passing a string parameter to it. To find out which options are available, go to Edit | Project Settings | Input. In the Inspector view, we will see the Input Manager. Click on the Axes drop-down menu and you will see all the available input axes that can be passed to the Input.GetAxis function. Alternatively, we can use Input.GetAxisRaw. The only difference is that it bypasses Unity's built-in smoothing and returns the data as it is, which gives us greater control over character movement. To create your own input axes, simply increase the size of the array by 1 and specify your preferences (later, we will look into a better way of registering input for different buttons).

this.transform is an access to the transformation of this particular object. transform contains all the information about the translation, rotation, scale, and children of the object. Translate is a Unity function that moves a GameObject in a specific direction based on a given vector. If we simply leave the code as it is, our character will move at the speed of light.
That happens because the translation is applied to the character every frame. Relying on the frame rate when dealing with translation is very risky: every computer has different processing power, and the execution of our function will vary with performance. To solve this problem, we apply movement based on a common factor, time:

    this.transform.Translate(MoveDirection * Time.deltaTime);

This will make our character move one Unity unit every second, which is still a bit too slow. Therefore, we multiply our movement by the Speed variable:

    this.transform.Translate((MoveDirection * Speed) * Time.deltaTime);

Now that the Movement function is written, we need to call it. A word of warning though: controlling a GameObject or Rigidbody from the usual Update function is not recommended since, as mentioned previously, the frame rate is unreliable. Thankfully, there is a FixedUpdate function that helps us by applying movement at every fixed frame. Simply change the Update function to FixedUpdate and call the Movement function from there:

    function FixedUpdate () {
        Movement();
    }

The Rigidbody component

Now that our character is moving, take a closer look at the Rigidbody component that we have attached to it. Under the Constraints drop-down menu, notice that Freeze Rotation is checked for the X and Z axes. If we uncheck those boxes and try to move our character, it starts to fall over in the direction of the movement. Why is this happening? Well, remember we talked about a Rigidbody being affected by the physics laws of the engine? That applies to friction as well. To stop friction from tipping our character over, we forbid rotation along all axes except Y. We will use the Y axis to rotate our character left and right in the future.

Another problem that we notice when moving our character around is a significant increase in speed when walking in a diagonal direction. This is not a bug, but the expected behavior of the MoveDirection vector. For diagonal movement, we combine the vertical and horizontal inputs into a single vector, and the magnitude of that combined vector is greater than the magnitude of either input on its own. To prevent this, we set the magnitude of the new vector to 1. This operation is called vector normalization. With normalization and the Speed multiplier, we stay in full control of the magnitude:

    this.transform.Translate((MoveDirection.normalized * Speed) * Time.deltaTime);
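The arithmetic behind the diagonal speedup is easy to verify outside Unity. The following standalone sketch (written in plain Java purely for illustration, since the math is engine-agnostic) shows that combining two unit inputs yields a vector of magnitude roughly 1.414, and that normalization brings it back to exactly 1:

    public class VectorDemo {

        // Magnitude of a 3D vector: sqrt(x^2 + y^2 + z^2).
        static double magnitude(double x, double y, double z) {
            return Math.sqrt(x * x + y * y + z * z);
        }

        public static void main(String[] args) {
            // Forward input (0,0,1) combined with right input (1,0,0):
            double x = 1, y = 0, z = 1;
            double m = magnitude(x, y, z);
            System.out.println("diagonal magnitude = " + m); // ~1.414, so ~41% faster than intended

            // Normalizing divides each component by the magnitude:
            System.out.println("normalized magnitude = " + magnitude(x / m, y / m, z / m)); // 1.0
        }
    }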
Jumping

Jumping is not as hard as it seems. Thanks to the Rigidbody, our character is already affected by gravity, so the only thing we need to do is send it up in the air. Jump force is different from the speed that we applied to movement; to get a decent jump, we need to set it to 500.0. For this specific example, we don't want our character to be controllable in the air (as in real life, that is physically impossible). Instead, we will make sure that he preserves his movement velocity when jumping, so that he can jump in different directions. For now, let's limit our movement in the air by declaring a separate vector for jumping.

User input verification

In order to make a jump, we need to be sure that we are on the ground and not floating in the air. To check that, we will declare three boolean variables: isGrounded, Jumping, and inAir. isGrounded will check whether we are grounded. Jumping will determine whether we pressed the jump button to perform a jump. inAir will help us handle a jump if we walked off a platform without pressing the jump button. In this case, we don't want our character to fly at the same speed as he walks, so we add an airControl variable that will smooth our fall. Just as we did with movement, we need to register whether the player pressed the jump button. To achieve this, we perform a check right after registering the Vertical and Horizontal inputs:

    public var jumpSpeed : float = 500.0;
    public var jumpDirection : Vector3 = Vector3.zero;
    public var isGrounded : boolean = false;
    public var Jumping : boolean = false;
    public var inAir : boolean = false;
    public var airControl : float = 0.5;

    function Movement () {
        if (Input.GetAxis("Horizontal") || Input.GetAxis("Vertical")) {
            MoveDirection = Vector3(Input.GetAxisRaw("Horizontal"), MoveDirection.y, Input.GetAxisRaw("Vertical"));
        }
        if (Input.GetButtonDown("Jump") && isGrounded) {}
    }

GetButtonDown determines whether we pressed a specific button (in this case, the Space bar), as specified in the Input Manager. We also need to check that our character is grounded before making a jump. We apply vertical force to the rigidbody by using the AddForce function, which takes a vector as a parameter and pushes the rigidbody in the specified direction. We also toggle the Jumping boolean to true, since we pressed the jump button, and preserve our velocity in jumpDirection:

    if (Input.GetButtonDown("Jump") && isGrounded) {
        Jumping = true;
        jumpDirection = MoveDirection;
        rigidbody.AddForce((transform.up) * jumpSpeed);
    }

    if (isGrounded)
        this.transform.Translate((MoveDirection.normalized * Speed) * Time.deltaTime);
    else if (Jumping || inAir)
        this.transform.Translate((jumpDirection * Speed * airControl) * Time.deltaTime);

To make sure that our character doesn't float in space, we restrict its movement and apply translation with MoveDirection only when the character is on the ground; otherwise, we use jumpDirection.

Raycasting

The jumping functionality is almost written; we now need to determine whether our character is grounded. The easiest way to check is to use raycasting. Raycasting simply casts a ray in a specified direction with a specified length and reports whether it hits any collider on its way (the collider of the object that the ray is cast from is ignored). To perform a raycast, we specify a starting position, a direction (vector), and the length of the ray. In return, we receive true if the ray hits something, or false if it doesn't:

    function FixedUpdate () {
        if (Physics.Raycast(transform.position, -transform.up, collider.height/2 + 2)) {
            isGrounded = true;
            Jumping = false;
            inAir = false;
        }
        else if (!inAir) {
            inAir = true;
            jumpDirection = MoveDirection;
        }
        Movement();
    }

As we have already mentioned, we use transform.position to specify the starting position of the ray at the center of our collider. -transform.up is a vector pointing downwards, and collider.height is the height of the attached collider. We use half of the height, since the starting position is located in the middle of the collider, and we extend the ray a little further to make sure it reaches the ground. The rest of the code simply toggles the state booleans.

Improving efficiency in raycasting

But what if the ray didn't hit anything? That can happen in two cases: if we walked off a cliff or if we are performing a jump. In any case, we have to check for it.
If the ray didn't hit a collider, then obviously we are in the air and need to flag that. Since this is our first check, we also preserve our current velocity, to ensure that our character doesn't drop down instantly.

Raycasting is a very handy tool and is used in many games. However, you should not rely on it too often: it is expensive and can dramatically drop your frame rate. Right now, we are casting rays every frame, which is extremely inefficient. To improve performance, we only need to cast rays while performing a jump, never while grounded. To ensure this, we wrap the raycasting section of FixedUpdate so that it fires only when the character is not grounded:

    function FixedUpdate () {
        if (!isGrounded) {
            if (Physics.Raycast(transform.position, -transform.up, collider.height/2 + 0.2)) {
                isGrounded = true;
                Jumping = false;
                inAir = false;
            }
            else if (!inAir) {
                inAir = true;
                jumpDirection = MoveDirection;
            }
        }
        Movement();
    }

    function OnCollisionExit (collisionInfo : Collision) {
        isGrounded = false;
    }

To determine that our character is no longer on the ground, we use a default function, OnCollisionExit(). Unlike OnControllerColliderHit(), which is used with the Character Controller, this function works only with colliders and rigidbodies. So, whenever our character is not touching any collider or rigidbody, we expect to be in the air and therefore not grounded. Let's hit Play and see our character jump on our command.

Additional jump functionality

Now that we have our character jumping, there are a few issues that should be resolved. First of all, if we decide to jump onto the sharp edge of a platform, our collider penetrates other colliders and ends up stuck in the wall without a chance of getting out. A quick patch for this problem is to push the character away from the contact point while jumping. We will use the OnCollisionStay() function, which is called at every frame while we are colliding with an object. This function receives collision contact information that can help us determine who we are colliding with, its velocity, its name, whether it has a Rigidbody, and so on. In our case, we are interested in the contact points. Perform the following steps:

1. Declare a new private variable contact of type ContactPoint, which describes the collision point of the colliding objects.
2. Declare the OnCollisionStay function.
3. Inside this function, take the first point of contact with the collider and assign it to our private variable (contacts is an array of all contact points).
4. Move away from that contact point by reversing our velocity, but only if the character is not on the ground. The AddForceAtPosition function will help us here. It is similar to the one we used for jumping; however, this one applies force at a specified position (the contact point).
5. Declare a new boolean variable called jumpClimax; we will need it for the next patch.

    public var jumpClimax : boolean = false;
    ...
    function OnCollisionStay (collisionInfo : Collision) {
        contact = collisionInfo.contacts[0];
        if (inAir || Jumping)
            rigidbody.AddForceAtPosition(-rigidbody.velocity, contact.point);
    }

The next patch will aid us later in this article, when we add animation to our character. To make sure that our jumping animation runs smoothly, we need to know when our character reaches the jumping climax, in other words, when it stops going up and starts falling.
In the FixedUpdate function, right after the last else if statement, put the following code snippet:

    else if (inAir && rigidbody.velocity.y == 0.0) {
        jumpClimax = true;
    }

Nothing complex here. In theory, the moment we stop going up is the climax of our jump; that's why we check whether we are in the air (obviously, we can't reach a jump climax while on the ground) and whether the vertical velocity of the rigidbody is 0.0. The last part is to set the jumping climax back to false. We do that at the moment we touch the ground:

    if (Physics.Raycast(transform.position, -transform.up, collider.height/2 + 2)) {
        isGrounded = true;
        Jumping = false;
        inAir = false;
        jumpClimax = false;
    }

Running

We have taught our character to walk, jump, and stand aimlessly on the same spot. The next logical step is to teach him to run. From a technical point of view, there is nothing too hard about it: running is simply walking at a greater speed. Perform the following steps:

1. Declare a new boolean variable isRunning, which will be used to determine whether our character has been told to run or not.
2. Inside the Movement function, at the very top, check whether the player is pressing the left or right Shift key and assign the appropriate value to isRunning:

    public var isRunning : boolean = false;
    ...
    function Movement () {
        if (Input.GetKey(KeyCode.LeftShift) || Input.GetKey(KeyCode.RightShift))
            isRunning = true;
        else
            isRunning = false;
        ...
    }

Another way to get input from the user is to use KeyCode, an enumeration of all the physical keys on the keyboard. Look at the KeyCode script reference on the official website for a complete list of available keys: http://unity3d.com/support/documentation/ScriptReference/KeyCode. We will return to running later, in the animation section.


Importing and Adding Background Music with Audacity 1.3

Packt
03 Mar 2010
2 min read
Audacity is commonly used to import music into your project, convert different audio files from one format to another, bring in multiple files and convert them, and more. In this article, we will learn how to add background music to your podcast, overdub, and fade in and out. We will also cover some additional information about importing music from CDs, cassette tapes, and vinyl records. This Audacity tutorial has been taken from Getting Started with Audacity 1.3.

Importing digital music into Audacity

Before you can add background music to any Audacity project, you'll first have to import a digital music file into the project itself.

Importing WAV, AIFF, MP3, and MP4/M4A files

Audacity can import a number of music file formats. WAV, AIFF, and MP3 are the most common, but it can also import MP4/M4A files, as long as they are not rights-managed or copy-protected (like some songs purchased through stores such as iTunes). To import a song into Audacity:

1. Open your sample project.
2. From the main menu, select File, Import, and then Audio. The audio selection window is displayed.
3. Choose the music file from your computer, and then click on Open.

A new track is added to your project at the very bottom of the project window.

Importing music from iTunes

Your iTunes library can contain protected and unprotected music files. The main difference is that the protected files were typically purchased from the iTunes store and can't be played outside of that software. There is no easy way to determine visually which music tracks are protected or unprotected, so you can try both methods outlined next to import into Audacity. However, remember that there are copyright laws for songs written and recorded by popular artists, so you need to investigate how to use music legally, whether for your own use or for distribution through a podcast.

Unprotected files from iTunes

If the songs that you want to import from iTunes aren't copy-protected, importing them is easy. Click-and-drag the song from the iTunes window and drop it into your Audacity project window (with your project open, of course). Within a few moments, the music track is shown at the bottom of your project window. The music track is now ready to be edited into the main podcast file, as an introduction or however you desire.


Creating a simple modular application in Java 11 [Tutorial]

Prasad Ramesh
20 Feb 2019
11 min read
Modular programming enables one to organize code into independent, cohesive modules, which can be combined to achieve the desired functionality. This article is an excerpt from a book written by Nick Samoylov and Mohamed Sanaulla titled Java 11 Cookbook - Second Edition. In this book, you will learn how to implement object-oriented designs using classes and interfaces in Java 11. The complete code for the examples shown in this tutorial can be found on GitHub.

You may be wondering what this modularity is all about, and how to create a modular application in Java. In this article, we will try to clear up the confusion around creating modular applications in Java by walking you through a simple example. Our goal is to show you how to create a modular application; hence, we picked a simple example so as to focus on our goal. Our example is a simple advanced calculator, which checks whether a number is prime, calculates the sum of prime numbers, checks whether a number is even, and calculates the sum of even and odd numbers.

Getting ready

We will divide our application into two modules:

- The math.util module, which contains the APIs for performing the mathematical calculations
- The calculator module, which launches an advanced calculator

How to do it

Let's implement the APIs in the com.packt.math.MathUtil class, starting with the isPrime(Integer number) API:

    public static Boolean isPrime(Integer number){
      if ( number == 1 ) {
        return false;
      }
      return IntStream.range(2, number).noneMatch(i -> number % i == 0);
    }

Implement the sumOfFirstNPrimes(Integer count) API:

    public static Integer sumOfFirstNPrimes(Integer count){
      return IntStream.iterate(1, i -> i+1)
                      .filter(j -> isPrime(j))
                      .limit(count).sum();
    }

Let's write a function to check whether a number is even:

    public static Boolean isEven(Integer number){
      return number % 2 == 0;
    }

The negation of isEven tells us whether the number is odd. We can have functions to find the sum of the first N even numbers and the first N odd numbers, as shown here:

    public static Integer sumOfFirstNEvens(Integer count){
      return IntStream.iterate(1, i -> i+1)
                      .filter(j -> isEven(j))
                      .limit(count).sum();
    }

    public static Integer sumOfFirstNOdds(Integer count){
      return IntStream.iterate(1, i -> i+1)
                      .filter(j -> !isEven(j))
                      .limit(count).sum();
    }

We can see that the following operations are repeated in the preceding APIs:

- An infinite sequence of numbers starting from 1
- Filtering the numbers based on some condition
- Limiting the stream of numbers to a given count
- Finding the sum of the numbers thus obtained

Based on our observation, we can refactor the preceding APIs and extract these operations into a method, as follows:

    Integer computeFirstNSum(Integer count, IntPredicate filter){
      return IntStream.iterate(1, i -> i+1)
                      .filter(filter)
                      .limit(count).sum();
    }

Here, count is the limit of numbers we need to find the sum of, and filter is the condition for picking the numbers for summing. Let's rewrite the APIs based on the refactoring we just did:

    public static Integer sumOfFirstNPrimes(Integer count){
      return computeFirstNSum(count, (i -> isPrime(i)));
    }

    public static Integer sumOfFirstNEvens(Integer count){
      return computeFirstNSum(count, (i -> isEven(i)));
    }

    public static Integer sumOfFirstNOdds(Integer count){
      return computeFirstNSum(count, (i -> !isEven(i)));
    }

So far, we have seen a few APIs around mathematical computations. These APIs are part of our com.packt.math.MathUtil class.
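Assembled into one file, the class might look like the following sketch. The imports and the private modifier on computeFirstNSum are our assumptions; consult the book's GitHub repository for the authoritative listing:

    package com.packt.math;

    import java.util.function.IntPredicate;
    import java.util.stream.IntStream;

    public class MathUtil {

        // Shared pipeline: iterate from 1, keep numbers matching the filter,
        // take the first 'count' of them, and sum them up.
        private static Integer computeFirstNSum(Integer count, IntPredicate filter) {
            return IntStream.iterate(1, i -> i + 1)
                            .filter(filter)
                            .limit(count)
                            .sum();
        }

        public static Boolean isPrime(Integer number) {
            if (number == 1) {
                return false;
            }
            return IntStream.range(2, number).noneMatch(i -> number % i == 0);
        }

        public static Boolean isEven(Integer number) {
            return number % 2 == 0;
        }

        public static Integer sumOfFirstNPrimes(Integer count) {
            return computeFirstNSum(count, i -> isPrime(i));
        }

        public static Integer sumOfFirstNEvens(Integer count) {
            return computeFirstNSum(count, i -> isEven(i));
        }

        public static Integer sumOfFirstNOdds(Integer count) {
            return computeFirstNSum(count, i -> !isEven(i));
        }
    }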
The complete code for this class can be found at Chapter03/2_simple-modular-math-util/math.util/com/packt/math, in the codebase downloaded for this book.

Let's make this small utility class part of a module named math.util. The following are some conventions we use to create a module:

1. Place all the code related to the module under a directory named math.util and treat this as our module root directory.
2. In the root folder, insert a file named module-info.java.
3. Place the packages and the code files under the root directory.

What does module-info.java contain? The following:

- The name of the module
- The packages it exports, that is, the ones it makes available for other modules to use
- The modules it depends on
- The services it uses
- The services for which it provides implementations

Our math.util module doesn't depend on any other module (except, of course, the java.base module). However, it makes its API available to other modules (if it didn't, this module's existence would be questionable). Let's go ahead and put this statement into code:

    module math.util{
      exports com.packt.math;
    }

We are telling the Java compiler and runtime that our math.util module is exporting the code in the com.packt.math package to any module that depends on math.util. The code for this module can be found at Chapter03/2_simple-modular-math-util/math.util.

Now, let's create another module, calculator, that uses the math.util module. This module has a Calculator class whose job is to accept the user's choice of which mathematical operation to execute, and then the input required to execute that operation. The user can choose from five available mathematical operations:

1. Prime number check
2. Even number check
3. Sum of N primes
4. Sum of N evens
5. Sum of N odds

Let's see this in code:

    private static Integer acceptChoice(Scanner reader){
      System.out.println("************Advanced Calculator************");
      System.out.println("1. Prime Number check");
      System.out.println("2. Even Number check");
      System.out.println("3. Sum of N Primes");
      System.out.println("4. Sum of N Evens");
      System.out.println("5. Sum of N Odds");
      System.out.println("6. Exit");
      System.out.println("Enter the number to choose operation");
      return reader.nextInt();
    }

Then, for each of the choices, we accept the required input and invoke the corresponding MathUtil API, as follows (each case body is wrapped in braces so that the local variables number and count don't clash between cases):

    switch(choice){
      case 1: {
        System.out.println("Enter the number");
        Integer number = reader.nextInt();
        if (MathUtil.isPrime(number)){
          System.out.println("The number " + number + " is prime");
        } else {
          System.out.println("The number " + number + " is not prime");
        }
        break;
      }
      case 2: {
        System.out.println("Enter the number");
        Integer number = reader.nextInt();
        if (MathUtil.isEven(number)){
          System.out.println("The number " + number + " is even");
        }
        break;
      }
      case 3: {
        System.out.println("How many primes?");
        Integer count = reader.nextInt();
        System.out.println(String.format("Sum of %d primes is %d",
            count, MathUtil.sumOfFirstNPrimes(count)));
        break;
      }
      case 4: {
        System.out.println("How many evens?");
        Integer count = reader.nextInt();
        System.out.println(String.format("Sum of %d evens is %d",
            count, MathUtil.sumOfFirstNEvens(count)));
        break;
      }
      case 5: {
        System.out.println("How many odds?");
        Integer count = reader.nextInt();
        System.out.println(String.format("Sum of %d odds is %d",
            count, MathUtil.sumOfFirstNOdds(count)));
        break;
      }
    }

The complete code for the Calculator class can be found at Chapter03/2_simple-modular-math-util/calculator/com/packt/calculator/Calculator.java.
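The article shows acceptChoice and the switch block in isolation. As a rough sketch of how they might be wired together in the Calculator class, here is a trimmed, runnable variant with only two menu entries; the loop structure and the performOperation helper are our assumptions, not the book's actual code:

    package com.packt.calculator;

    import java.util.Scanner;
    import com.packt.math.MathUtil;

    // Trimmed to two menu entries to keep the sketch short; the real class
    // follows the same shape with all six options.
    public class Calculator {

        public static void main(String[] args) {
            Scanner reader = new Scanner(System.in);
            Integer choice = acceptChoice(reader);
            while (choice != 3) {                 // option 3 exits this trimmed menu
                performOperation(reader, choice);
                choice = acceptChoice(reader);
            }
            reader.close();
        }

        private static Integer acceptChoice(Scanner reader) {
            System.out.println("************Advanced Calculator************");
            System.out.println("1. Prime Number check");
            System.out.println("2. Sum of N Primes");
            System.out.println("3. Exit");
            System.out.println("Enter the number to choose operation");
            return reader.nextInt();
        }

        private static void performOperation(Scanner reader, Integer choice) {
            switch (choice) {
                case 1: {
                    System.out.println("Enter the number");
                    Integer number = reader.nextInt();
                    System.out.println("The number " + number
                            + (MathUtil.isPrime(number) ? " is prime" : " is not prime"));
                    break;
                }
                case 2: {
                    System.out.println("How many primes?");
                    Integer count = reader.nextInt();
                    System.out.println(String.format("Sum of %d primes is %d",
                            count, MathUtil.sumOfFirstNPrimes(count)));
                    break;
                }
            }
        }
    }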
Let's create the module definition for our calculator module in the same way we created it for the math.util module:

    module calculator{
      requires math.util;
    }

In the preceding module definition, we state that the calculator module depends on the math.util module by using the requires keyword. The code for this module can be found at Chapter03/2_simple-modular-math-util/calculator.

Let's compile the code:

    javac -d mods --module-source-path . $(find . -name "*.java")

The preceding command has to be executed from Chapter03/2_simple-modular-math-util. After it runs, you will have the compiled code from both modules, math.util and calculator, in the mods directory. Just a single command, and everything, including the dependency between the modules, is taken care of by the compiler. We didn't need build tools such as ant to manage the compilation of the modules. The --module-source-path option is the new command-line option for javac, specifying the location of our module source code.

Let's execute the preceding code:

    java --module-path mods -m calculator/com.packt.calculator.Calculator

The --module-path option, similar to --classpath, is the new command-line option for java, specifying the location of the compiled modules. After running the preceding command, you will see the calculator in action.

Congratulations! With this, we have a simple modular application up and running. We have provided scripts to test the code on both Windows and Linux platforms. Please use run.bat for Windows and run.sh for Linux.

How it works

Now that you have been through the example, we will look at how to generalize it so that we can apply the same pattern in all our modules. We followed a particular convention to create the modules:

    |application_root_directory
    |--module1_root
    |----module-info.java
    |----com
    |------packt
    |--------sample
    |----------MyClass.java
    |--module2_root
    |----module-info.java
    |----com
    |------packt
    |--------test
    |----------MyAnotherClass.java

We place each module's code within its own folder, with a corresponding module-info.java file at the root of that folder. This way, the code is organized well.

Let's look into what module-info.java can contain. From the Java language specification (http://cr.openjdk.java.net/~mr/jigsaw/spec/lang-vm.html), a module declaration is of the following form:

    {Annotation} [open] module ModuleName {
      {ModuleStatement}
    }

Here's the syntax, explained:

- {Annotation}: This is any annotation of the form @Annotation(2).
- open: This keyword is optional. An open module makes all its components accessible at runtime via reflection. However, at compile time and runtime, only those components that are explicitly exported are accessible.
- module: This is the keyword used to declare a module.
- ModuleName: This is the name of the module; a valid Java identifier with permissible dots (.) between the identifier names, similar to math.util.
- {ModuleStatement}: This is a collection of the permissible statements within a module definition. Let's expand this next.

A module statement is of the following form:

    ModuleStatement:
      requires {RequiresModifier} ModuleName ;
      exports PackageName [to ModuleName {, ModuleName}] ;
      opens PackageName [to ModuleName {, ModuleName}] ;
      uses TypeName ;
      provides TypeName with TypeName {, TypeName} ;

The module statement is decoded here:

- requires: This is used to declare a dependency on a module. {RequiresModifier} can be transitive, static, or both.
  Transitive means that any module that depends on the given module also implicitly depends on the modules that the given module requires transitively. Static means that the module dependence is mandatory at compile time, but optional at runtime. Some examples are requires math.util, requires transitive math.util, and requires static math.util.
- exports: This is used to make the given packages accessible to the dependent modules. Optionally, we can restrict the packages' accessibility to specific modules by specifying the module name, such as exports com.package.math to calculator.
- opens: This is used to open a specific package. We saw earlier that we can open a module by specifying the open keyword in the module declaration. But that can be too permissive. To be more restrictive, we can open a specific package for reflective access at runtime by using the opens keyword: opens com.packt.math.
- uses: This is used to declare a dependency on a service interface that is accessible via java.util.ServiceLoader. The service interface can be in the current module or in any module that the current module depends on.
- provides: This is used to declare a service interface and provide it with at least one implementation. The service interface can be declared in the current module or in any other dependent module. However, the service implementation must be provided in the same module; otherwise, a compile-time error will occur. A short sketch of uses and provides working together follows at the end of this article.

We will look at the uses and provides clauses in more detail in the Using services to create loose coupling between the consumer and provider modules recipe.

The module source of all modules can be compiled at once using the --module-source-path command-line option. This way, all the modules will be compiled and placed in their corresponding directories under the directory provided by the -d option. For example, javac -d mods --module-source-path . $(find . -name "*.java") compiles the code in the current directory into a mods directory.

Running the code is equally simple. We specify the path where all our modules are compiled using the command-line option --module-path. Then, we mention the module name along with the fully qualified main class name using the command-line option -m, for example, java --module-path mods -m calculator/com.packt.calculator.Calculator.

In this tutorial, we learned to create a simple modular Java application. To learn more Java 11 recipes, check out the book Java 11 Cookbook - Second Edition.
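As the sketch promised above, here is a minimal, hypothetical illustration of uses and provides with java.util.ServiceLoader. All module, package, and type names here are invented for illustration and do not come from the book:

    // Hypothetical module declarations (one module-info.java per module):
    //
    //   module math.api { exports com.example.math.api; }
    //
    //   module math.fastimpl {
    //       requires math.api;
    //       provides com.example.math.api.PrimeChecker
    //           with com.example.math.fastimpl.FastPrimeChecker;
    //   }
    //
    //   module calculator.client { requires math.api; uses com.example.math.api.PrimeChecker; }
    //
    // where the service interface is simply:
    //   public interface PrimeChecker { boolean isPrime(int n); }

    import java.util.ServiceLoader;
    import com.example.math.api.PrimeChecker;

    public class PrimeCheckerClient {
        public static void main(String[] args) {
            // ServiceLoader locates whichever provider module is present on the
            // module path; the consumer never references the implementation class.
            PrimeChecker checker = ServiceLoader.load(PrimeChecker.class)
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("No PrimeChecker provider found"));
            System.out.println("Is 17 prime? " + checker.isPrime(17));
        }
    }

The point of the design is loose coupling: swapping in a different provider module changes the behavior without recompiling the consumer.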


Working with a Webcam and Pi Camera

Packt
09 Feb 2016
13 min read
In this article by Ashwin Pajankar and Arush Kakkar, the authors of the book Raspberry Pi By Example, we will learn how to use different types of cameras with our Pi. Let's take a look at the topics we will study and implement in this article:

- Working with a webcam
- Crontab
- Timelapse using a webcam
- Webcam video recording and playback
- Pi Camera and Pi NoIR comparison
- Timelapse using the Pi Camera
- The picamera module in Python

(For more resources related to this topic, see here.)

Working with webcams

USB webcams are a great way to capture images and videos. The Raspberry Pi supports common USB webcams. To be on the safe side, here is a list of the webcams supported by the Pi: http://elinux.org/RPi_USB_Webcams. I am using a Logitech HD C310 USB webcam. You can purchase it online, and you can find the product details and specifications at http://www.logitech.com/en-in/product/hd-webcam-c310.

Attach your USB webcam to the Raspberry Pi through a USB port and run the lsusb command in the terminal. This command lists all the USB devices connected to the computer. The output should be similar to the following, depending on which port is used to connect the USB webcam:

    pi@raspberrypi ~/book/chapter04 $ lsusb
    Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
    Bus 001 Device 004: ID 148f:2070 Ralink Technology, Corp. RT2070 Wireless Adapter
    Bus 001 Device 007: ID 046d:081b Logitech, Inc. Webcam C310
    Bus 001 Device 006: ID 1c4f:0003 SiGma Micro HID controller
    Bus 001 Device 005: ID 1c4f:0002 SiGma Micro Keyboard TRACER Gamma Ivory

Then, install the fswebcam utility by running the following command:

    sudo apt-get install fswebcam

fswebcam is a simple command-line utility that captures images with webcams on Linux computers. Once the installation is done, you can use the following command to create a directory for the output images:

    mkdir /home/pi/book/output

Then, run the following command to capture an image:

    fswebcam -r 1280x960 --no-banner ~/book/output/camtest.jpg

This captures an image with a resolution of 1280 x 960. You might want to try other resolutions for your learning. The --no-banner option disables the timestamp banner. The image is saved with the filename mentioned. If you run this command multiple times with the same filename, the image file is overwritten each time, so make sure that you change the filename if you want to keep previously captured images. The text output of the command should be similar to the following:

    --- Opening /dev/video0...
    Trying source module v4l2...
    /dev/video0 opened.
    No input was specified, using the first.
    --- Capturing frame...
    Corrupt JPEG data: 2 extraneous bytes before marker 0xd5
    Captured frame in 0.00 seconds.
    --- Processing captured image...
    Disabling banner.
    Writing JPEG image to '/home/pi/book/output/camtest.jpg'.

Crontab

A cron is a time-based job scheduler in Unix-like computer operating systems. It is driven by a crontab (cron table) file, a configuration file that specifies shell commands to be run periodically on a given schedule. It is used to schedule commands or shell scripts to run periodically at a fixed time, date, or interval.
The syntax for crontab, in order to schedule a command or script, is as follows:

    1 2 3 4 5 /location/command

Here, the fields have the following meanings:

- 1: Minutes (0-59)
- 2: Hours (0-23)
- 3: Days (0-31)
- 4: Months [0-12 (1 for January)]
- 5: Days of the week [0-7 (7 or 0 for Sunday)]
- /location/command: The script or command to be scheduled

The crontab entry to run any script or command every minute is as follows:

    * * * * * /location/command 2>&1

In the next section, we will learn how to use crontab to schedule a script that captures images periodically in order to create a timelapse sequence. You can refer to this URL for more details on crontab: http://www.adminschoice.com/crontab-quick-reference.

Creating a timelapse sequence using fswebcam

Timelapse photography means capturing photographs at regular intervals and playing the images back at a higher frequency than that at which they were shot. For example, if you capture images at a frequency of one image per minute for 10 hours, you will get 600 images. If you combine all these images into a video at 30 images per second, you get 10 hours of timelapse video compressed into 20 seconds. You can use your USB webcam with the Raspberry Pi to achieve this. We already know how to use the Raspberry Pi with a webcam and the fswebcam utility to capture an image. The trick is to write a script that captures images with different names and then add this script to crontab so that it runs at regular intervals.

Begin by creating a directory for the captured images:

    mkdir /home/pi/book/output/timelapse

Open an editor of your choice, write the following code, and save it as timelapse.sh:

    #!/bin/bash
    DATE=$(date +"%Y-%m-%d_%H%M")
    fswebcam -r 1280x960 --no-banner /home/pi/book/output/timelapse/garden_$DATE.jpg

Make the script executable using:

    chmod +x timelapse.sh

This shell script captures an image and saves it with the current timestamp in its name. Thus, we get an image with a new filename every time, as the filename contains the timestamp. The second line in the script creates the timestamp that we use in the filename. Run this script manually once, and make sure that the image is saved in the /home/pi/book/output/timelapse directory with the name garden_<timestamp>.jpg.

To run this script at regular intervals, we need to schedule it in crontab. The crontab entry to run our script every minute is as follows:

    * * * * * /home/pi/book/chapter04/timelapse.sh 2>&1

Open the crontab of the pi user with crontab -e. It will open the crontab with nano as the editor. Add the preceding line to the crontab, save it, and exit. Once you exit the crontab, it will show the following message:

    no crontab for pi - using an empty one
    crontab: installing new crontab

Our timelapse webcam setup is now live. If you want to change the image capture frequency, you have to change the crontab settings. To capture every 5 minutes, change the entry to */5 * * * *. To capture every 2 hours, use 0 */2 * * *. Make sure that your microSD card has enough free space to store all the images for the duration for which you plan to keep your timelapse setup running.

Once you have captured all the images, the next step is to encode them into a fast-playing video, preferably at 20 to 30 frames per second. The mencoder utility is recommended for this. The following are the steps to create a timelapse video with mencoder on a Raspberry Pi or any Debian/Ubuntu machine:

1. Install mencoder using sudo apt-get install mencoder.
2. Navigate to the output directory by issuing:

    cd /home/pi/book/output/timelapse

3. Create a list of your timelapse sequence images using:

    ls garden_*.jpg > timelapse.txt

4. Use the following command to create the video:

    mencoder -nosound -ovc lavc -lavcopts vcodec=mpeg4:aspect=16/9:vbitrate=8000000 -vf scale=1280:960 -o timelapse.avi -mf type=jpeg:fps=30 mf://@timelapse.txt

This creates a video named timelapse.avi in the current directory from all the images listed in timelapse.txt, with a frame rate of 30 fps. The command specifies the video codec, aspect ratio, bit rate, and scale. For more information, you can run man mencoder at the command prompt. We will cover how to play the video in the next section.

Webcam video recording and playback

We can use a webcam to record live videos using avconv. Install avconv using sudo apt-get install libav-tools. Use the following command to record a video:

    avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi

It will show the following output on the screen:

    pi@raspberrypi ~ $ avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi
    avconv version 9.14-6:9.14-1rpi1rpi1, Copyright (c) 2000-2014 the Libav developers
      built on Jul 22 2014 15:08:12 with gcc 4.6 (Debian 4.6.3-14+rpi1)
    [video4linux2 @ 0x5d6720] The driver changed the time per frame from 1/25 to 2/15
    [video4linux2 @ 0x5d6720] Estimating duration from bitrate, this may be inaccurate
    Input #0, video4linux2, from '/dev/video0':
      Duration: N/A, start: 629.030244, bitrate: 147456 kb/s
        Stream #0.0: Video: rawvideo, yuyv422, 1280x960, 147456 kb/s, 1000k tbn, 7.50 tbc
    Output #0, avi, to '/home/pi/book/output/VideoStream.avi':
      Metadata:
        ISFT : Lavf54.20.4
        Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 25 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (rawvideo -> mpeg4)
    Press ctrl-c to stop encoding
    frame= 182 fps= 7 q=31.0 Lsize= 802kB time=7.28 bitrate= 902.4kbits/s
    video:792kB audio:0kB global headers:0kB muxing overhead 1.249878%
    Received signal 2: terminating.

You can terminate the recording by pressing Ctrl + C. We can play the video using omxplayer. It comes with the latest Raspbian, so there is no need to install it. To play the file we just recorded, use the following command:

    omxplayer ~/book/output/VideoStream.avi

It will play the video and display output similar to the following:

    pi@raspberrypi ~ $ omxplayer ~/book/output/VideoStream.avi
    Video codec omx-mpeg4 width 1280 height 960 profile 0 fps 25.000000
    Subtitle count: 0, state: off, index: 1, delay: 0
    V:PortSettingsChanged: 1280x960@25.00 interlace:0 deinterlace:0 anaglyph:0 par:1.00 layer:0
    have a nice day ;)

Try playing the timelapse and recorded videos using omxplayer.

Working with the Pi Camera and NoIR Camera modules

These camera modules are specially manufactured for the Raspberry Pi and work with all the available models. You will need to connect the camera module to the CSI port, located behind the Ethernet port, and activate the camera using the raspi-config utility if you haven't already. You can find video instructions for connecting the camera module to the Raspberry Pi at http://www.raspberrypi.org/help/camera-module-setup/. This page lists the types of camera modules available: http://www.raspberrypi.org/products/. Two types of camera modules are available for the Pi.
These are the Pi Camera and the Pi NoIR camera, and they can be found at https://www.raspberrypi.org/products/camera-module/ and https://www.raspberrypi.org/products/pi-noir-camera/, respectively. (The images in the original article show the two camera boards side by side, the Pi Camera board connected to the Pi, and the board placed in a camera case.)

The main difference between the Pi Camera and the Pi NoIR Camera is that the Pi Camera gives better results in good lighting conditions, whereas the Pi NoIR (NoIR stands for No Infra-Red) is used for low-light photography. To use the NoIR Camera in complete darkness, we need to flood the object to be photographed with infrared light. This is a good time to take a look at the various enclosures available for the Raspberry Pi models; you can find various cases online at https://www.adafruit.com/categories/289.

Using raspistill and raspivid

To capture images and videos using the Raspberry Pi camera module, we need to use the raspistill and raspivid utilities. To capture an image, run the following command:

    raspistill -o cam_module_pic.jpg

This captures and saves the image with the name cam_module_pic.jpg in the current directory. To capture a 20-second video with the camera module, run the following command:

    raspivid -o test.avi -t 20000

This captures and saves the video with the name test.avi in the current directory. Unlike fswebcam and avconv, raspistill and raspivid do not write anything to the console, so you need to check the current directory for the output. You can also run the echo $? command to check whether these commands executed successfully. We can also give the complete location of the file to be saved in these commands, as shown in the following example:

    raspistill -o /home/pi/book/output/cam_module_pic.jpg

Just like fswebcam, raspistill can be used to record a timelapse sequence. In our timelapse shell script, replace the line that contains fswebcam with the appropriate raspistill command to capture the timelapse sequence, and use mencoder again to create the video. This is left as an exercise for the reader.

Now, let's take a look at the images taken with the Pi Camera under different lighting conditions. (The original article shows three sample images: one with normal lighting and a backlight, one with only the backlight, and one with normal lighting and no backlight.) For NoIR camera usage at night under low-light conditions, use an IR illuminator light for better results. You can get one online; a typical off-the-shelf LED IR illuminator is suitable for our purpose.

Using picamera in Python with the Pi Camera module

picamera is a Python package that provides a programming interface to the Pi Camera module. The most recent version of Raspbian has picamera preinstalled. If you do not have it installed, you can install it using:

    sudo apt-get install python-picamera

The following program quickly demonstrates the basic usage of the picamera module to capture an image:

    import picamera
    import time

    with picamera.PiCamera() as cam:
        cam.resolution = (1024, 768)
        cam.start_preview()
        time.sleep(5)
        cam.capture('/home/pi/book/output/still.jpg')

We have to import the time and picamera modules first. cam.start_preview() starts the preview, and time.sleep(5) waits for 5 seconds before cam.capture() captures and saves the image to the specified file. There is a built-in function in picamera for timelapse photography.
The following program demonstrates its usage:

    import picamera
    import time

    with picamera.PiCamera() as cam:
        cam.resolution = (1024, 768)
        cam.start_preview()
        time.sleep(3)
        for count, imagefile in enumerate(cam.capture_continuous('/home/pi/book/output/image{counter:02d}.jpg')):
            print 'Capturing and saving ' + imagefile
            time.sleep(1)
            if count == 10:
                break

In the preceding code, cam.capture_continuous() is used to capture the timelapse sequence with the Pi camera module. Check out more examples and the API reference for the picamera module at http://picamera.readthedocs.org/.

The Pi camera versus the webcam

Now, after using both the webcam and the Pi camera, it's a good time to understand the differences and the pros and cons of each. The Pi camera board does not use a USB port and is directly interfaced to the Pi, so it provides better performance than a webcam in terms of frame rate and resolution, and we can use the picamera module in Python to work on images and videos directly. However, the Pi camera cannot be used with any other computer. A webcam uses a USB port for its interface, and because of that, it can be used with any computer. However, compared to the Pi camera, its performance is lower in terms of frame rate and resolution.

Summary

In this article, we learned how to use a webcam and the Pi camera. We also learned how to use utilities such as fswebcam, avconv, raspistill, raspivid, mencoder, and omxplayer. We covered how to use crontab. We used the Python picamera module to work with the Pi camera board programmatically. Finally, we compared the Pi camera and the webcam. We will be reusing all the code examples and concepts for some real-life projects soon.


Documenting RESTful Java Web Services using Swagger

Fatema Patrawala
22 May 2018
14 min read
In the earlier days, many software solution providers did not really pay attention to documenting their RESTful web APIs. However, many API vendors soon realized the need for a good API documentation solution, and today you will find a variety of approaches to documenting RESTful web APIs. There are some popular solutions available for describing, producing, consuming, and visualizing RESTful web services. In this tutorial, we will explore Swagger, which offers a specification and a complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. The Swagger framework works with many popular programming languages, such as Java, Scala, Clojure, Groovy, JavaScript, and .NET. This tutorial is an extract taken from the book RESTful Java Web Services - Third Edition, written by Bogunuva Mohanram Balachandar. This book will help you master core REST concepts and create RESTful web services in Java.

A glance at the market adoption of Swagger

The greatest strength of Swagger is its powerful API platform, which satisfies the client, documentation, and server needs. The Swagger UI framework serves as the documentation and testing utility. Its support for different languages and its mature tooling have really grabbed the attention of many API vendors, and it seems to be the solution with the most traction in the community today. Swagger is built using Scala. This means that when you package your application, you need to include the entire Scala runtime in your build, which may considerably increase the size of your deployable artifact (the EAR or WAR file). That said, Swagger is improving with each release. For example, the Swagger 2.0 release allows you to use YAML for describing APIs. So, keep a watch on this framework.

The Swagger framework has the following three major components:

- Server: This component hosts the RESTful web API descriptions for the services that the clients want to use.
- Client: This component uses the RESTful web API descriptions from the server to provide an automated interfacing mechanism to invoke the REST APIs.
- User interface: This part of the framework reads a description of the APIs from the server, renders it as a web page, and provides an interactive sandbox to test the APIs.

A quick overview of the Swagger structure

Let's take a quick look at the Swagger file structure before moving further. The Swagger 1.x file contents that describe the RESTful APIs are represented as JSON objects. From the Swagger 2.0 release onwards, you can also use the YAML format to describe the RESTful web APIs. This section discusses the Swagger file contents represented as JSON. The basic constructs that we'll discuss in this section for JSON are also applicable to the YAML representation of APIs, although the syntax differs.

When using the JSON structure for describing the REST APIs, the Swagger file uses a Swagger object as the root document object for describing the APIs. Here is a quick overview of the various properties that you will find in a Swagger object.

The following properties describe the basic information about the RESTful web application:

- swagger: This property specifies the Swagger version.
- info: This property provides metadata about the API.
- host: This property locates the server where the APIs are hosted.
- basePath: This property is the base path for the API. It is relative to the value set for the host field.
- schemes: This property specifies the transfer protocols for the RESTful web API, such as HTTP and HTTPS.

The following properties allow you to specify default values at the application level, which can be optionally overridden for each operation:

- consumes: This property specifies the default internet media types that the APIs consume. It can be overridden at the API level.
- produces: This property specifies the default internet media types that the APIs produce. It can be overridden at the API level.
- securityDefinitions: This property globally defines the security schemes for the document. These definitions can be referred to from the security schemes specified for each API.

The following properties describe the operations (REST APIs):

- paths: This property specifies the path to the API or the resources. The path must be appended to the basePath property in order to get the full URI.
- definitions: This property specifies the input and output entity types for operations on REST resources (APIs).
- parameters: This property specifies the parameter details for an operation.
- responses: This property specifies the response type for an operation.
- security: This property specifies the security schemes required to execute an operation.
- externalDocs: This property links to external documentation.

A complete Swagger 2.0 specification is available at Github. The Swagger specification was donated to the Open API Initiative, which aims to standardize the format of the API specification and bring uniformity in usage. Swagger 2.0 was enhanced as part of the Open API Initiative, and a new specification, OAS 3.0 (Open API Specification 3.0), was released in July 2017. The tools are still being worked on to support OAS 3.0. I encourage you to explore the Open API Specification 3.0, available at Github.
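To make the structure concrete, here is a minimal, hand-written Swagger 2.0 document tying together the properties described above. The path, titles, and response text are illustrative only and are not taken from the book:

    {
      "swagger": "2.0",
      "info": { "title": "HR Application API", "version": "1.0.0" },
      "host": "localhost:8080",
      "basePath": "/hrapp/webresources",
      "schemes": ["http"],
      "paths": {
        "/departments": {
          "get": {
            "summary": "Lists all departments",
            "produces": ["application/json"],
            "responses": {
              "200": { "description": "Successful retrieval of departments" }
            }
          }
        }
      }
    }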
Overview of Swagger APIs

The Swagger framework consists of many sub-projects in the Git repository, each built for a specific purpose. Here is a quick summary of the key projects:

- swagger-spec: This repository contains the Swagger specification and the project in general.
- swagger-ui: This project is an interactive tool to display and execute the Swagger specification files. It is based on swagger-js (the JavaScript library).
- swagger-editor: This project allows you to edit the YAML files. It was released as a part of Swagger 2.0.
- swagger-core: This project provides the Scala and Java libraries to generate the Swagger specifications directly from code. It supports JAX-RS, the Servlet APIs, and the Play framework.
- swagger-codegen: This project provides a tool that can read the Swagger specification files and generate the client and server code that consumes and produces the specifications.

In the next section, you will learn how to use the swagger-core project offerings to generate the Swagger file for a JAX-RS application.

Generating Swagger from JAX-RS

Both the WADL and RAML tools that we discussed in the previous sections use the JAX-RS annotation metadata to generate documentation for the APIs. The Swagger framework does not fully rely on the JAX-RS annotations, but offers a set of proprietary annotations for describing the resources. This helps in the following scenarios:

- The Swagger core annotations provide more flexibility for generating documentation compliant with the Swagger specifications.
- They allow you to use Swagger to generate documentation for web components that do not use the JAX-RS annotations, such as servlets and servlet filters.

The Swagger annotations are designed to work with JAX-RS, improving the quality of the API documentation generated by the framework.
Note that the swagger-core project is currently supported on the Jersey and Restlet implementations. If you are considering any other runtime for your JAX-RS application, check the respective product manual and ensure the support before you start using Swagger for describing APIs. Some of the commonly used Swagger annotations are as follows:

The @com.wordnik.swagger.annotations.Api annotation marks a class as a Swagger resource. Note that only classes that are annotated with @Api will be considered for generating the documentation. The @Api annotation is used along with class-level JAX-RS annotations such as @Produces and @Path.

Annotations that declare an operation are as follows:

@com.wordnik.swagger.annotations.ApiOperation: This annotation describes a resource method (operation) that is designated to respond to HTTP action methods, such as GET, PUT, POST, and DELETE.
@com.wordnik.swagger.annotations.ApiParam: This annotation is used for describing parameters used in an operation. It is designed for use in conjunction with the JAX-RS parameters, such as @Path, @PathParam, @QueryParam, @HeaderParam, @FormParam, and @BeanParam.
@com.wordnik.swagger.annotations.ApiImplicitParam: This annotation allows you to define the operation parameters manually. You can use this to override the @PathParam or @QueryParam values specified on a resource method with custom values. If you have multiple @ApiImplicitParam annotations for an operation, wrap them with @ApiImplicitParams.
@com.wordnik.swagger.annotations.ApiResponse: This annotation describes the status codes returned by an operation. If you have multiple responses, wrap them by using @ApiResponses.
@com.wordnik.swagger.annotations.ResponseHeader: This annotation describes a header that can be provided as part of the response.
@com.wordnik.swagger.annotations.Authorization: This annotation is used within either Api or ApiOperation to describe the authorization scheme used on a resource or an operation.

Annotations that declare API models are as follows:

@com.wordnik.swagger.annotations.ApiModel: This annotation describes the model objects used in the application.
@com.wordnik.swagger.annotations.ApiModelProperty: This annotation describes the properties present in the ApiModel object.

A complete list of the Swagger core annotations is available on GitHub. Having learned the basics of Swagger, it is time for us to move on and build a simple example to get a feel for the real-life use of Swagger in a JAX-RS application. As always, this example uses the Jersey implementation of JAX-RS.

Specifying the dependency to Swagger

To use Swagger in your Jersey 2 application, specify the dependency to the swagger-jersey2-jaxrs JAR. If you use Maven for building the source, the dependency to the swagger-core library will look as follows:

<dependency>
    <groupId>com.wordnik</groupId>
    <artifactId>swagger-jersey2-jaxrs</artifactId>
    <!-- use the appropriate version here; 1.5.x supports the Swagger 2.0 spec -->
    <version>1.5.1-M1</version>
</dependency>

You should be careful while choosing the swagger-core version for your product. Note that swagger-core 1.3 produces the Swagger 1.2 definitions, whereas swagger-core 1.5 produces the Swagger 2.0 definitions. The next step is to hook the Swagger provider components into your Jersey application.
This is done by configuring the Jersey servlet (org.glassfish.jersey.servlet.ServletContainer) in web.xml, as shown here:

<servlet>
    <servlet-name>jersey</servlet-name>
    <servlet-class>
        org.glassfish.jersey.servlet.ServletContainer
    </servlet-class>
    <init-param>
        <param-name>
            jersey.config.server.provider.packages
        </param-name>
        <param-value>
            com.wordnik.swagger.jaxrs.listing,
            com.packtpub.rest.ch7.swagger
        </param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>jersey</servlet-name>
    <url-pattern>/webresources/*</url-pattern>
</servlet-mapping>

To enable the Swagger documentation features, it is necessary to load the Swagger framework provider classes from the com.wordnik.swagger.jaxrs.listing package. The package names of the JAX-RS resource classes and provider components are configured as the value of the jersey.config.server.provider.packages init parameter. The Jersey framework scans the configured packages to identify the resource classes and provider components during the deployment of the application. Map the Jersey servlet to a request URI so that it responds to the REST resource calls that match the URI. If you prefer not to use web.xml, you can also use a custom Application subclass for (programmatically) specifying all the configuration entries discussed here. To try this option, refer to GitHub.

Configuring the Swagger definition

After specifying the Swagger provider components, the next step is to configure and initialize the Swagger definition. This is done by configuring the com.wordnik.swagger.jersey.config.JerseyJaxrsConfig servlet in web.xml, as follows:

<servlet>
    <servlet-name>Jersey2Config</servlet-name>
    <servlet-class>
        com.wordnik.swagger.jersey.config.JerseyJaxrsConfig
    </servlet-class>
    <init-param>
        <param-name>api.version</param-name>
        <param-value>1.0.0</param-value>
    </init-param>
    <init-param>
        <param-name>swagger.api.basepath</param-name>
        <param-value>
            http://localhost:8080/hrapp/webresources
        </param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

Here is a brief overview of the initialization parameters used for JerseyJaxrsConfig:

api.version: This parameter specifies the API version for your application
swagger.api.basepath: This parameter specifies the base path for your application

With this step, we have finished all the configuration entries for using Swagger in a JAX-RS (Jersey 2 implementation) application. In the next section, we will see how to use the Swagger metadata annotations on a JAX-RS resource class to describe the resources and operations.

Adding a Swagger annotation on a JAX-RS resource class

Let's revisit the DepartmentResource class used in the previous sections. In this example, we will enhance the DepartmentResource class by adding the Swagger annotations discussed earlier. We use @Api to mark DepartmentResource as a Swagger resource.
The @ApiOperation annotation describes the operation exposed by the DepartmentResource class:

import com.wordnik.swagger.annotations.Api;
import com.wordnik.swagger.annotations.ApiOperation;
import com.wordnik.swagger.annotations.ApiParam;
import com.wordnik.swagger.annotations.ApiResponse;
import com.wordnik.swagger.annotations.ApiResponses;
//Other imports are removed for brevity

@Stateless
@Path("departments")
@Api(value = "/departments", description = "Get departments details")
public class DepartmentResource {

    @ApiOperation(value = "Find department by id",
        notes = "Specify a valid department id",
        response = Department.class)
    @ApiResponses(value = {
        @ApiResponse(code = 400, message = "Invalid department id supplied"),
        @ApiResponse(code = 404, message = "Department not found") })
    @GET
    @Path("{id}")
    @Produces("application/json")
    public Department findDepartment(
            @ApiParam(value = "The department id", required = true)
            @PathParam("id") Integer id) {
        return findDepartmentEntity(id);
    }
    //Rest of the code is removed for brevity
}

To view the Swagger documentation, build the source and deploy it to the server. Once the application is deployed, you can navigate to http://<host>:<port>/<application-name>/<application-path>/swagger.json to view the Swagger resource listing in the JSON format. The Swagger URL for this example will look like the following:

http://localhost:8080/hrapp/webresources/swagger.json

The following sample Swagger representation is for the DepartmentResource class discussed in this section:

{
    "swagger": "2.0",
    "info": {
        "version": "1.0.0",
        "title": ""
    },
    "host": "localhost:8080",
    "basePath": "/hrapp/webresources",
    "tags": [
        { "name": "user" }
    ],
    "schemes": [ "http" ],
    "paths": {
        "/departments/{id}": {
            "get": {
                "tags": [ "user" ],
                "summary": "Find department by id",
                "description": "",
                "operationId": "loginUser",
                "produces": [ "application/json" ],
                "parameters": [
                    {
                        "name": "id",
                        "in": "path",
                        "description": "The department id",
                        "required": true,
                        "type": "integer",
                        "format": "int32"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "successful operation",
                        "schema": { "$ref": "#/definitions/Department" }
                    },
                    "400": { "description": "Invalid department id supplied" },
                    "404": { "description": "Department not found" }
                }
            }
        }
    },
    "definitions": {
        "Department": {
            "properties": {
                "departmentId": { "type": "integer", "format": "int32" },
                "departmentName": { "type": "string" },
                "_persistence_shouldRefreshFetchGroup": { "type": "boolean" }
            }
        }
    }
}

As mentioned at the beginning of this section, the Swagger 2.0 release onwards supports the YAML representation of APIs. You can access the YAML representation by navigating to swagger.yaml. For instance, in the preceding example, the following URI gives you the YAML file:

http://<host>:<port>/<application-name>/<application-path>/swagger.yaml

The complete source code for this example is available on the Packt website. You can download the example from the Packt website link that we mentioned at the beginning of this book, in the Preface section. In the downloaded source code, see the rest-chapter7-service-doctools/rest-chapter7-jaxrs2swagger project.

Generating a Java client from Swagger

The Swagger framework is packaged with the Swagger code generation tool as well (swagger-codegen-cli), which allows you to generate client libraries by parsing the Swagger documentation file. You can download the swagger-codegen-cli.jar file from the Maven Central repository by searching for swagger-codegen-cli on search.maven.org.
Alternatively, you can clone the Git repository and build the source locally by executing mvn install. Once you have swagger-codegen-cli.jar locally available, run the following command to generate the Java client for a REST API described in Swagger:

java -jar swagger-codegen-cli.jar generate -i <Input-URI-or-File-location-for-swagger.json> -l <client-language-to-generate> -o <output-directory>

The following example illustrates the use of this tool:

java -jar swagger-codegen-cli-2.1.0-M2.jar generate -i http://localhost:8080/hrapp/webresources/swagger.json -l java -o generated-sources/java

When you run this tool, it scans through the RESTful web API description available at http://localhost:8080/hrapp/webresources/swagger.json and generates a Java client source in the generated-sources/java folder. Note that the Swagger code generation process uses Mustache templates for generating the client source. If you are not happy with the generated source, Swagger lets you specify your own Mustache template files. Use the -t flag to specify your template folder. To learn more, refer to the README.md file on GitHub. To learn more about building robust, scalable, and secure RESTful web services using Java APIs, grab the latest edition of RESTful Java Web Services.
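As a closing addendum (not part of the book's example), you can quickly sanity-check the Swagger file that your application serves before feeding it to the code generator. The following sketch uses only the Python standard library and assumes the example URL used throughout this article:

import json
import urllib.request

# Hypothetical deployment URL; adjust host, port, and context root as needed.
url = 'http://localhost:8080/hrapp/webresources/swagger.json'

with urllib.request.urlopen(url) as response:
    spec = json.load(response)

# Print the spec version and every operation it documents.
print('Swagger version:', spec['swagger'])
for path, operations in spec.get('paths', {}).items():
    for verb in operations:
        print(verb.upper(), spec.get('basePath', '') + path)

For the DepartmentResource example above, this should print the single GET operation on /departments/{id}.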
Apple’s ReALM, Google DeepMind’s Gecko, X.ai's Grok 1.5, Salesforce AI’s Moira, Stability AI’s Stable Audio 2.0, TWIN-GPT, ChatGPT Instant usage

Merlyn Shelley
08 Apr 2024
12 min read
Subscribe to our Data Pro newsletter for the latest insights. Don't miss out – sign up today!

👋 Hello,

Welcome to DataPro #88 – Your portal to the innovations in Data Science & Machine Learning! 🚀

In this edition, you'll find:

⚙️ LLMs & GPTs Unleashed
TWIN-GPT: Digital Twins for Clinical Trials.
Apple’s ReALM: AI with contextual understanding.
Stability AI’s Stable Audio 2.0: Audio synthesis revolution.
Salesforce AI’s Moira: Enhancing customer engagement.
Google DeepMind’s Gecko: Versatile Text Embeddings.
X.ai's Grok 1.5: Enhanced reasoning and context.

✨ What's Fresh & Exciting
Distribute LLMs with llamafile: 5 Simple Steps.
Dockerized Python Environment: The Elegant Way.
Knowledge Distillation: Clone Powerful LLMs.
Sora’s Diffusion Transformer (DiT): A Deep Dive.
Generative AI: Copyright Reckoning.
OpenAI Agent: Function Calling Capabilities.

⚡ Industry Pulse
AWS & Mistral AI: Democratizing generative AI.
Amazon SageMaker: No-code to code-first ML.
Google Cloud Next: Database success stories.
Google’s SEEDS in Weather Forecasting: AI quantifies uncertainty.
Microsoft’s LLMs in the Imaginarium: Tool Learning.
OpenAI: Fine-tuning API and custom models.
ChatGPT: Instant usage.
Synthetic Voices: Challenges and Opportunities.

📚 Packt's Latest Gem
MATLAB for Machine Learning - Second Edition, by Giuseppe Ciaburro.

DataPro Newsletter is not just a publication; it’s a comprehensive toolkit for anyone serious about mastering the ever-changing landscape of data and AI. Grab your copy and start transforming your data expertise today!

📥 Feedback on the Weekly Edition
Take our weekly survey and get a free PDF copy of our best-selling book, "Interactive Data Visualization with Python - Second Edition." We appreciate your input and hope you enjoy the book!

Share your Feedback!

Cheers,
Merlyn Shelley
Editor-in-Chief, Packt

Sign Up | Advertise | Archives

🔰 GitHub Finds: Any of These Repos in Your Toolbox?

🛠️ UpstageAI/dataverse: Dataverse simplifies ETL pipelines in Python, providing a user-friendly solution for data processing and management, accessible to all.
🛠️ GAP-LAB-CUHK-SZ/gaustudio: GauStudio is a modular framework for 3D Gaussian Splatting, providing streamlined pipelines and tools for easier implementation and deployment.
🛠️ TencentARC/BrushNet: BrushNet is a text-guided image inpainting model that enhances pre-trained diffusion models, focusing on divided features and dense control.
🛠️ agiresearch/AIOS: AIOS embeds LLMs into OS, enhancing resource allocation, context switch, concurrent execution, tool service, access control, and toolkit availability for developers.
🛠️ jasonppy/VoiceCraft: VoiceCraft excels in speech editing and zero-shot text-to-speech, requiring only a few seconds of reference to clone or edit voices.

📚 Expert Insights from Packt Community
MATLAB for Machine Learning - Second Edition, by Giuseppe Ciaburro.

Anomaly Detection in MATLAB

Throughout the life cycle of a physical system, the occurrence of failures or malfunctions poses a potential threat to its normal functioning. To safeguard against critical interruptions, it becomes imperative to implement an anomaly detection system within the facility. Termed a fault diagnosis system, this mechanism is designed to identify potential malfunctions within the monitored system. The pursuit of fault detection stands as a pivotal and defining phase in maintenance interventions, demanding a systematic and deterministic approach to comprehensively analyze all conceivable causes that might have led to the malfunction.
Anomaly detection overview

Anomaly detection is a technique used in data analysis and ML to identify data points or patterns that deviate significantly from the expected or normal behavior within a dataset. Anomalies, also known as outliers, are data points that do not conform to most of the data and may indicate errors, fraud, unusual events, or other important information. Anomaly detection has various applications across different domains, such as cybersecurity, industrial quality control (QC), finance, healthcare, and more. To understand what is meant by this term, let's start with an overview of the different types of anomalies:

Point anomalies: These are individual data points that are considered anomalies, such as a single fraudulent transaction in a credit card dataset.
Contextual anomalies: These are anomalies that are context-dependent. A data point might not be an anomaly on its own but is unusual in a particular context or time, such as a sudden spike in web traffic during a holiday sale.
Collective anomalies: These are anomalies that are identified by examining a group of data points collectively. These anomalies involve patterns or relationships between data points.

There are several methods for addressing anomaly detection problems, ranging from simple statistical techniques to complex ML algorithms. The choice of method depends on the nature of the data and the specific problem you are trying to solve. Here are the most commonly used ones:

Statistical methods: Statistical techniques such as z-scores, percentiles, and boxplots can be used to identify anomalies based on deviations from the mean or median of the data distribution.
ML: Supervised, unsupervised, and semi-supervised ML algorithms can be used for anomaly detection. Some popular methods include Isolation Forest, One-Class Support Vector Machine (One-Class SVM), autoencoders (AEs), and k-means clustering.
Time series analysis: Specialized techniques are used for detecting anomalies in time series data, such as autoregressive (AR) models, exponential smoothing, and moving averages (MAs).
Density estimation: Methods such as kernel density estimation (KDE) and Gaussian Mixture Models (GMMs) are used to estimate the probability density function of the data and identify anomalies as low-density regions.
Deep learning (DL): Neural networks (NNs), especially deep AEs (DAEs) and recurrent NNs (RNNs), are used for anomaly detection in high-dimensional data or sequences.
Ensemble methods: Combining multiple anomaly detection models can improve overall performance and robustness.

In addressing anomaly detection problems, we face some challenges. For example, determining an appropriate threshold for defining anomalies can be difficult. Imbalanced datasets, where anomalies are rare, can make model training and evaluation tricky. Handling high-dimensional data and noisy datasets can also be challenging. Anomaly detection is a valuable tool for identifying rare but potentially important events or patterns in large datasets. The choice of method depends on the specific domain, data characteristics, and the nature of anomalies that need to be detected.

Discover more insights from "MATLAB for Machine Learning - Second Edition" by Giuseppe Ciaburro. Unlock access to the full book and a wealth of other titles with a 7-day free trial in the Packt Library. Start exploring today! Read Here!

⚡ Tech Tidbits: Stay Wired to the Latest Industry Buzz!
AWS ML Made Easy

🌀 AWS and Mistral AI commit to democratizing generative AI with a strengthened collaboration: The article discusses the growing use of generative AI applications across industries, facilitated by Amazon Bedrock. It highlights Mistral AI's Mistral Large model, now available on Amazon Bedrock, offering advanced language capabilities. This collaboration aims to provide customers with diverse model options to suit their specific business needs, promoting innovation in AI technology.

🌀 Seamlessly transition between no-code and code-first machine learning with Amazon SageMaker Canvas and Amazon SageMaker Studio: This post discusses Amazon SageMaker Studio, an integrated ML development environment, and SageMaker Canvas, a no-code ML tool, highlighting their features and integration for seamless collaboration between non-ML and ML experts.

Google Research

🌀 Get inspired: Database success stories at Google Cloud Next. This blog post previews Google Cloud Next '24, focusing on customers using Google Cloud databases for transformative purposes. It highlights sessions featuring Nuro, Lightricks, Bayer, Yahoo!, and Statsig, showcasing their innovative use cases.

🌀 Generative AI to quantify uncertainty in weather forecasting: Google is advancing weather forecasting with innovations like MetNet-3 and SEEDS, a generative AI model. SEEDS efficiently generates probabilistic ensembles, addressing the butterfly effect's uncertainty, and offers cost-effective solutions for extreme weather events.

Microsoft Research

🌀 LLMs in the Imaginarium: Tool Learning through Simulated Trial and Error. This research enhances large language models' (LLMs) tool usage accuracy through simulated trial and error (STE), inspired by biological systems. STE improves learning by simulating tool use scenarios, interacting with tools, and leveraging short- and long-term memory. Results show significant performance boosts over existing methods.

OpenAI Updates

🌀 Introducing improvements to the fine-tuning API and expanding our custom models program: This update discusses techniques to improve model performance, such as retrieval-augmented generation (RAG) and fine-tuning, and introduces new API features for developers to control their fine-tuning jobs, enhancing model quality and reducing costs and latency.

🌀 Start using ChatGPT instantly: This new initiative aims to make AI more accessible by allowing instant access to ChatGPT without the need to sign up. It targets those curious about AI's potential but hesitant to set up an account, offering a seamless experience for learning, creative inspiration, and answering questions.

🌀 Navigating the Challenges and Opportunities of Synthetic Voices: Voice Engine is a model by OpenAI that generates natural-sounding speech from text input and a short audio sample, closely resembling the original speaker. They're sharing insights from a small-scale preview, highlighting its potential for various applications like reading assistance and personalized responses in education.

Email Forwarded? Join DataPro Here!

🔍 From Bits to BERT: Keeping Up with LLMs & GPTs

🌀 TWIN-GPT: Digital Twins for Clinical Trials via LLM. The research explores virtual clinical trials' benefits in healthcare, emphasizing patient safety and cost reduction. Existing methods struggle with prediction accuracy due to limited data. TWIN-GPT, a proposed approach, uses large language models to create personalized digital twins, improving predictions and showcasing digital twins' potential in healthcare.
🌀 Apple’s ReALM: AI that can “See” to understand the context: ReALM (Reference Resolution As Language Modeling) addresses the challenge of context understanding, including non-conversational entities like on-screen elements. By leveraging Language Models (LLMs), it demonstrates significant improvements in reference resolution, even outperforming GPT-4, offering over 5% gains for on-screen references.

🌀 Stability AI’s Stable Audio 2.0: Stable Audio 2.0 introduces a groundbreaking AI-generated audio standard, offering high-quality, full tracks up to three minutes long at 44.1kHz stereo. It features audio-to-audio generation, honoring creator rights, and expands creative possibilities, available for free on the Stable Audio website.

🌀 Salesforce AI’s Moira: Moirai is a universal time series forecasting model designed to address diverse forecasting tasks across various domains, frequencies, and variables in a zero-shot manner. It tackles key challenges in forecasting and offers robust performance, making it valuable for IT operations, sales forecasting, and more.

🌀 Google DeepMind’s Gecko: Versatile Text Embeddings Distilled from LLMs. Gecko is a compact text embedding model that achieves strong retrieval performance by distilling knowledge from large language models (LLMs). Its two-step distillation process, generating synthetic paired data and refining data quality, outperforms larger models on the Massive Text Embedding Benchmark. Gecko with 256 dimensions outperforms all entries with 768 dimensions; Gecko with 768 dimensions competes with models 7x larger and 5x higher dimensional embeddings.

🌀 X.ai Unveils Grok 1.5: Enhanced Reasoning and Long Context Features. Grok-1.5, the latest version of x.ai's Grok model, offers improved reasoning and long context capabilities. It excels in coding and math tasks, scoring 50.6% on MATH and 90% on GSM8K benchmarks. Grok-1.5 can process long contexts up to 128K tokens and boasts robust infrastructure for large-scale training. Early testers and existing Grok users on the x.ai platform will soon have access to Grok-1.5, with further features expected to roll out gradually.

✨ On the Radar: Catch Up on What's Fresh

🌀 Distribute and Run LLMs with llamafile in 5 Simple Steps: This blog introduces llamaFile, a framework that simplifies using large language models (LLMs) by providing a one-file executable that runs locally without installation. It explains how to use llamaFile with the LLaVa model, a 7-billion-parameter model quantized to 4 bits, for tasks like chat, image uploading, and question-answering.

🌀 Setting A Dockerized Python Environment — The Elegant Way. This blog post demonstrates a more elegant method for setting up a dockerized Python development environment using VScode and the Dev Containers extension. It provides step-by-step instructions and prerequisites, including Docker Desktop, a Docker Hub account, and VScode with the Dev Containers extension installed. The tutorial focuses on using the official Python image (python:3.10) and explains the Dev Containers extension's role in creating an isolated VScode session inside a docker container.

🌀 Clone the Abilities of Powerful LLMs into Small Local Models Using Knowledge Distillation: This post explores the use of specialized, smaller-scale language models for specific NLP tasks, such as grammatical error correction. It discusses the process of constructing tailored models through data annotation and fine-tuning, and the use of knowledge distillation to automate labeling.
The post provides a workflow for distilling knowledge from a large language model to a smaller one, using prompts and APIs, and demonstrates this process in the context of building a grammatical error correction model.

🌀 Deep Dive into Sora’s Diffusion Transformer (DiT) by Hand: This blog introduces Sora, OpenAI's text-to-video model, explaining its unique approach combining diffusion transformer and transformer strength for video prediction. It explores key concepts like diffusion, dimension reduction, and noise addition, offering insights into how Sora converts text prompts into realistic videos. Ideal for AI enthusiasts and those interested in video generation technologies.

🌀 The Coming Copyright Reckoning for Generative AI: This blog explores the complexities of copyright law in America, particularly in the context of generative AI. It discusses key concepts like original works, fair use, and the implications of generative AI on copyright. It also delves into legal cases and future considerations, offering insights for data scientists and AI enthusiasts.

🌀 Create an Agent with OpenAI Function Calling Capabilities: This article explores the advancements and challenges in developing AI-powered applications in 2024. It discusses how AI streamlines app features for a better user experience and introduces OpenAI's Function Calling to simplify structured data extraction. The article also highlights the ongoing innovations and the future of AI applications.

See you next time!
A step by step guide to creating Odoo Addon Modules

Sugandha Lahoti
19 Jun 2018
15 min read
Odoo uses a client/server architecture in which clients are web browsers accessing the Odoo server via RPC. Both the server and client extensions are packaged as modules which are optionally loaded in a database. Odoo modules can either add brand new business logic to an Odoo system or alter and extend existing business logic. Everything in Odoo starts and ends with modules. In this article, we will cover the basics of creating Odoo addon modules. The recipes that we will cover are:

Creating and installing a new addon module
Completing the addon module manifest
Organizing the addon module file structure
Adding models

Our main goal here is to understand how an addon module is structured and the typical incremental workflow to add components to it. This post is an excerpt from the book Odoo 11 Development Cookbook - Second Edition by Alexandre Fayolle and Holger Brunn. With this book, you can create fast and efficient server-side applications using the latest features of Odoo v11. For this article, you should have Odoo installed. You are also expected to be comfortable in discovering and installing extra addon modules.

Creating and installing a new addon module

In this recipe, we will create a new module, make it available in our Odoo instance, and install it.

Getting ready

We will need an Odoo instance ready to use. For explanation purposes, we will assume Odoo is available at ~/odoo-dev/odoo, although any other location of your preference can be used. We will also need a location for our Odoo modules. For the purpose of this recipe, we will use a local-addons directory alongside the odoo directory, at ~/odoo-dev/local-addons.

How to do it...

The following steps will create and install a new addon module:

Change the working directory in which we will work and create the addons directory where our custom module will be placed:
$ cd ~/odoo-dev
$ mkdir local-addons

Choose a technical name for the new module and create a directory with that name for the module. For our example, we will use my_module:
$ mkdir local-addons/my_module

A module's technical name must be a valid Python identifier; it must begin with a letter, and only contain letters (preferably lowercase), numbers, and underscore characters.

Make the Python module importable by adding an __init__.py file:
$ touch local-addons/my_module/__init__.py

Add a minimal module manifest for Odoo to detect it. Create a __manifest__.py file with this line:
{'name': 'My module'}

Start your Odoo instance, including our module directory in the addons path:
$ odoo/odoo-bin --addons-path=odoo/addons/,local-addons/

If the --save option is added to the Odoo command, the addons path will be saved in the configuration file. Next time you start the server, if no addons path option is provided, this will be used.

Make the new module available in your Odoo instance; log in to Odoo using admin, enable the Developer Mode in the About box, and, in the Apps top menu, select Update Apps List. Now, Odoo should know about our Odoo module.

Select the Apps menu at the top and, in the search bar in the top-right, delete the default Apps filter and search for my_module. Click on its Install button, and the installation will be concluded.

How it works...

An Odoo module is a directory containing code files and other assets. The directory name used is the module's technical name. The name key in the module manifest is its title. The __manifest__.py file is the module manifest.
It contains a Python dictionary with information about the module, the modules it depends on, and the data files that it will load. In the example, a minimal manifest file was used, but in real modules, we will want to add a few other important keys. These are discussed in the Completing the module manifest recipe, which we will see next. The module directory must be Python-importable, so it also needs to have an __init__.py file, even if it's empty. To load a module, the Odoo server will import it. This will cause the code in the __init__.py file to be executed, so it works as an entry point to run the module Python code. Due to this, it will usually contain import statements to load the module Python files and submodules. Known modules can be installed directly from the command line using the --init or -i option. This list is initially set when you create a new database, from the modules found on the addons path provided at that time. It can be updated in an existing database with the Update Module List menu.

Completing the addon module manifest

The manifest is an important piece for Odoo modules. It contains important information about the module and declares the data files that should be loaded.

Getting ready

We should have a module to work with, already containing a __manifest__.py manifest file. You may want to follow the previous recipe to provide such a module to work with.

How to do it...

We will add a manifest file and an icon to our addon module:

To create a manifest file with the most relevant keys, edit the module __manifest__.py file to look like this:
{
    'name': "Title",
    'summary': "Short subtitle phrase",
    'description': """Long description""",
    'author': "Your name",
    'license': "AGPL-3",
    'website': "http://www.example.com",
    'category': 'Uncategorized',
    'version': '11.0.1.0.0',
    'depends': ['base'],
    'data': ['views.xml'],
    'demo': ['demo.xml'],
}

To add an icon for the module, choose a PNG image to use and copy it to static/description/icon.png.

How it works...

The manifest content is a regular Python dictionary, with keys and values. The example manifest we used contains the most relevant keys:

name: This is the title for the module.
summary: This is the subtitle with a one-line description.
description: This is a long description written in plain text or the ReStructuredText (RST) format. It is usually surrounded by triple quotes, which are used in Python to delimit multiline texts. For an RST quickstart reference, visit http://docutils.sourceforge.net/docs/user/rst/quickstart.html.
author: This is a string with the name of the authors. When there is more than one, it is common practice to use a comma to separate their names, but note that it still should be a string, not a Python list.
license: This is the identifier for the license under which the module is made available. It is limited to a predefined list, and the most frequent option is AGPL-3. Other possibilities include LGPL-3, Other OSI approved license, and Other proprietary.
website: This is a URL people should visit to know more about the module or the authors.
category: This is used to organize modules in areas of interest. The list of the standard category names available can be seen at https://github.com/odoo/odoo/blob/11.0/odoo/addons/base/module/module_data.xml. However, it's also possible to define other new category names here.
version: This is the module's version number. It can be used by the Odoo app store to detect newer versions for installed modules.
If the version number does not begin with the Odoo target version (for example, 11.0), it will be automatically added. Nevertheless, it will be more informative if you explicitly state the Odoo target version, for example, using 11.0.1.0.0 or 11.0.1.0 instead of 1.0.0 or 1.0.

depends: This is a list with the technical names of the modules it directly depends on. If none, we should at least depend on the base module. Don't forget to include any module defining XML IDs, Views, or Models referenced by this module. That will ensure that they all load in the correct order, avoiding hard-to-debug errors.
data: This is a list of relative paths to the data files to load with module installation or upgrade. The paths are relative to the module root directory. Usually, these are XML and CSV files, but it's also possible to have YAML data files.
demo: This is the list of relative paths to the files with demonstration data to load. These will only be loaded if the database was created with the Demo Data flag enabled.

The image that is used as the module icon is the PNG file at static/description/icon.png. Odoo is expected to have significant changes between major versions, so modules built for one major version are likely to not be compatible with the next version without conversion and migration work. Due to this, it's important to be sure about a module's Odoo target version before installing it.

There's more…

Instead of having the long description in the module manifest, it's possible to have it in its own file. Since version 8.0, it can be replaced by a README file, with either a .txt, .rst, or an .md (Markdown) extension. Otherwise, include a description/index.html file in the module. This HTML description will override a description defined in the manifest file. There are a few more keys that are frequently used:

application: If this is True, the module is listed as an application. Usually, this is used for the central module of a functional area.
auto_install: If this is True, it indicates that this is a "glue" module, which is automatically installed when all of its dependencies are installed.
installable: If this is True (the default value), it indicates that the module is available for installation.

Organizing the addon module file structure

An addon module contains code files and other assets such as XML files and images. For most of these files, we are free to choose where to place them inside the module directory. However, Odoo uses some conventions on the module structure, so it is advisable to follow them.

Getting ready

We are expected to have an addon module directory with only the __init__.py and __manifest__.py files. In this recipe, we suppose this is local-addons/my_module.

How to do it...

To create the basic skeleton for the addon module, perform the given steps:

Create the directories for code files:
$ cd local-addons/my_module
$ mkdir models
$ touch models/__init__.py
$ mkdir controllers
$ touch controllers/__init__.py
$ mkdir views
$ mkdir security
$ mkdir data
$ mkdir demo
$ mkdir i18n
$ mkdir -p static/description

Edit the module's top __init__.py file so that the code in subdirectories is loaded:
from . import models
from . import controllers

This should get us started with a structure containing the most used directories, similar to this one:
.
├── __init__.py
├── __manifest__.py
├── controllers
│   └── __init__.py
├── data
├── demo
├── i18n
├── models
│   └── __init__.py
├── security
├── static
│   └── description
└── views

How it works...
To provide some context, an Odoo addon module can have three types of files:

The Python code is loaded by the __init__.py files, where the .py files and code subdirectories are imported. Subdirectories containing Python code, in turn, need their own __init__.py.
Data files are declared in the data and demo keys of the __manifest__.py module manifest in order to be loaded. These are usually XML and CSV files for the user interface, fixture data, and demonstration data. There can also be YAML files, which can include some procedural instructions that are run when the module is loaded, for instance, to generate or update records programmatically rather than statically in an XML file.
Web assets such as JavaScript code and libraries, CSS, and QWeb/HTML templates also play an important part. These are declared through an XML file extending the master templates to add these assets to the web client or website pages.

The addon files are to be organized in these directories:

models/ contains the backend code files, creating the Models and their business logic. A file per Model is recommended, with the same name as the model, for example, library_book.py for the library.book model.
views/ contains the XML files for the user interface, with the actions, forms, lists, and so on. As with models, it is advised to have one file per model. Filenames for website templates are expected to end with the _template suffix.
data/ contains other data files with module initial data.
demo/ contains data files with demonstration data, useful for tests, training, or module evaluation.
i18n/ is where Odoo will look for the translation .pot and .po files. These files don't need to be mentioned in the manifest file.
security/ contains the data files defining access control lists, usually an ir.model.access.csv file, and possibly an XML file to define access Groups and Record Rules for row-level security.
controllers/ contains the code files for the website controllers, for modules providing that kind of feature.
static/ is where all web assets are expected to be placed. Unlike other directories, this directory name is not just a convention, and only files inside it can be made available for the Odoo web pages. They don't need to be mentioned in the module manifest, but will have to be referred to in the web template.

When adding new files to a module, don't forget to declare them either in __manifest__.py (for data files) or __init__.py (for code files); otherwise, those files will be ignored and won't be loaded.

Adding models

Models define the data structures to be used by our business applications. This recipe shows how to add a basic model to a module. We will use a simple book library example to explain this; we want a model to represent books. Each book has a name and a list of authors.

Getting ready

We should have a module to work with. We will use an empty my_module for our explanation.

How to do it...

To add a new Model, we add a Python file describing it and then upgrade the addon module (or install it, if it was not already done).
The paths used are relative to our addon module location (for example, ~/odoo-dev/local-addons/my_module/):

Add a Python file, models/library_book.py, with the following code:
from odoo import models, fields

class LibraryBook(models.Model):
    _name = 'library.book'
    name = fields.Char('Title', required=True)
    date_release = fields.Date('Release Date')
    author_ids = fields.Many2many(
        'res.partner',
        string='Authors'
    )

Add a models/__init__.py initialization file so that the model's code file is loaded, with the following code:
from . import library_book

Edit the module's top __init__.py file to have the models/ directory loaded by the module:
from . import models

Upgrade the Odoo module, either from the command line or from the Apps menu in the user interface. If you look closely at the server log while upgrading the module, you should see this line:
odoo.modules.registry: module my_module: creating or updating database table

After this, the new library.book model should be available in our Odoo instance. If we have the technical tools activated, we can confirm that by looking it up at Settings | Technical | Database Structure | Models.

How it works...

Our first step was to create a Python file where our new model was defined. Odoo models are objects derived from the Odoo Model Python class. When a new model is defined, it is also added to a central model registry. This makes it easier for other modules to make modifications to it later. Models have a few generic attributes prefixed with an underscore. The most important one is _name, providing a unique internal identifier to be used throughout the Odoo instance. The model fields are defined as class attributes. We began by defining the name field of the Char type. It is convenient for models to have this field, because, by default, it is used as the record description when referenced from other models. We also used an example of a relational field, author_ids. It defines a many-to-many relation between Library Books and the partners. A book can have many authors and each author can have written many books. Next, we must make our module aware of this new Python file. This is done by the __init__.py files. Since we placed the code inside the models/ subdirectory, we need the previous __init__ file to import that directory, which should, in turn, contain another __init__ file importing each of the code files there (just one, in our case). Changes to Odoo models are activated by upgrading the module. The Odoo server will handle the translation of the model class into database structure changes. Although no example is provided here, business logic can also be added to these Python files, either by adding new methods to the Model's class or by extending the existing methods, such as create() or write(); a short sketch of such an extension follows below. Thus, we learned about the structure of an Odoo addon module and, step by step, how to create a simple module from scratch. To know more about how to define access rules for your data, how to expose your data models to end users on the back end and on the front end, and how to create beautiful PDF versions of your data, read the book Odoo 11 Development Cookbook - Second Edition.
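As mentioned above, existing methods such as create() can be extended. The following is a minimal, hedged sketch (not from the book) of what such an override might look like for our library.book model; the title-normalization logic is purely illustrative:

from odoo import api, models

class LibraryBook(models.Model):
    _inherit = 'library.book'

    @api.model
    def create(self, vals):
        # Illustrative hook: tidy up the title before the record is stored.
        if vals.get('name'):
            vals['name'] = vals['name'].strip()
        # Delegate to the standard implementation to actually create the record.
        return super(LibraryBook, self).create(vals)

The _inherit attribute extends the existing model in place, so the override applies wherever library.book records are created.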
Enhancing Observability with Azure Native ISV Services and Third-Party Integrations

José Ángel Fernández, Manuel Lázaro Ramírez
02 Dec 2024
15 min read
This article is an excerpt from the book, "Cloud Observability with Azure Monitor", by José Ángel Fernández and Manuel Lázaro Ramírez. This book is your guide to understanding the dynamic landscape of cloud monitoring with Azure Monitor. You'll gain practical insights into designing the monitoring strategies for your Azure resources with the help of examples and best practices.

Introduction

As organizations strive to maintain robust and comprehensive monitoring solutions, leveraging Azure Native ISV (Independent Software Vendor) services becomes increasingly valuable. These services are specifically designed to integrate seamlessly with Azure, providing enhanced monitoring, analytics, and management capabilities that complement Azure's native tools. By incorporating ISV solutions, organizations can take advantage of specialized features, advanced analytics, and tailored monitoring capabilities that address unique business needs and operational requirements.

In this article, we will explore the Azure Native ISV services available for monitoring. We'll discuss the available service integration with Azure Monitor, their distinct advantages, and the added value they bring to your observability strategy. We will explore some of those services provided by Datadog, Elastic, Logz.io, Dynatrace, and New Relic. We'll discuss the options these services provide to integrate with the Azure platform, as well as the benefits they offer.

Azure Native Datadog

Azure Native Datadog is a powerful, cloud-native monitoring and security platform that integrates seamlessly with Azure. Designed to provide comprehensive visibility into the health and performance of your applications and infrastructure, Datadog offers robust features such as real-time metrics, advanced analytics, and customizable dashboards. With Azure Native Datadog, organizations can monitor Azure resources alongside other cloud and on-premises environments, enabling a unified approach to observability.

Datadog's integration with Azure enables the automatic discovery and monitoring of Azure resources, including virtual machines, databases, and services. It provides real-time monitoring through continuous collection and analysis of metrics, logs, and traces from your Azure environment. It supports both IaaS and PaaS environments, thanks to its extensive integration with more than 40 services.

Information collected can be used for advanced analytics and custom dashboards. You can utilize machine learning algorithms to detect anomalies and forecast trends, gain insights into application performance, and create detailed visualizations tailored to your specific needs, combining data from Azure and other sources.

Security is also relevant, thanks to its alerting and incident management capabilities. Set up proactive alerts and manage incidents efficiently to minimize downtime and impact. Improve your security inside Azure through its Cloud Security Management features.

By leveraging Azure Native Datadog, organizations benefit from single-pane-of-glass visibility in hybrid and multi-cloud environments. Its costs are integrated into your Azure monthly bill directly, and access is transparent through the single sign-on integration. Metrics and activity log ingestion are automatically configured, and installation of the custom Datadog agents can be automated for your virtual machines.
More information is available at https://learn.microsoft.com/en-us/azure/partner-solutions/datadog/create.

Azure Native Elastic Cloud

Azure Native Elastic is an integrated solution that combines the power of Elasticsearch, Kibana, and other Elastic Stack components with Azure's cloud capabilities. Elastic offers robust search, observability, and security solutions that help organizations gain deep insights into their Azure environments. By using Azure Native Elastic, you can seamlessly ingest, search, and visualize data from Azure resources, enabling advanced analytics and improved operational efficiency.

Elastic's integration with Azure provides a seamless experience for deploying and managing its Cloud-Native Observability Platform. It is provided as a Software-as-a-Service (SaaS) application through the Azure Marketplace, which centralizes log, metric, and trace analytics, simplifying the monitoring of Azure environments for Elastic clients.

Users can manage Elastic solutions directly through the Azure portal, implementing monitoring for cloud workloads via a streamlined workflow. Provisioning Elastic resources is facilitated by a custom resource provider, allowing the creation, provisioning, and management of Elastic resources within Azure, with Elastic managing the SaaS application and associated accounts.

It provides a similar experience to the previous solution through a single-pane-of-glass visibility platform, with a unified billing experience integrated into your Azure bill and transparent access to Elastic solutions through single sign-on integration. Metrics and activity log ingestion are automatically configured, and installation of the custom Elastic agents can be automated for your virtual machines.

More information is available at https://learn.microsoft.com/en-us/azure/partner-solutions/elastic/create.

Azure Native Logz.io

Azure Native Logz.io is a cloud-native observability platform that combines the best open-source tools – OpenSearch, OpenTelemetry, and Prometheus – in a unified solution. Logz.io provides advanced log management, metrics monitoring, and tracing capabilities, helping organizations achieve comprehensive observability across their Azure environments. With seamless integration and powerful analytics, Azure Native Logz.io enhances your ability to monitor and troubleshoot applications and infrastructure.

Logz.io's integration with Azure simplifies the deployment and management of observability tools. It is also provided as a SaaS application through the Azure Marketplace, which centralizes log, metric, and trace analytics. You can provision the Logz.io resources through a custom resource provider that creates, provisions, and manages Logz.io resources through the Azure portal. Logz.io runs the SaaS, and Azure provides the interface to manage the resources.

Azure Native Logz.io empowers organizations to enhance their observability strategy, ensuring the reliability and performance of their applications and infrastructure through integrated log, metric, and trace management.

More information is available at https://learn.microsoft.com/en-us/azure/partner-solutions/logzio/create.

Azure Native Dynatrace

Azure Native Dynatrace is a comprehensive observability platform designed to provide deep insights into the performance and health of your Azure applications and infrastructure. Dynatrace leverages artificial intelligence and automation to deliver precise answers, helping organizations optimize their operations and improve user experiences.
With seamless Azure integration, Dynatrace offers monitoring capabilities across cloud and hybrid environments.

Dynatrace's integration with Azure enables the automatic discovery and monitoring of Azure resources, offering a rich set of features, such as AI-driven monitoring, which uses AI to automatically detect anomalies, identify root causes, and predict potential issues, and full-stack observability, which monitors the entire stack, from infrastructure to applications, in real time.

Azure Native Dynatrace provides the same key benefits discussed in the previous solutions related to integration, billing, and automation of agent deployment and information collection.

More information is available at https://learn.microsoft.com/en-us/azure/partner-solutions/dynatrace/dynatrace-create.

Azure Native New Relic

Azure Native New Relic is a powerful observability platform that offers comprehensive monitoring and analytics capabilities for your Azure applications and infrastructure. Designed to provide real-time visibility and actionable insights, New Relic integrates seamlessly with Azure, enabling organizations to monitor the performance and health of their environments with precision. By leveraging Azure Native New Relic, you can optimize application performance, enhance user experiences, and ensure operational excellence.

New Relic's integration with Azure allows effortless monitoring of Azure resources, featuring continuous monitoring of applications and infrastructure for real-time insights, powerful analytics to gain a deeper understanding of performance metrics and user behavior, custom dashboards to visualize key performance indicators and trends, and distributed tracing to track and analyze end-to-end transactions across distributed systems, helping you to identify performance bottlenecks.

Adopting Azure Native New Relic provides the same key benefits discussed in the previous solutions related to integration, billing, and automation of agent deployment and information collection.

You can learn more at https://learn.microsoft.com/en-us/azure/partner-solutions/new-relic/new-relic-create.

Additional third-party services for integration

In addition to Azure Native ISV services, numerous third-party services also offer robust integration capabilities with Azure Monitor. These integrations extend the functionality of Azure Monitor, providing specialized features and advanced analytics that enhance your observability strategy. Leveraging these third-party services allows organizations to tailor their monitoring and security solutions to meet specific business needs, ensuring comprehensive visibility and control over their Azure environments.

Those third-party services are as follows:

IBM QRadar is a leading Security Information and Event Management (SIEM) solution that helps organizations detect and respond to security threats. Integrating QRadar with Azure Monitor allows you to centralize security event data from your Azure environment and gain deeper insights into potential security incidents. You can read more about it at https://www.ibm.com/docs/en/qsip/7.5?topic=extensions-azure.

Splunk is a powerful platform for searching, monitoring, and analyzing machine-generated data. Integrating Splunk with Azure Monitor enables you to collect, analyze, and visualize data from your Azure resources, enhancing your ability to monitor performance and detect issues.
More information about this is available at https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/.

Sumo Logic is a cloud-native, continuous intelligence platform for log management and analytics. Integrating Sumo Logic with Azure Monitor allows you to aggregate, monitor, and analyze log and metric data from your Azure resources, improving operational and security insights. More information is available at https://help.sumologic.com/docs/send-data/collect-from-other-data-sources/azure-monitoring/.

ArcSight is a leading SIEM solution that provides advanced threat detection and response capabilities. Integrating ArcSight with Azure Monitor allows you to centralize security event data and gain actionable insights to protect your Azure environment. Read more about it at https://www.microfocus.com/documentation/arcsight/arcsight-smartconnectors/#gsc.tab=0.

Syslog servers are a critical component of many IT infrastructures, providing centralized logging for network devices, servers, and applications. Integrating Syslog servers with Azure Monitor allows you to collect, store, and analyze Syslog data from your Azure environment, improving visibility and operational efficiency. Further information is available at https://learn.microsoft.com/en-us/azure/azure-monitor/agents/data-collection-syslog.

Conclusion

Azure Native ISV services and third-party integrations provide organizations with a diverse set of tools to optimize observability, enhance operational efficiency, and address unique monitoring challenges. By leveraging these solutions, businesses can achieve comprehensive visibility across their Azure environments, enabling proactive management, improved performance, and robust security. Whether it's integrating Datadog for real-time analytics, Elastic for advanced search capabilities, or New Relic for deep performance insights, these services empower organizations to tailor their monitoring strategies and unlock the full potential of Azure.

Author Bio

José Ángel Fernández has worked as a Microsoft Specialist and Cloud Solution Architect, specializing in advanced cloud migrations, with extensive technical expertise and a deep understanding of Azure solutions. He has been focused on the cloud for the last 11 years at Microsoft, starting at the same time virtual machines reached general availability and Azure Monitor was not yet a product.

José Ángel graduated with a degree in telecommunications engineering from the Technical University of Madrid in 2013. He later earned a degree in big data analytics from the Graduate School of Engineering and Basic Sciences of Charles III University of Madrid in 2020.

He resides in Madrid, Spain, with his wife, his three-year-old child, and an adopted black cat that has never brought him bad luck.

Manuel Lázaro Ramírez is a Microsoft Cloud Solution Architect with a wide technical breadth and deep understanding of Azure solutions. He has been focused on designing and implementing cloud architectures in different industries for the last 10 years.

Manuel graduated with a degree in pure and applied mathematics from Complutense University of Madrid in 2013 and later earned a master's degree in pure and applied mathematics from Complutense University of Madrid in 2014.

He resides in Madrid, Spain, with his wife, and his passion is developing code with friends and solving real-world business problems with cloud technology to deliver real value.

article-image-what-is-a-support-vector-machine
Packt Editorial Staff
16 Apr 2018
7 min read
Save for later

What is a support vector machine?

Support vector machines are machine learning algorithms whereby a model 'learns' to categorize data around a linear classifier. The linear classifier is, quite simply, a line that classifies - a line that distinguishes between two 'types' of data, like positive and negative sentiment. This gives you control over data, allowing you to easily categorize and manage different data points in a way that's useful too. This tutorial is an extract from Statistics for Machine Learning. But support vector machines do more than linear classification - they are multidimensional algorithms, which is why they're so powerful. Using something called a kernel trick, which we'll look at in more detail later, support vector machines are able to create non-linear boundaries. Essentially, they work by constructing a linear classifier in a higher-dimensional space, called a hyperplane. Support vector machines work on a range of different types of data, but they are most effective on data sets with very high dimensions relative to the number of observations, for example:

Text classification, in which word vectors give the data a very high dimensionality
Quality control of DNA sequencing, by labeling chromatograms correctly

Different types of support vector machines

Support vector machines are generally classified into three different groups:

Maximum margin classifiers
Support vector classifiers
Support vector machines

Let's take a look at them now.

Maximum margin classifiers

People often use the term maximum margin classifier interchangeably with support vector machines. They're the most common type of support vector machine, but as you'll see, there are some important differences. The maximum margin classifier tackles the problem of what happens when your data isn't quite clear or clean enough to draw a simple line between two sets - it helps you find the best line, or hyperplane, out of a range of options. The objective of the algorithm is to find the hyperplane that maximizes the distance to the nearest points of the two categories of data - this distance is the 'maximum margin', and the hyperplane sits comfortably within it. The hyperplane is defined by this equation:

β0 + β1X1 + β2X2 + … + βpXp = 0

So, this means that any data points that sit directly on the hyperplane have to satisfy this equation. There are also data points that will, of course, fall on either side of this hyperplane. These satisfy one of these inequalities:

β0 + β1X1 + β2X2 + … + βpXp > 0
β0 + β1X1 + β2X2 + … + βpXp < 0

You can represent the maximum margin classifier like this:

maximize M over β0, β1, …, βp
subject to Σj βj² = 1 (constraint 1)
yi(β0 + β1xi1 + β2xi2 + … + βpxip) ≥ M for every observation i (constraint 2)

Constraint 2 ensures that observations will be on the correct side of the hyperplane by taking the product of the coefficients with the x variables and, finally, with a class variable indicator. In the diagram below, you can see that we could draw a number of separate hyperplanes to separate the two classes (blue and red). However, the maximum margin classifier attempts to fit the widest slab (maximizing the margin between the positive and negative hyperplanes) between the two classes, and the observations touching both the positive and negative hyperplanes are the support vectors. It's important to note that in non-separable cases, the maximum margin classifier will not have a separating hyperplane - there's no feasible solution. This issue is solved with support vector classifiers.

Support vector classifiers

Support vector classifiers are an extended version of maximum margin classifiers. Here, some violations are 'tolerated' for non-separable cases, which means a best fit can still be found.
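To make the role of the tuning parameter concrete before going deeper, here is a minimal, illustrative scikit-learn sketch (added to this extract, not code from the book). Note that scikit-learn's C is a penalty on violations, roughly the inverse of the budget-style C used in the formulation below: a small scikit-learn C tolerates more violations and produces a wider margin.

import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Two overlapping 2-D point clouds serve as the two classes
X = np.vstack([rng.randn(50, 2) - 1, rng.randn(50, 2) + 1])
y = np.array([0] * 50 + [1] * 50)

for penalty in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=penalty).fit(X, y)
    # A weaker penalty (small C here) widens the margin, so more points
    # fall inside it and become support vectors
    print("C =", penalty, "->", clf.support_vectors_.shape[0], "support vectors")

Running this, you should see the number of support vectors shrink as the penalty grows, which is exactly the margin-width trade-off described in the text.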
In fact, in real-life scenarios we hardly ever find data with purely separable classes; most classes have at least a few observations that overlap. The mathematical representation of the support vector classifier is as follows, a slight correction to the constraints to accommodate error terms:

maximize M over β0, β1, …, βp and ε1, …, εn
subject to Σj βj² = 1 (constraint 1)
yi(β0 + β1xi1 + β2xi2 + … + βpxip) ≥ M(1 − εi) (constraint 2)
εi ≥ 0 (constraint 3)
Σi εi ≤ C (constraint 4)

In constraint 4, the C value is a non-negative tuning parameter that accommodates more or fewer overall errors in the model. A high value of C leads to a more robust model, whereas a lower value creates a more flexible model that permits fewer violations of the error terms. In practice, C is treated as a tuning parameter, as is usual with all machine learning models. The impact of changing the C value on the margins is shown in the two diagrams below. With a high value of C, the model is more tolerant and has space for violations (errors), as in the left-hand diagram, whereas with a lower value of C there is no scope for accepting violations, which leads to a reduction in margin width. C is a tuning parameter in support vector classifiers.

Support vector machines

Support vector machines are used when the decision boundary is non-linear. They're useful when it becomes impossible to separate the classes with support vector classifiers. The diagram below illustrates non-linearly separable cases for both 1 dimension and 2 dimensions: clearly, you can't classify these cases using support vector classifiers, whatever the cost value is. This is why you would then introduce something called the kernel trick. In the diagram below, a polynomial kernel of degree 2 has been applied to transform the data from 1-dimensional to 2-dimensional space. By doing so, the data becomes linearly separable in the higher dimension. In the left-hand diagram, the different classes (red and blue) are plotted on X1 only, whereas after applying the degree-2 mapping we now have two dimensions, X1 and X1² (the original dimension and a new, squared one). The degree of the polynomial kernel is a tuning parameter; you need to try various values to check where higher model accuracy is possible. In the 2-dimensional case, the kernel trick is applied in the same way with the degree-2 polynomial kernel: observations are classified successfully using a linear plane after projecting the data into higher dimensions.

Different types of kernel functions

Kernel functions are functions that, given the original feature vectors, return the same value as the dot product of their corresponding mapped feature vectors. Kernel functions do not explicitly map the feature vectors to a higher-dimensional space or calculate the dot product of the mapped vectors; kernels produce the same value through a different series of operations that can often be computed more efficiently. The main reason for using kernel functions is to eliminate the computational cost of deriving the higher-dimensional vector space from the given basic vector space, so that observations can be separated linearly in higher dimensions. Why does this matter? The derived vector space grows exponentially with the number of dimensions, and the computation becomes almost intractable even when you have only around 30 variables. The following example shows how the number of dimensions grows: when we have two variables, x and y, a polynomial degree-2 kernel needs to compute the x², y², and xy dimensions in addition.
Whereas, if we have three variables, x, y, and z, we need to calculate the x², y², z², xy, yz, xz, and xyz vector spaces. You will have realized by now that adding even one more dimension creates many more combinations. Hence, care needs to be taken to reduce the computational complexity; this is where kernels do wonders. Kernels are defined more formally in the following equation, where φ is the mapping to the higher-dimensional feature space:

K(x, x') = ⟨φ(x), φ(x')⟩

Polynomial kernels are often used, especially with degree 2. In fact, the inventor of support vector machines, Vladimir N. Vapnik, used a degree-2 kernel for classifying handwritten digits. Polynomial kernels are given by the following equation, where d is the degree:

K(x, x') = (1 + Σj xj x'j)^d

Radial Basis Function (RBF) kernels (sometimes called Gaussian kernels) are a good first choice for problems requiring non-linear models. A decision boundary that is a hyperplane in the mapped feature space can correspond to a decision boundary that is a hypersphere in the original space. The feature space produced by the Gaussian kernel can have an infinite number of dimensions, a feat that would be impossible otherwise. RBF kernels are represented by the following equation:

K(x, x') = exp(−‖x − x'‖² / (2σ²))

This is sometimes simplified as the following equation, with γ = 1/(2σ²):

K(x, x') = exp(−γ‖x − x'‖²)

It is advisable to scale the features when using support vector machines, and it is especially important when using the RBF kernel. When the width σ is small, the kernel gives you a pointed bump in the higher dimensions; a larger value gives you a softer, broader bump. A small σ will give you low-bias, high-variance solutions; a high σ will give you high-bias, low-variance solutions. (Note that in the simplified γ form the relationship is inverted, since a large γ corresponds to a small σ.) That is how you control the fit of the model when using RBF kernels. A short, illustrative scikit-learn sketch comparing these kernels appears after the links below.

Learn more about support vector machines

Support vector machines as a classification engine [read now]
10 machine learning algorithms every engineer needs to know [read now]
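As promised above, here is a short, hedged scikit-learn sketch (an illustration added to this extract, not from the book) comparing the kernels on a toy problem where no straight line can separate the classes. Keep in mind that scikit-learn's parameterizations differ slightly from the equations above; its polynomial kernel, for instance, is (gamma·⟨x, x'⟩ + coef0)^d.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
# Class 1 sits inside a circle, so no straight line can separate the classes
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)

for name, clf in [
    ("linear", SVC(kernel="linear")),
    ("poly, degree 2", SVC(kernel="poly", degree=2)),
    ("rbf, gamma=1", SVC(kernel="rbf", gamma=1.0)),
]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(name, "mean CV accuracy:", round(score, 2))

The linear kernel should hover near the majority-class baseline, while the degree-2 polynomial and RBF kernels separate the circle almost perfectly, mirroring the 1-D to 2-D projection argument above.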

article-image-facebook-releases-pytorch-1-3-with-named-tensors-pytorch-mobile-8-bit-model-quantization-and-more
Bhagyashree R
11 Oct 2019
5 min read
Save for later

Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more

Yesterday, at the PyTorch Developer Conference, Facebook announced the release of PyTorch 1.3. This release comes with three experimental features: named tensors, 8-bit model quantization, and PyTorch Mobile. Along with these exciting features, Facebook also announced the general availability of Google Cloud TPU support and a newly launched integration with Alibaba Cloud.

Key updates in PyTorch 1.3

Named tensors for more readable and maintainable code

Though tensors are the building blocks of modern machine learning, researchers have argued that they are "broken." Tensors have their share of shortcomings: they expose private dimensions, broadcast based on absolute position, and keep type information in the documentation. PyTorch 1.3 tries to solve this problem by introducing experimental support for named tensors, which were proposed by Sasha Rush, an Associate Professor at Cornell Tech. He has built a library called NamedTensor, which serves as a "thin wrapper" on the Torch tensor. This update introduces a few changes to the API: dimension access and reduction now use a 'dim' argument instead of an index, constructing and adding dimensions requires a "name" argument, and functions now broadcast based on set operations, not through heuristic ordering rules. (A short illustrative snippet appears at the end of this article.)

8-bit model quantization for mobile-optimized AI

Quantization in deep learning is the method of approximating a neural network that uses 32-bit floating-point numbers with one that uses a lower-precision numerical format. It is used to reduce the bandwidth and compute requirements of deep learning models, which is essential for on-device applications with limited memory and compute budgets. PyTorch 1.3 brings experimental support for 8-bit model quantization with the eager mode Python API for efficient deployment on servers and edge devices. This feature includes techniques like post-training quantization, dynamic quantization, and quantization-aware training. Moving from 32 bits to 8 bits can result in two to four times faster computation with one-quarter of the memory usage.

PyTorch Mobile for more efficient on-device machine learning

Running machine learning models directly on edge devices is of great importance, as it reduces latency. This is why PyTorch 1.3 introduces PyTorch Mobile, which enables "an end-to-end workflow from Python to deployment on iOS and Android." The current release is experimental. In future releases, we can expect PyTorch Mobile to come with build-level optimization, selective compilation, support for the QNNPACK quantized kernel libraries and ARM CPUs, further performance improvements, and more.

Model interpretability and privacy tools in PyTorch 1.3

Captum and Captum Insights

Captum is an easy-to-use model interpretability library for PyTorch. It is backed by state-of-the-art interpretability algorithms such as Integrated Gradients, DeepLIFT, and Conductance to help developers improve and troubleshoot their models. Developers can identify the features that contribute to a model's output and improve its design. Facebook has also released an early version of Captum Insights, an interpretability visualization widget built on top of Captum. It works across images, text, and other features to help users understand feature attribution. Check out Facebook's announcement to know more about Captum.

CrypTen

Machine learning via cloud-based platforms poses various security and privacy challenges.
Facebook writes, "In particular, users of these platforms may not want or be able to share unencrypted data, which prevents them from taking full advantage of ML tools." PyTorch 1.3 comes with CrypTen, a framework for privacy-preserving machine learning that aims to make secure computing techniques accessible to machine learning practitioners. You can find more about CrypTen on GitHub.

Libraries for multimodal AI systems

Detectron2: an object detection library implemented in PyTorch. It features support for the latest models and tasks and increased flexibility to aid computer vision research, along with improvements in maintainability and scalability to support production use cases.

Speech extensions for Fairseq: with this release, Fairseq, a framework for sequence-to-sequence applications such as language translation, includes support for end-to-end learning for speech and audio recognition tasks.

The release of PyTorch 1.3 started a discussion on Hacker News, and naturally many developers compared it with TensorFlow 2.0. Here's what one user commented: "This is a common trend for being second in the market when we see Pytorch and TensorFlow 2.0, TF 2.0 was created to compete directly with Pytorch pythonic implementation (Keras based, Eager execution)." They further added, "Facebook at least on PyTorch has been delivering a quality product. Although for us running production pipelines TF is still ahead in many areas (GPU, TPU implementation, TensorRT, TFX and other pipeline tools) I can see Pytorch catching up on the next couple of years which by my prediction many companies will be running serious and advanced workflows and we may be able to see a winner there." The named tensors implementation is being well received by the PyTorch community:

https://twitter.com/leopd/status/1182342855886376965
https://twitter.com/rasbt/status/1182647527906140161

These were some of the updates in PyTorch 1.3. Check out the official announcement by Facebook to know more.

PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more
PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Facebook open-sources PyText, a PyTorch based NLP modeling framework
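To make the two headline features a little more concrete, here is a short, hedged sketch of the APIs described in the announcement. Both features were experimental in 1.3, so exact behavior may vary across versions; the model shown is a placeholder, not a real workload.

import torch
import torch.nn as nn

# Named tensors: dimensions carry names instead of bare positions
imgs = torch.randn(2, 3, 32, 32, names=("N", "C", "H", "W"))
reduced = imgs.mean("C")      # reduce over the channel dimension by name
print(reduced.names)          # ('N', 'H', 'W')

# Dynamic quantization: convert the Linear layers of a trained model to int8
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)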
article-image-introducing-liferay-your-intranet
Packt
04 Sep 2015
32 min read
Save for later

Introducing Liferay for Your Intranet

In this article by Navin Agarwal, author of the book Liferay Portal 6.2 Enterprise Intranets, we will learn that Liferay is an enterprise application solution. It provides a lot of functionalities, which helps an organization to grow and is a one-solution package as a portal and content management solution. In this article, we will look at the following topics: The complete features you want your organization's intranet solution to have Reasons why Liferay is an excellent choice to build your intranet Where and how Liferay is used besides intranet portals Easy integration with other open source tools and applications Getting into more technical information about what Liferay is and how it works So, let's start looking at exactly what kind of site we're going to build. (For more resources related to this topic, see here.) Liferay Portal makes life easy We're going to build a complete corporate intranet solution using Liferay. Let's discuss some of the features your intranet portal will have. Hosted discussions Are you still using e-mail for group discussions? Then, it's time you found a better way! Running group discussions over e-mail clogs up the team's inbox—this means you have to choose your distribution list in advance, and that makes it hard for team members to opt in and out of the discussion. Using Liferay, we will build a range of discussion boards for discussion within and between teams. The discussions are archived in one place, which means that it's always possible to go back and refer to them later. On one level, it's just more convenient to move e-mail discussions to a discussion forum designed for the purpose. But once the forum is in place, you will find that a more productive group discussion takes place here than it ever did over e-mail. Collaborative documents using wikis Your company probably has guideline documents that should be updated regularly but swiftly lose their relevance as practices and procedures change. Even worse, each of your staff will know useful, productive tricks and techniques—but there's probably no easy way to record that knowledge in a way that is easy for others to find and use. We will see how to host wikis within Liferay. A wiki enables anybody to create and edit web pages and link all of those web pages together without requiring any HTML or programming skills. You can put your guideline documents into a wiki, and as practices change, your frontline staff can quickly and effortlessly update the guideline documentation. Wikis can also act as a shared notebook, enabling team members to collaborate and share ideas and findings and work together on documents. Team and individual blogs Your company probably needs frequent, chronological publications of personal thoughts and web links in the intranet. Your company probably has teams and individuals working on specific projects in order to share files and blogs about a project process and more. By using the Liferay Blog features, you can use HTML text editors to create or update files and blogs and to provide RSS feeds. Liferay provides an easy way for teams and individuals to share files with the help of blogs. Blogs provide a straightforward blogging solution with features such as RSS, user and guest comments, browsable categories, tags and labels, and a rating system. Liferay's RSS with the subscription feature provides the ability to frequently read RSS feeds from within the portal framework. 
At the same time, What You See Is What You Get (WYSIWYG) editors provide the ability to edit web content, including the blogs' content. Less technical people can use the WYSIWYG editor instead of sifting through complex code. Shared calendars Many companies require calendar information and share the calendar among users from different departments. We will see how to share a calendar within Liferay. The shared calendar can satisfy the basic business requirements incorporated into a featured business intranet, such as scheduling meetings, sending meeting invitations, checking for attendees' availability, and so on. Therefore, you can provide an environment for users to manage events and share calendars. Document management – CMS When there is a need for document sharing and document management, Liferay's Documents and Media library helps you with lots of features. The Documents and Media portlet allows you to add folders and subfolders for documents and media files, and also allows users to publish documents. It serves as a repository for all types of files and makes Content management systems (CMSes) available for intranets. The Documents and Media library portlet is equipped with customizable folders and acts as a web-based solution to share documents and media files among all your team members—just as a shared drive would. All the intranet users will be able to access the files from anywhere, and the content is accessible only by those authorized by administrators. All the files are secured by the permission layer by the administrator. Web content management – WCM Your company may have a lot of images and documents, and you may need to manage all these images and documents as well. Therefore, you require the ability to manage a lot of web content and then publish web content in intranets. We will see how to manage web content and how to publish web content within Liferay. Liferay Journal (Web Content) not only provides high availability to publish, manage, and maintain web content and documents, but it also separates content from the layout. Liferay WCM allows us to create, edit, and publish web content (articles). It also allows quick changes in the preview of the web content by changing the layout. It has built-in functionality, such as workflow, search, article versioning, scheduling, and metadata. Personalization and internalization All users can get a personal space that can be either made public (published as a website with a unique, friendly URL) or kept private. You can also customize how the space looks, what tools and applications are included, what goes into Documents and Media, and who can view and access all of this content. In addition, Liferay supports multiple languages, where you can select your own language. Multilingual organizations get out-of-the-box support for up to 45 languages. Users can toggle among different language settings with just one click and produce/publish multilingual documents and web content. Users can make use of the internalization feature to define the specific site in a localized language. Workflow, staging, scheduling, and publishing You can use a workflow to manage definitions, instances, and predetermined sequences of connected steps. Workflow can be used for web content management, assets, and so on. Liferay's built-in workflow engine is called Kaleo. It allows users to set up the review and publishing process on the web content article of any document that needs to end up on the live site. 
Liferay 6.2 integrates the powerful workflow and data capabilities of dynamic data lists in Kaleo Forms; this is only available in Liferay Enterprise Edition. Staging environments are integrated with Liferay's workflow engine. To have a review process for staged pages, you need to make sure you have a workflow engine configured and a staging setup in the workflow. As a content creator, you can update what you've created and publish it in a staging workflow. Other users can then review and modify it. Moreover, content editors can decide whether to publish web content from staging to live; that is, you can easily create and manage everything from a simple article of text and images to a fully functional website in staging and then publish it live. Before going live, you can schedule web content as well. For instance, you can publish web content immediately or schedule it for publishing on a specific date.

Social networks and Social Office

Liferay Portal supports social networks—you can easily manage your Google Plus, Facebook, MySpace, Twitter, and other social network accounts in Liferay. In addition, you can manage your instant messenger accounts, such as AIM, ICQ, Jabber, MSN, Skype, YM, and so on, smoothly from inside Liferay. Liferay Social Office gives us social collaboration on top of the portal—a fully virtual workspace that streamlines communication and builds up group cohesion. It provides a holistic enhancement to the way you and your colleagues work together. All components in Social Office are tied together seamlessly, getting everyone on the same page by sharing the same look and feel. More importantly, the dynamic activity tracking gives us a bird's-eye view of who has been doing what and when within each individual site. Using Liferay Social Office, you can enhance your existing personal workflow with social tools, keep your team up to date, and turn collective knowledge into collective action. Note that Liferay 6.2 supports the current version of Liferay Social Office, 3.0.

Liferay Sync and Marketplace

Liferay Sync is Liferay's newest product, designed to make file sharing as easy as a simple drag and drop! Liferay Sync is an add-on product for Liferay 6.1 CE, EE, and later versions that enables the end user to publish and access documents and files from multiple environments and devices, including Windows and Mac OS systems and iOS-based mobile platforms. Liferay Sync is one of the best features, and it is fully integrated into the Liferay platform. Liferay 6.1 introduced the new concept of the Marketplace, which lets developers build components or functionality and then release and share them with other users. It's a user-friendly, one-stop place to share apps. Liferay Marketplace provides the portal product with add-on features, with a new hub to share, browse, and download Liferay-compatible applications. In Liferay 6.2, Marketplace comes under App Manager, where all the app-related controls are available.

More features

The intranet also arranges staff members into teams and sites, provides a way of real-time IM and chatting, and gives each user an appropriate level of access. This means that they can get all the information they need and edit and add content as necessary but won't be able to mess with sensitive information that they have no reason to see. In particular, the portal provides an integrating framework so that you can integrate external applications easily.
For example, you can integrate external applications with the portal, such as Alfresco, OpenX, LDAP, SSO CAS, Orbeon Forms, Konakart, PayPal, Solr, and so on. In a word, the portal offers compelling benefits to today's enterprises—reduced operational costs, improved customer satisfaction, and streamlined business processes.

Everything in one place

All of these features are useful on their own. However, it gets better when you consider that all of these features will be combined into one easy-to-use searchable portal. A user of the intranet, for example, can search for a topic—let's say financial report—and find the following in one go:

Any group discussions about financial reports
Blog entries within the intranet concerning financial reports
Documents and files—perhaps the financial reports themselves
Wiki entries with guidelines on preparing financial reports
Calendar entries for meetings to discuss the financial report

Of course, users can also restrict their search to just one area if they already know exactly what they are looking for. Liferay provides other features, such as tagging, to make it even easier to organize information across the whole intranet. We will do all of this and more.

Introducing Palm Tree Publications

We are going to build an intranet for a fictional company as an example, focusing on how to install, configure, and integrate it with other applications, and also implement portals and plugins (portlets, themes, layout templates, hooks, and webs) within Liferay. By applying the instructions to your own business, you will be able to build an intranet to meet your own company's needs. "Palm Tree Publications" needs an intranet of its own, which we will call bookpub.com. The enterprise's global headquarters are in the United States. It has several departments—editorial, website, engineering, marketing, executive, and human resources. Each department has staff in the U.S., Germany, and India, or in all three places. The intranet site provides a site called "Book Street and Book Workshop" consisting of users who have an interest in reading books. The enterprise needs to integrate collaboration tools, such as wikis, discussion forums, blogs, instant messaging, mail, RSS, shared calendars, tagging, and so on. Palm Tree Publications has more advanced needs too: a workflow to edit, approve, and publish books. Furthermore, the enterprise has a lot of content, such as books, currently stored and managed in Alfresco.
In order to build the intranet site, the following functionality should be considered: Installing the portal, experiencing the portal and portlets, and customizing the portal and personal web pages Bringing the features of enabling document sharing, calendar sharing, and other collaboration within a business to the users of the portal Discussion forums—employees should be able to discuss book ideas and proposals Wikis—keeping track of information about editorial guidance and other resources that require frequent editing Dissemination of information via blogs—small teams working on specific projects share files and blogs about a project process Sharing a calendar among employees Web content management creation by the content author and getting approved by the publisher Document repository—using effective content management systems (CMSes), a natural fit for a portal for secure access, permissions, and distinct roles (such as writers, editors, designers, administrators, and so on) Collaborative chat and instant messaging, social network, Social Office, and knowledge management tools Managing a site named Book Street and Book Workshop that consists of users who have the same interest in reading books as staging, scheduling, and publishing web content related to books Federated search for discussion forum entries, blog posts, wiki articles, users in the directory, and content in both the Document and Media libraries; search by tags Integrating back-of-the-house software applications, such as Alfresco, Orbeon Forms, the Drools rule server, Jasper Server, and BI/Reporting Pentaho; strong authentication and authorization with LDAP; and single authentication to access various company sites besides the intranet site The enterprise can have the following groups of people: Admin: This group installs systems, manages membership, users, user groups, organizations, roles and permissions, security on resources, workflow, servers and instances, and integrates with third-party systems Executives: Executive management handles approvals Marketing: This group handles websites, company brochures, marketing campaigns, projects, and digital assets Sales: This group makes presentations, contracts, documents, and reports Website editors: This group manages pages of the intranet—writes articles, reviews articles, designs the layout of articles, and publishes articles Book editors: This group writes, reviews, and publishes books and approves and rejects the publishing of books Human resources: This group manages corporate policy documents Finance: This group manages accounts documents, scanned invoices and checks accounts Corporate communications: This group manages external public relations, internal news releases, and syndication Engineering: This group sets up the development environment and collaborates on engineering projects and presentation templates Introducing Liferay Portal's architecture and framework Liferay Portal's architecture supports high availability for mission-critical applications using clustering and the fully distributed cache and replication support across multiple servers. The following diagram has been taken from the Liferay forum written by Jorge Ferrer. This diagram depicts the various architectural layers and functionalities of portlets: Figure 1.1: The Liferay architecture The preceding image was taken from https://www.liferay.com/web/jorge.ferrer/blog/-/blogs/liferay-s-architecture-the-beginning-of-a-blog-series site blog. 
The Liferay Portal architecture is designed in such a way that it provides tons of features at one place: Frontend layer: This layer is the end user's interface Service layer: This contains the great majority of the business logic for the portal platform and all of the portlets included out of the box Persistence layer: Liferay relies on Hibernate to do most of its database access Web services API layer: This handles web services, such as JSON and SOAP In Liferay, the service layer, persistence layer, and web services API layer are built automatically by that wonderful tool called Service Builder. Service Builder is the tool that glues together all of Liferay's layers and that hides the complexities of using Spring or Hibernate under the hood. Service-oriented architecture Liferay Portal uses service-oriented architecture (SOA) design principles throughout and provides the tools and framework to extend SOA to other enterprise applications. Under the Liferay enterprise architecture, not only can the users access the portal from traditional and wireless devices, but developers can also access it from the exposed APIs via REST, SOAP, RMI, XML-RPC, XML, JSON, Hessian, and Burlap. Liferay Portal is designed to deploy portlets that adhere to the portlet API compliant with both JSR-168 and JSR-286. A set of useful portlets are bundled with the portal, including Documents and Media, Calendar, Message Boards, Blogs, Wikis, and so on. They can be used as examples to add custom portlets. In a word, the key features of Liferay include using SOA design principles throughout, such as reliable security, integrating the portal with SSO and LDAP, multitier and limitless clustering, high availability, caching pages, dynamic virtual hosting, and so on. Understanding Enterprise Service Bus Enterprise Service Bus (ESB) is a central connection manager that allows applications and services to be added quickly to an enterprise infrastructure. When an application needs to be replaced, it can easily be disconnected from the bus at a single point. Liferay Portal uses Mule or ServiceMix as ESB. Through ESB, the portal can integrate with SharePoint, BPM (such as the jBPM workflow engine and Intalio | BPMS engine), BI Xforms reporting, JCR repository, and so on. It supports JSR 170 for content management systems with the integration of JCR repositories, such as Jackrabbit. It also uses Hibernate and JDBC to connect to any database. Furthermore, it supports an event system with synchronous and asynchronous messaging and a lightweight message bus. Liferay Portal uses the Spring framework for its business and data services layers. It also uses the Spring framework for its transaction management. Based on service interfaces, portal-impl is implemented and exposed only for internal usage—for example, they are used for the extension environment. portal-kernel and portal-service are provided for external usage (or for internal usage)—for example, they are used for the Plugins SDK environment. Custom portlets, both JSR-168 and JSR-286, and web services can be built based on portal-kernel and portal-service. In addition, the Web 2.0 Mail portlet and the Web 2.0 Chat portlet are supported as well. More interestingly, scheduled staging and remote staging and publishing serve as a foundation through the tunnel web for web content management and publishing. Liferay Portal supports web services to make it easy for different applications in an enterprise to communicate with each other. 
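To give a flavor of these exposed APIs in practice, here is a hedged Python sketch calling one of Liferay's JSON web services. The portal URL, company ID, and credentials are placeholder assumptions; the /api/jsonws endpoint and the user/get-user-by-email-address method exist in Liferay 6.1+, but check your own portal's /api/jsonws browser for the exact method signatures.

import requests

PORTAL = "http://localhost:8080"      # assumption: a local Liferay instance
AUTH = ("test@liferay.com", "test")   # assumption: default demo credentials

# user/get-user-by-email-address mirrors UserService.getUserByEmailAddress()
resp = requests.post(
    PORTAL + "/api/jsonws/user/get-user-by-email-address",
    params={"companyId": 10154,       # assumption: your portal's company ID
            "emailAddress": "test@liferay.com"},
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())  # the user's details as a JSON object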
Java, .NET, and proprietary applications can work together easily because web services use XML standards. It also supports REST-style JSON web services for lightweight, maintainable code and supports AJAX-based user interfaces. Liferay Portal uses industry-standard, government-grade encryption technologies, including advanced algorithms, such as DES, MD5, and RSA. Liferay was benchmarked as one of the most secure portal platforms using LogicLibrary's Logiscan suite. Liferay offers customizable single sign-on (SSO) that integrates into Yale CAS, JAAS, LDAP, NTLM, CA Siteminder, Novell Identity Manager, OpenSSO, and more. Open ID, OpenAuth, Yale CAS, Siteminder, and OpenAM integration are offered by it out of the box. In short, Liferay Portal uses ESB in general with an abstraction layer on top of an enterprise messaging system. It allows integration architects to exploit the value of messaging systems, such as reporting, e-commerce, and advertisements. Understanding the advantages of using Liferay to build an intranet Of course, there are lots of ways to build a company intranet. What makes Liferay such a good choice to create an intranet portal? It has got the features we need All of the features we outlined for our intranet come built into Liferay: discussions, wikis, calendars, blogs, and so on are part of what Liferay is designed to do. It is also designed to tie all of these features together into one searchable portal, so we won't be dealing with lots of separate components when we build and use our intranet. Every part will work together with others. Easy to set up and use Liferay has an intuitive interface that uses icons, clear labels, and drag and drop to make it easy to configure and use the intranet. Setting up the intranet will require a bit more work than using it, of course. However, you will be pleasantly surprised by how simple it is—no programming is required to get your intranet up and running. Free and open source How much does Liferay cost? Nothing! It's a free, open source tool. Here, being free means that you can go to Liferay's website and download it without paying anything. You can then go ahead and install it and use it. Liferay comes with an enterprise edition too, for which users need to pay. In addition, Liferay provides full support and access to additional enterprise edition plugins/applications. Liferay makes its money by providing additional services, including training. However, the standard use of Liferay is completely free. Now you probably won't have to pay another penny to get your intranet working. Being open source means that the program code that makes Liferay work is available to anybody to look at and change. Even if you're not a programmer, this is still good for you: If you need Liferay to do something new, then you can hire a programmer to modify Liferay to do it. There are lots of developers studying the source code, looking for ways to make it better. Lots of improvements get incorporated into Liferay's main code. Developers are always working to create plugins—programs that work together with Liferay to add new features. Probably, for now, the big deal here is that it doesn't cost any money. However, as you use Liferay more, you will come to understand the other benefits of open source software for you. Grows with you Liferay is designed in a way that means it can work with thousands and thousands of users at once. 
No matter how big your business is or how much it grows, Liferay will still work and handle all of the information you throw at it. It also has features especially suited to large, international businesses. Are you opening offices in non-English speaking countries? No problem! Liferay has internationalization features tailored to many of the world's popular languages. Works with other tools Liferay is designed to work with other software tools—the ones that you're already using and the ones that you might use in the future—for instance: You can hook up Liferay to your LDAP directory server and SSO so that user details and login credentials are added to Liferay automatically Liferay can work with Alfresco—a popular and powerful Enterprise CMS (used to provide extremely advanced document management capabilities, which are far beyond what Liferay does on its own) Based on "standards" This is a more technical benefit; however, it is a very useful one if you ever want to use Liferay in a more specialized way. Liferay is based on standard technologies that are popular with developers and other IT experts and that confer the following benefits on users: Built using Java: Java is a popular programming language that can run on just about any computer. There are millions of Java programmers in the world, so it won't be too hard to find developers who can customize Liferay. Based on tried and tested components: With any tool, there's a danger of bugs. Liferay uses lots of well-known, widely tested components to minimize the likelihood of bugs creeping in. If you are interested, here are some of the well-known components and technologies Liferay uses—Apache ServiceMix, Mule, ehcache, Hibernate, ICEfaces, Java J2EE/JEE, jBPM, Activiti, JGroups, Alloy UI, Lucene, PHP, Ruby, Seam, Spring and AOP, Struts and Tiles, Tapestry, Velocity, and FreeMarker. Uses standard ways to communicate with other software: There are various standards established to share data between pieces of software. Liferay uses these so that you can easily get information from Liferay into other systems. The standards implemented by Liferay include AJAX, iCalendar and Microformat, JSR-168, JSR-127, JSR-170, JSR-286 (Portlet 2.0), JSR-314 (JSF 2.0), OpenSearch, the Open platform with support for web services, including JSON, Hessian, Burlap, REST, RMI, and WSRP, WebDAV, and CalDAV. Makes publication and collaboration tools Web Content Accessibility Guidelines 2.0 (WCAG 2.0) compliant: The new W3C recommendation is to make web content accessible to a wide range of people with disabilities, including blindness and low vision, deafness and hearing loss, learning disabilities, cognitive limitations, limited movement, speech disabilities, photosensitivity, and combinations of these. For example, the portal integrates CKEditor-standards support, such as W3C (WAI-AA and WCAG), 508 (Section 508). Alloy UI: The Liferay UI supports HTML 5, CSS 3, and Yahoo! User Interface Library 3 (YUI 3). Supports Apache Ant 1.8 and Maven 2: Liferay Portal can be built through Apache Ant by default, where you can build services; clean, compile, and build JavaScript CMD; build language native to ASCII, deploy, fast deploy; and so on. Moreover, Liferay supports Maven 2 SDK, providing Community Edition (CE) releases through public maven repositories as well as Enterprise Edition (EE) customers to install maven artifacts in their local maven repository. Bootstrap: Liferay 6.2 provides support for Twitter Bootstrap out of the box. 
With its fully responsive UI, the benefit of bootstrap is that it will support any device to render the content. Even content authors can use bootstrap markup and styles to make the content nicer. Many of these standards are things that you will never need to know much about, so don't worry if you've never heard of them. Liferay is better for using them, but mostly, you won't even know they are there. Other advantages of Liferay Liferay isn't just for intranets! Users and developers are building all kinds of different websites and systems based on Liferay. Corporate extranets An intranet is great for collaboration and information sharing within a company. An extranet extends this facility to suppliers and customers, who usually log in over the Internet. In many ways, this is similar to an intranet—however, there are a few technical differences. The main difference is that you create user accounts for people who are not part of your company. Collaborative websites Collaborative websites not only provide a secure and administrated framework, but they also empower users with collaborative tools, such as blogs, instant e-mail, message boards, instant messaging, shared calendars, and so on. Moreover, they encourage users to use other tools, such as tag administration, fine-grained permissions, delegable administrator privileges, enterprise taxonomy, and ad hoc user groups. By means of these tools, as an administrator, you can ultimately control what people can and cannot do in Liferay. In many ways, this is similar to an intranet too; however, there are a few technical differences. The main difference is that you use collaborative tools simply, such as blogs, instant e-mail, message boards, instant messaging, shared calendars, and so on. Content management and web publishing You can also use Liferay to run your public company website with content management and web publishing. Content management and web publishing are useful features in websites. It is a fact that the volume of digital content for any organization is increasing on a daily basis. Therefore, an effective CMS is a vital part of any organization. Meanwhile, document management is also useful and more effective when repositories have to be assigned to different departments and groups within the organization. Content management and document management are effective in Liferay. Moreover, when managing and publishing content, we may have to answer many questions, such as "who should be able to update and delete a document from the system?". Fortunately, Liferay's security and permissions model can satisfy the need for secure access and permissions and distinct roles (for example, writer, editor, designer, and administrator). Furthermore, Liferay integrates with the workflow engine. Thus, users can follow a flow to edit, approve, and publish content in the website. Content management and web publishing are similar to an intranet; however, there are a few technical differences. The main difference is that you can manage content and publish web content smoothly. Infrastructure portals Infrastructure portals integrate all possible functions, as we stated previously. This covers collaboration and information sharing within a company in the form of collaborative tools, content management, and web publishing. In infrastructure portals, users can create a unified interface to work with content, regardless of source via content interaction APIs. 
Furthermore, using the same API and the same interface as that of the built-in CMS, users can also manage content and publish web content from third-party systems, such as Alfresco, Vignette, Magnolia, FatWire, Microsoft SharePoint, and so on. Infrastructure portals are similar to an intranet; there are a few technical differences though. The main difference is that you can use collaborative tools, manage content, publish web content, and integrate other systems in one place. Why do you need a portal? The main reason is that a portal can serve as a framework to aggregate content and applications. A portal normally provides a secure and manageable framework where users can easily make new and existing enterprise applications available. In order to build an infrastructure portal smoothly, Liferay Portal provides an SOA-based framework to integrate third-party systems. Out-of-the-box portlets and features Liferay provides out-of-the-box (OOTB) portlets that have key features and can be used in the enterprise intranet very efficiently. These portlets are very scalable and powerful and provide the developer with the tools to customize it very easily. Let's see some of the most frequently used portlets in Liferay Portal. Content management Content management is a common feature in any web-based portal or website: The Web Content portlet has the features of full web publishing, office integration, and the asset library, which contains documents, images, and videos. This portlet also has the structure and templates that help with the designing of the web content's look and feel. Structure can be designed with the help of a visual editor with drag and drop. It has the integrated help feature with tooltips to name the attributes of the fields. The Asset Publisher portlet provides you with the feature to select any type of content/asset, such as wiki pages, web content, calendar events, message board messages, documents, media documents, and many more. It also allows us to use filter on them by types, categories, tags, and sources. The display settings provide configurable settings, which helps the content to be displayed to the end users perfectly. The Document and Media portlet is one of the most usable portlets to store any type of document. It allows you to store and manage your documents. It allows you to manage Liferay documents from your own machine's filesystem with the help of WebDAV integration. It has lots of new, built-in features, such as the inline document preview, image preview, and video player. Document metadata is displayed in document details, which makes it easier for you to review the metadata of the document. Also, Document and Media has features named checkin and checkout that helps editing the document in a group very easily. The Document and Media portlet has the multi-repository integration feature, which allows you to configure or mount any other repository very easily, such as SharePoint, Documentum, and Alfresco, utilizing the CMIS standard. Collaboration Collaboration features are generally ways in which users communicate with each other, such as the ones shown in the following list: The Dynamic data list portlet provides you with the facility of not writing a single line of code to create the form or data list. Say, for example, your corporate intranet needs the job posting done on a daily basis by the HR administrator. The administrator needs to develop the custom portlet to fulfill that requirement. 
Now, instead, the Dynamic Data List portlet allows the administrator to create a form for job postings without any custom development. It's very easy to create and display new data types. The Blog portlet is one of the best features of Liferay. Blog portlets have two other related portlets, namely Recent Bloggers and Blogs Aggregator. The Blog portlet provides the best possible means of chronological publication of personal thoughts and web links in the intranet. Blog portlets can be placed for users of different sites/departments under the respective site/department page. The Calendar portlet provides features to create and schedule events. It has many features that help users view the meeting schedule. The Message Board portlet is a full-featured forum solution with threaded views, categories, RSS capability, avatars, file attachments, previews, dynamic lists of recent posts, and forum statistics. Message Board portlets work with the fine-grained permissions and role-based access control model to give detailed levels of control to administrators and users. The Wiki portlet, like the Message Boards portlet, provides a straightforward wiki solution for both intranet and extranet portals that provides knowledge management among the users. It has all of the features you would expect in a state-of-the-art wiki. Again, it has features such as file attachment previews, content publishing, and versioning, and works with the fine-grained permission and role-based access control model. This again takes all the features of the Liferay platform. The Social Activity portlet allows you to tweak the measurements used to calculate user involvement within a site. The contribution and participation values determine the reward value of an action. It uses blog entry, wiki, and message board points to calculate the user's involvement in the site. The Marketplace portlet is placed inside the control panel. It's a hub for the applications provided by Liferay and other partners. You will find that many applications are free, while for certain applications you need to pay; it's more like an app store. This feature was introduced in Liferay Version 6.1. In the Liferay 6.2 control panel, under the Apps | Store link section, you will see apps that are stored in the Marketplace portlet. Liferay 6.2 comes with a new control panel that is very easy for the portal's admin users to manage. Liferay Sync is not a portlet; it's a new feature of Liferay that allows you to synchronize documents in Liferay Documents and Media with your local system. Liferay provides the Liferay Sync application, which has to be installed on your local system or mobile device.

News

RSS portlets provide RSS feeds. RSS portlets help publishers by letting them syndicate content automatically. They benefit readers who want to subscribe to timely updates from their favorite websites or to aggregate feeds from many sites into one place. A Liferay RSS portlet is fully customizable, and it allows you to set the URL of the site you would like to get feeds from. Social Activities portlets display portal-wide user activity, such as posting on message boards, creating wikis, and adding documents to Documents and Media. There are more portlets for social categories, such as User Statistics portlets, Group Statistics portlets, and Requests portlets. All these portlets are used for social media.

Tools

The Search portlet provides faceted search features.
When a search is performed, facet information will appear based on the results of the search. The number of each asset type and the most frequently occurring tags and categories, as well as their frequency, will all appear in the left-hand column of the portlet. It searches through Bookmarks, Blog Entries, Web Content Articles, Document Library Files, Users, Message Boards, and Wikis.

Finding more information on Liferay

In this article, we looked at what Liferay can do for your corporate intranet and briefly saw why it's a good choice. If you want more background information on Liferay, the best place to start is the Liferay corporate website (http://www.liferay.com) itself. You can find the latest news and events, various training programs offered worldwide, presentations, demonstrations, and hosted trials. More interestingly, Liferay eats its own dog food; its corporate website—with forums (called message boards), blogs, and wikis—is built by Liferay using its own products. It is a real demo of Liferay Portal's software. Liferay is 100 percent open source, and all downloads are available from the Liferay Portal website at http://www.liferay.com/web/guest/downloads/portal and the SourceForge website at http://sourceforge.net/projects/lportal/files. The source code repository is available at https://github.com/liferay. The Liferay website's wiki (http://www.liferay.com/web/guest/community/wiki) contains documentation, including a tutorial, user guide, developer guide, administrator guide, roadmap, and so on. The Liferay website's discussion forums can be accessed at http://www.liferay.com/web/guest/community/forums and the blogs at http://www.liferay.com/community/blogs/highlighted. The official plugins and the community plugins are available at http://www.liferay.com/marketplace, which is the best place to share your thoughts, get tips and tricks about Liferay implementation, and use and contribute community plugins. If you would like to file a bug or know more about the fixes in a specific release, you can visit the bug-tracking system at http://issues.liferay.com/.

Summary

In this article, we looked at what Liferay can offer your intranet and what we should consider while designing the company's enterprise site. We saw that our final intranet will provide shared documents, discussions, collaborative wikis, and more in a single, searchable portal. Liferay is a great choice for an intranet because it provides so many features and is easy to use, free and open source, extensible, and well integrated with other tools and standards. We also saw the other kinds of sites Liferay is good for, such as extranets, collaborative websites, content management, web publishing, and infrastructure portals. For the best example of an intranet and extranet, you can visit www.liferay.com. It will provide you with more background information.

Resources for Article:

Further resources on this subject: Working with a Liferay User / User Group / Organization [article] Liferay, its Installation and setup [article] Building your First Liferay Site [article]

article-image-8-programming-languages-to-learn-in-2019
Richard Gall
19 Dec 2018
9 min read
Save for later

8 programming languages to learn in 2019

Learning new skills takes time - that's why, before learning something, you need to know that what you're learning is going to be worthwhile. This is particularly true when deciding which programming language to learn. As we approach the new year, it's a good time to reflect on our top learning priorities for 2019. But which programming language should you learn in 2019? We've put together a list of the top programming languages to learn in the new year - as well as reasons you should learn them, and some suggestions for how you can get started. This will help you take steps to expand your skill set in 2019 in the way that's right for you. Want to learn a new programming language? We have thousands of eBooks and videos in our $5 sale to help you get to grips with everything from Python to Rust. Python Python has been a growing programming language for some time and it shows no signs of disappearing. There are a number of reasons for this, but the biggest is the irresistible allure of artificial intelligence. Once you know Python, performing some relatively sophisticated deep learning tasks becomes relatively easy, not least because of the impressive ecosystem of tools that surround it, like TensorFlow. But Python's importance isn't just about machine learning. Its flexibility means it has a diverse range of applications. If you're a full-stack developer, for example, you might find Python useful for developing backend services and APIs; equally, if you're in security or SRE, Python can be useful for automating aspects of your infrastructure to keep things secure and reliable. Put simply, Python is a useful addition to your skill set. Learn Python in 2019 if... You're new to software development You want to try your hand at machine learning You want to write automation scripts Want to get started? Check out these titles: Clean Code in Python Learning Python Learn Programming in Python with Cody Jackson Python for Everyday Life [Video] Go Go isn't quite as popular as Python, but it is growing rapidly. And its fans are incredibly vocal about why they love it: it's incredibly simple, yet also amazingly powerful. The reason for this lies in its creation: it was initially developed by Google, which wanted a programming language that could handle the complexity of the systems it was developing, without adding complexity in terms of knowledge and workflows. Combining the best aspects of functional and object-oriented programming, as well as featuring a valuable set of in-built development tools, the language is likely to go from strength to strength over the next 12 months. Learn Go in 2019 if… You're a backend or full-stack developer looking to increase your language knowledge You're working in ops or SRE You're looking for an alternative to Python Learn Go with these eBooks and videos: Mastering Go Cloud Native programming with Golang Hands-On Go Programming Hands-On Full-Stack Development with Go Rust In Stack Overflow's 2018 developer survey, Rust was revealed to be the best-loved language among the developers using it. 80% of respondents said they loved using it or wanted to use it. Now, while Rust lacks the simplicity of Go and Python, it does do what it sets out to do very well - systems programming that's fast, efficient, and secure. In fact, developers like to debate the merits of Rust and Go - it seems they occupy the minds of very similar developers. However, while they do have some similarities, there are key differences that should make it easier to decide which one you learn.
At a basic level, Rust is better for lower-level programming, while Go will allow you to get things done quickly. Rust does have a lot of rules, all of which will help you develop incredibly performant applications, but this does mean it has a steeper learning curve than something like Go. Ultimately it will depend on what you want to use the language for and how much time you have to learn something new.

Learn Rust in 2019 if...

You want to know why Rust developers love it so much
You do systems programming
You have a bit of time to tackle its learning curve

Learn Rust with these titles:

Rust Quick Start Guide
Building Reusable Code with Rust [Video]
Learning Rust [Video]
Hands-On Concurrency with Rust

TypeScript

TypeScript has quietly been gaining popularity over recent years. But it feels like 2018 has been the year that it has really broken through to capture the imagination of the wider developer community. Perhaps it's just Satya Nadella's magic... More likely, however, it's because we're now trying to do too much with plain old JavaScript. We simply can't build applications of the complexity we want without drowning in lines of code.

Essentially, TypeScript bulks up JavaScript, and makes it suitable for building applications of the future. It's no surprise that TypeScript is now fundamental to core JavaScript frameworks - even Google decided to use it in Angular. But it's not just for front-end JavaScript developers - there are examples of Java and C# developers looking closely at TypeScript, as it shares many features with established statically typed languages.

Learn TypeScript in 2019 if...

You're a JavaScript developer
You're a Java or C# developer looking to expand your horizons

Learn TypeScript in 2019:

TypeScript 3.0 Quick Start Guide
TypeScript High Performance
Introduction to TypeScript [Video]

Scala

Scala has been around for some time, but its performance gains over Java have seen it growing in popularity in recent years. It isn't the easiest language to learn - in comparison with other Java-related languages, like Kotlin, which haven't strayed far from their originator, Scala is almost an attempt to rewrite the rule book. It's a good multi-purpose programming language that brings together functional programming principles and the object-oriented principles you find in Java. It's also designed for concurrency, giving you a scale of power that just isn't possible otherwise.

The one drawback of Scala is that its ecosystem doesn't have the consistency that, say, Java's does. This does mean, however, that Scala expertise can be really valuable if you can dedicate the time to really getting to know the language.

Learn Scala in 2019 if...

You're looking for an alternative to Java that's more scalable and handles concurrency much better
You're working with big data

Learn Scala:

Learn Scala Programming
Professional Scala [Video]
Scala Machine Learning Projects
Scala Design Patterns - Second Edition

Swift

Swift started life as a replacement for Objective-C for iOS developers. While it's still primarily used by those in the Apple development community, there are some signs that Swift could expand beyond its beginnings to become a language of choice for server and systems programming. The core development team have consistently demonstrated a sense of purpose in building a language fit for the future, with versions 3 and 4 both showing significant signs of evolution.
Fast, relatively easy to learn, and secure, Swift has not only managed to deliver on its brief to offer a better alternative to Objective-C, it also looks well suited to many of the challenges programmers will be facing in the years to come.

Learn Swift in 2019 if...

You want to build apps for Apple products
You're interested in a new way to write server code

Learn Swift:

Learn Swift by Building Applications
Hands-On Full-Stack Development with Swift
Hands-On Server-Side Web Development with Swift

Kotlin

It makes sense for Kotlin to follow Swift. The parallels between the two are notable; it might be crude, but you could say that Kotlin is to Java what Swift is to Objective-C. There are, of course, some who feel that the comparison isn't favorable, with accusations that one language is merely copying the other, but perhaps the similarities shouldn't really be that surprising - they're both trying to do the same thing: provide a better alternative to what already exists.

Regardless of the debates, Kotlin is a particularly compelling language if you're a Java developer. It works extremely well, for example, with Spring Boot to develop web services. Certainly as monolithic Java applications shift into microservices, Kotlin is only going to become more popular.

Learn Kotlin in 2019 if...

You're a Java developer who wants to build better apps, faster
You want to see what all the fuss is about from the Android community

Learn Kotlin:

Kotlin Quick Start Guide
Learning Kotlin by building Android Applications
Learn Kotlin Programming [Video]

C

Most of the languages on this list are pretty new, but I'm going to finish with a classic that refuses to go away. C has a reputation for being complicated and hard to learn, but it remains relevant because you can find it in so much of the software we take for granted. It's the backbone of our operating systems, and it's used in everyday objects that have software embedded in them.

All this means that C is a language worth learning: it gives you an insight into how software actually works on machines. In a world where abstraction and accessibility rule the software landscape, getting beneath everything can be extremely valuable.

Learn C in 2019 if...

You're looking for a new challenge
You want to gain a deeper understanding of how software works on your machine
You're interested in developing embedded systems and virtual reality projects

Learn C:

Learn and Master C Programming For Absolute Beginners! [Video]

Understanding SQL Server recovery models to effectively backup and restore your database

Vijin Boricha
11 Apr 2018
9 min read
Before you even think about your backups, you need to understand the SQL Server recovery model that is used internally while the database is in operational mode. In this regard, today we will learn about SQL Server recovery models, backup, and restore.

SQL Server recovery models

A recovery model is about maintaining data in the event of a server failure. It defines the amount of information that SQL Server writes to the log file for the purpose of recovery. SQL Server has three database recovery models:

Simple recovery model
Full recovery model
Bulk-logged recovery model

Simple recovery model

This model is typically used for small databases and scenarios where data changes are infrequent. It is limited to restoring the database to the point when the last backup was created, which means that all changes made after the backup are lost and you will need to recreate them manually. The major benefit of this model is that the log file takes only a small amount of storage space. How and when to use it depends on the business scenario.

Full recovery model

This model is recommended when recovery from damaged storage is the highest priority and data loss should be minimal. SQL Server uses copies of the database and log files to restore the database. The database engine logs all changes to the database, including bulk operations and most DDL statements. If the transaction log file is not damaged, SQL Server can recover all data except any transactions that are in process at the time of failure (that is, not yet committed to the database file). All logged transactions give you the opportunity of point-in-time recovery, which is a really cool feature. A major limitation of this model is the large size of the log files, which can lead to performance and storage issues. Use it only in scenarios where every insert is important and loss of data is not an option.

Bulk-logged recovery model

This model is somewhere between simple and full. It uses database and log backups to recreate the database. Compared to the full recovery model, it uses less log space for CREATE INDEX and bulk load operations, such as SELECT INTO. Let's look at this example: SELECT INTO can load a table with 1,000,000 records in a single statement, and the log will only record the occurrence of these operations, not the details. This approach uses less storage space compared to the full recovery model. The bulk-logged recovery model is a good fit for databases which are used for ETL processes and data migrations.
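To make that trade-off concrete, here is a minimal T-SQL sketch of a typical ETL window, written in the same sqlcmd style as the listings below. The database and table names (StagingDB, dbo.SalesArchive, dbo.Sales) are hypothetical, purely for illustration:

1> -- Hypothetical database; switch to bulk-logged just for the load window
2> USE master
3> ALTER DATABASE StagingDB SET RECOVERY BULK_LOGGED
4> GO
1> -- SELECT INTO creates and fills the target table in one statement;
2> -- under bulk-logged, this operation is minimally logged
3> USE StagingDB
4> SELECT * INTO dbo.SalesArchive
5> FROM dbo.Sales
6> WHERE OrderDate < '2018-01-01'
7> GO
1> -- Return to full recovery; a log backup should follow, because the
2> -- bulk-logged window limits point-in-time recovery for that interval
3> USE master
4> ALTER DATABASE StagingDB SET RECOVERY FULL
5> GO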
SQL Server has a system database named model. This database is the template for each new database you create. If you use only the CREATE DATABASE statement without any additional parameters, it simply copies the model database with all its properties and metadata. It also inherits the default recovery model, which is full. So, the conclusion is that each new database will be in full recovery mode. This can be changed during and after the creation process.

Here is a SQL statement to check the recovery models of all your databases on a SQL Server on Linux instance:

1> SELECT name, recovery_model, recovery_model_desc
2> FROM sys.databases
3> GO

name                     recovery_model recovery_model_desc
------------------------ -------------- -------------------
master                   3              SIMPLE
tempdb                   3              SIMPLE
model                    1              FULL
msdb                     3              SIMPLE
AdventureWorks           3              SIMPLE
WideWorldImporters       3              SIMPLE

(6 rows affected)

The following DDL statement will change the recovery model for the model database from full to simple:

1> USE master
2> ALTER DATABASE model
3> SET RECOVERY SIMPLE
4> GO

If you now execute the SELECT statement again to check the recovery models, you will notice that model now has different properties.

Backup and restore

Now it's time for SQL coding and implementing backup/restore operations in our own environments.

1. First, let's create a full database backup of our University database:

1> BACKUP DATABASE University
2> TO DISK = '/var/opt/mssql/data/University.bak'
3> GO

Processed 376 pages for database 'University', file 'University' on file 1.
Processed 7 pages for database 'University', file 'University_log' on file 1.
BACKUP DATABASE successfully processed 383 pages in 0.562 seconds (5.324 MB/sec)

2. Now let's check the content of the table Students:

1> USE University
2> GO
Changed database context to 'University'
1> SELECT LastName, FirstName
2> FROM Students
3> GO

LastName        FirstName
--------------- ----------
Azemovic        Imran
Avdic           Selver
Azemovic        Sara
Doe             John

(4 rows affected)

3. As you can see, there are four records. Let's now simulate a large import from the AdventureWorks database, Person.Person table. We will adjust the PhoneNumber data to fit our nvarchar(13) column. But first we will drop the unique index UQ_user_name so that we can quickly import a large amount of data:

1> DROP INDEX UQ_user_name
2> ON dbo.Students
3> GO
1> INSERT INTO Students (LastName, FirstName, Email, Phone, UserName)
2> SELECT T1.LastName, T1.FirstName, T2.PhoneNumber, NULL, 'user.name'
3> FROM AdventureWorks.Person.Person AS T1
4> INNER JOIN AdventureWorks.Person.PersonPhone AS T2
5> ON T1.BusinessEntityID = T2.BusinessEntityID
6> WHERE LEN (T2.PhoneNumber) < 13
7> AND LEN (T1.LastName) < 15 AND LEN (T1.FirstName) < 10
8> GO

(10661 rows affected)

4. Let's check the new row count:

1> SELECT COUNT (*) FROM Students
2> GO

-----------
10665

(1 rows affected)

Note: As you can see, the table now has 10,665 rows (10,661 + 4). But don't forget that we had created a full database backup before the import procedure.

5. Now, we will create a differential backup of the University database:

1> BACKUP DATABASE University
2> TO DISK = '/var/opt/mssql/data/University-diff.bak'
3> WITH DIFFERENTIAL
4> GO

Processed 216 pages for database 'University', file 'University' on file 1.
Processed 3 pages for database 'University', file 'University_log' on file 1.
BACKUP DATABASE WITH DIFFERENTIAL successfully processed 219 pages in 0.365 seconds (4.676 MB/sec).

6. If you want to see the state of the .bak files on the disk, first enter superuser mode with sudo su. This is necessary because a regular user does not have access to the data folder.

7. Now let's test the transaction log backup of the University database log file.
However, first you will need to make some changes inside the Students table:

1> UPDATE Students
2> SET Phone = 'N/A'
3> WHERE Phone IS NULL
4> GO
1> BACKUP LOG University
2> TO DISK = '/var/opt/mssql/data/University-log.bak'
3> GO

Processed 501 pages for database 'University', file 'University_log' on file 1.
BACKUP LOG successfully processed 501 pages in 0.620 seconds (6.313 MB/sec)

Note: The next steps test the restore options for the full and differential backup procedures.

8. First, restore the full database backup of the University database. Remember that the Students table had four records before the first backup, and it currently has 10,665 (as we checked in step 4):

1> ALTER DATABASE University
2> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
3> RESTORE DATABASE University
4> FROM DISK = '/var/opt/mssql/data/University.bak'
5> WITH REPLACE
6> ALTER DATABASE University SET MULTI_USER
7> GO

Nonqualified transactions are being rolled back. Estimated rollback completion: 0%.
Nonqualified transactions are being rolled back. Estimated rollback completion: 100%.
Processed 376 pages for database 'University', file 'University' on file 1.
Processed 7 pages for database 'University', file 'University_log' on file 1.
RESTORE DATABASE successfully processed 383 pages in 0.520 seconds (5.754 MB/sec).

Note: Before the restore procedure, the database is switched to single-user mode. This way we close all connections that could abort the restore procedure. In the last step, we switch the database back to multi-user mode.

9. Let's check the number of rows again. You will see the database is restored to its initial state, before the import of more than 10,000 records from the AdventureWorks database:

1> SELECT COUNT (*) FROM Students
2> GO

-------
4

(1 rows affected)

10. Now it's time to restore the content of the differential backup and return the University database to its state after the import procedure:

1> USE master
2> ALTER DATABASE University
3> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
4> RESTORE DATABASE University
5> FROM DISK = N'/var/opt/mssql/data/University.bak'
6> WITH FILE = 1, NORECOVERY, NOUNLOAD, REPLACE, STATS = 5
7> RESTORE DATABASE University
8> FROM DISK = N'/var/opt/mssql/data/University-diff.bak'
9> WITH FILE = 1, NOUNLOAD, STATS = 5
10> ALTER DATABASE University SET MULTI_USER
11> GO

Processed 376 pages for database 'University', file 'University' on file 1.
Processed 7 pages for database 'University', file 'University_log' on file 1.
RESTORE DATABASE successfully processed 383 pages in 0.529 seconds (5.656 MB/sec).
Processed 216 pages for database 'University', file 'University' on file 1.
Processed 3 pages for database 'University', file 'University_log' on file 1.
RESTORE DATABASE successfully processed 219 pages in 0.309 seconds (5.524 MB/sec).
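Before relying on any of these .bak files in a real recovery, it is also worth validating that they are readable. A minimal sketch using the standard RESTORE VERIFYONLY command, pointed at the full backup created in step 1:

1> -- Checks that the backup set is complete and readable;
2> -- it does not validate the consistency of the data inside it
3> RESTORE VERIFYONLY
4> FROM DISK = '/var/opt/mssql/data/University.bak'
5> GO

If the file is intact, SQL Server reports that the backup set is valid.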
Now we'll look at a really cool feature of SQL Server: backup compression. A backup can be a very large file, and if a company creates backups on a daily basis, you can do the math on the amount of storage required. Disk space is cheap today, but it is not free. As a database administrator on SQL Server on Linux, you should consider any possible option to optimize and save money. Backup compression is just that kind of feature: it compresses the backup as it is created, much as a ZIP or RAR tool would afterwards, so you save time, space, and money.

Let's consider a full database backup of the University database. The uncompressed file is about 3 MB. After we create a new one with compression, the size should be reduced. The compression ratio mostly depends on the data types inside the database; it is not a magic wand, but it can save space. The following SQL command will create a full database backup of the University database and compress it:

1> BACKUP DATABASE University
2> TO DISK = '/var/opt/mssql/data/University-compress.bak'
3> WITH NOFORMAT, INIT, SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
4> GO

Now exit to bash, enter superuser mode, and type the following ls command to compare the sizes of the backup files:

tumbleweed:/home/dba # ls -lh /var/opt/mssql/data/U*.bak

As you can see, the compressed backup is 676 KB, around five times smaller. That is a huge space saving without any additional tools. SQL Server on Linux also offers a security feature for backups.

We learned about SQL Server recovery models and how to efficiently back up and restore our database. You can learn more about transaction logs and the elements of a backup strategy from the book SQL Server on Linux. Also, check out:

Getting to know SQL Server options for disaster recovery
Get SQL Server user management right
How to integrate SharePoint with SQL Server Reporting Services

ChatGPT for Ladder Logic

M.T. White
22 Aug 2023
17 min read
Introduction

ChatGPT is slowly becoming a pivotal player in software development. It is being used by countless developers to help produce quality, robust code. However, most of these developers are using ChatGPT for text-based programming languages like C++ or Java. There are few, if any, tutorials on how ChatGPT can be utilized to write Ladder Logic code. As such, this tutorial is dedicated to exploring how and why ChatGPT can be used as a tool by traditional Ladder Logic programmers.

Why use ChatGPT for Ladder Logic?

The first step in learning how to leverage ChatGPT is to learn why to use the system. First of all, ChatGPT is not a programmer, nor is it designed to replace programmers in any way, shape, or form. However, it can be a handy tool for people who are not sure how to complete a task, need to produce some code in a crunch, and so on. To effectively use ChatGPT, a person has to know how to properly produce a statement, refine that statement, and, if necessary, write subsequent statements that carry the right amount of information for ChatGPT to produce a useful result. In other words, a ChatGPT user still has to be competent, but when used correctly, the AI system can produce code much faster than a human can, especially if the human is inexperienced at a given task.

In terms of industrial automation, ChatGPT can be an especially attractive tool. It is no secret that many PLC programmers are not formally trained developers. It is common for PLC programmers to be maintenance technicians, electricians, or other types of engineers. In any case, many people who have to write complex PLC software have little more than previous experience to guide them. When faced with a complex situation and few available resources, such a programmer can be lost with no obvious path to a solution. This is where ChatGPT can be utilized, as a user can pose questions and task the system with finding solutions. With that, how do we use ChatGPT at a basic level?

How to use ChatGPT?

The key to using ChatGPT is writing what are called prompts. In a lay sense, a prompt is a query or command that ChatGPT runs; it can be thought of as a task you ask ChatGPT to do. For example, a person could input the following prompt into ChatGPT:

Tell me how to install a Beckhoff PLC.

In this case, ChatGPT will give you a high-level overview of the basics of installing a Beckhoff PLC. However, a prompt like this will usually not return any useful results unless you want general, high-level information. Generally, to get a viable answer, you'll need to hone the prompt with detailed information and exceptions. Learning to write effective prompts is a skill that is still in the early stages of development. It is important to remember that ChatGPT is a novel tool, which means that the IT and automation industries as a whole are still learning how to use it effectively. However, a general flow for writing and refining prompts is summarized in Figure 1.

Figure 1 – Prompt Refining Flow

This is just a general logic flow for massaging a prompt to get the desired result. However, sometimes it is necessary, and easier, to run subsequent prompts instead of refining existing ones. Either way, it may take a couple of tries to get what you need out of ChatGPT. Regardless, to explore how to use ChatGPT, let's start by creating a basic AND gate.
AND Configuration

As everyone knows, an AND configuration is simply two contacts in series. For the coil to turn on, both contacts have to be on at the same time. This is a relatively simple Ladder Logic program to write and understand, so we're going to use it as a test for ChatGPT. To begin, we're going to use the following as a prompt:

Write a ladder logic program for an AND gate

When this prompt is run, it'll generate the following network:

NETWORK
--| |--[ ]---[ ]---( )
   |          |
   |----[ ]---( )

Figure 2 – AND Gate: Attempt 1 With Description

In this example, an AND gate is produced, but ChatGPT also generates an extra rung that wasn't asked for and isn't necessary for the task at hand. It is not easy to determine what the extra rung does or why it was included; as such, this is a prime example of needing to refine the prompt so that it produces only what is needed. To do this, let's refine the prompt to the following:

Write only the necessary contacts and coils for an AND gate in Ladder Logic

When this query is run, it will produce the network in Figure 3:

NETWORK
--| |--[ ]---[ ]---( )

Figure 3 – Refined AND Gate: Attempt 2 With Description

In this refined attempt, ChatGPT produces a very simple network of contacts in series. There are no extra rungs or ambiguous instructions. In short, the moral of this exercise is that we started with a general prompt that produced a working result, but a result that needed to be refined. We then refined that prompt and produced a high-quality network with only the necessary instructions for a working AND gate. With that, let's look at something a little more complex: timer networks.
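For reference - and this is my own sketch, not something ChatGPT produced - the same behavior can be written in IEC 61131-3 Structured Text, which is often easier to sanity-check than an ASCII rung. The variable names here are illustrative:

VAR
    Input1 : BOOL;  (* first contact in the series *)
    Input2 : BOOL;  (* second contact in the series *)
    Coil   : BOOL;  (* output coil *)
END_VAR

(* The coil is energized only when both contacts are TRUE,
   exactly like two normally open contacts wired in series. *)
Coil := Input1 AND Input2;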
Timer Programs

As every PLC programmer knows, most PLC programs include a timer of some type. So, with that in mind, let's explore how ChatGPT handles timer programs. To begin, let's have ChatGPT generate a simple 10-minute timer with the following prompt:

Write a ladder logic program for a 10 minute timer.

Depending on your usage of ChatGPT, you may get a result without the variables labeled, or even in Structured Text. However, the prompt can be adjusted to produce the desired output by using subsequent prompts akin to the following:

I need this in ladder logic

or,

assign the variables in the diagram

As stated before, you won't have to rewrite the initial prompt. All you have to do is run these statements as if they were your initial prompt. Depending on the prompts you run, your result should be something akin to the following:

VAR
    Timer: TON;         (* Timer for the 10-minute duration *)
    StartButton: BOOL;  (* Input for starting the timer *)
    Done: BOOL;         (* Output to indicate when the timer is done *)
END_VAR

NETWORK
-----[StartButton]----[/Timer]-----
  |                            |
  |                            |
  |                            |
---|----------------------------|---- NO
                               |
                               |
                              ---[Done]---

In the above ladder logic program:

StartButton represents the input contact that starts the timer.
/Timer represents the timer coil, indicating that the timer is active and counting.
Done represents the output contact, which becomes closed (energized) when the timer reaches 10 minutes.

When the StartButton input contact is activated, the timer starts counting. Once the timer reaches 10 minutes, the Done output contact will close, indicating that the timer is done. When the StartButton input contact is deactivated, the timer stops counting, and the Done output contact opens.

Please note that the specific syntax and representation of timers may vary depending on the PLC programming software or the manufacturer's conventions. Adjust the syntax and parameters as per the specific requirements of your PLC system.

As anyone who has programmed in Ladder Logic before can see, this isn't standard notation for a timer function. However, you probably noticed that each prompt generates a relatively short description of how the snippet is supposed to work. If you read the subsequent output, you can piece together the symbolic nature of the snippet. With that, let's move on to something a little more advanced: a cascading timer network that has one timer set for 10 minutes and another set for 15, and that powers an LED when the final timer turns on. To do this, we can input the following prompt:

Write a cascading timer Ladder Logic Network that has one timer set for 10 minutes and the other set for 15 minutes. Turn on an LED when both timers turn on. Give the necessary variables as well.

This prompt will produce the following:

VAR
    Timer1: TON;     // Timer 1 (10 minutes)
    Timer2: TON;     // Timer 2 (15 minutes)
    LED: BOOL;       // Output LED
END_VAR

NETWORK
    // Timer 1 (10 minutes)
    Timer1(IN:= TRUE, PT:= T#10M);  // Set Timer 1 to 10 minutes duration

    // Timer 2 (15 minutes)
    Timer2(IN:= Timer1.Q, PT:= T#15M);  // Start Timer 2 when Timer 1 completes

    // LED turns on when both timers complete
    LED := Timer1.Q AND Timer2.Q;
END_NETWORK

In this ladder logic program:

Timer1 is a timer variable of type TON set for a 10-minute duration.
Timer2 is a timer variable of type TON set for a 15-minute duration.
LED is a boolean variable representing the output LED.

Here's a breakdown of the logic:

Timer1 is triggered to start with the input IN:= TRUE and is set to a preset time of 10 minutes (PT:= T#10M).
Timer2 is started when Timer1 completes its timing sequence (IN:= Timer1.Q), and it is set to a preset time of 15 minutes (PT:= T#15M).
The LED is turned on (LED := Timer1.Q AND Timer2.Q) when both Timer1 and Timer2 have completed their timing sequences.

You can adjust the timing durations of the timers according to your specific requirements in your PLC programming software.
Additionally, you can connect the LED output variable to the appropriate LED indicator in your setup.

As can be seen, this isn't our desired output. We want Ladder Logic, and this is Structured Text. In short, ChatGPT got confused: if you read the description, ChatGPT thinks its result is Ladder Logic. Since this isn't what we were looking for, we can add a subsequent prompt that reads:

I need this program in Ladder Logic

This prompt will produce the following output:

NETWORK
--|----[ ]----[/]---[ ]----[/]---[ ]----[/]---[ ]--+
  |              |              |              |
  +---[ ]----[/]---[ ]----[/]---[ ]---[/]---[ ]--+
     Timer1         Timer2

  +---[ ]---+    +---[ ]---+
  |         |    |         |
  +---[/]---+    +---[/]---+
     LED            LED

In this ladder logic program:

Each [ ] represents a normally open (NO) contact.
/ represents a normally closed (NC) contact.
The timers Timer1 and Timer2 are cascaded to achieve the desired timing sequence.
The LED output is controlled based on the completion of both timers.

Here's a breakdown of the logic:

Timer1 is a timer with a preset time of 10 minutes.
Timer2 is a timer with a preset time of 15 minutes.
When Timer1 completes its timing sequence, it energizes the output coil [/], and the LED associated with Timer1 turns on.
Timer2 is enabled when the output coil [/] of Timer1 is energized.
When Timer2 completes its timing sequence, it energizes the output coil [/], and the LED associated with Timer2 turns on.
The LED output is controlled by the parallel combination of Timer1's output coil [/] and Timer2's output coil [/]. The LED turns on only when both timers have completed.

Please note that the ladder logic diagram provided is a visual representation, and the actual implementation may vary depending on the PLC programming software or hardware you are using.

This output is closer to what we're looking for, but depending on a person's experience level, it probably won't be much help. This particular diagram isn't very descriptive of how everything is supposed to fit together, so if a person had to translate it to specific programming software, the diagram would take a while to fully digest and implement.
As such, we're going to have to add another prompt:

I need the variables assigned to the proper contacts.

This subsequent prompt will produce the following:

NETWORK
--|----[StartButton]----[/Timer1.Enabled]---[Timer1.TimerDN]----[/Timer2.Enabled]---[Timer2.TimerDN]--+
  |                                                                                                   |
  +---[Timer1.Timer.Q]----[/LED1]---+---[Timer2.Timer.Q]----[/LED2]---+
                                    |                                 |
                                    +---[/LED]------------------------+

In this ladder logic program:

StartButton is a boolean variable representing the input contact [StartButton] that starts the timer sequence.
Timer1.Enabled and Timer2.Enabled are boolean variables representing the input contacts [/Timer1.Enabled] and [/Timer2.Enabled] that enable the respective timers.
Timer1.TimerDN and Timer2.TimerDN are boolean variables representing the output contacts [Timer1.TimerDN] and [Timer2.TimerDN] that indicate the completion of the respective timers.
Timer1.Timer.Q and Timer2.Timer.Q are boolean variables representing the output contacts [Timer1.Timer.Q] and [Timer2.Timer.Q] that indicate the running state of the respective timers.
LED1 and LED2 are boolean variables representing the output contacts [/LED1] and [/LED2] that control the LEDs associated with Timer1 and Timer2.
LED is a boolean variable representing the output contact [/LED] that controls the final LED indicating the completion of both timers.

Please note that the actual variable names may vary depending on the PLC programming software or hardware you are using. You can assign these variables in your PLC programming software and connect them to the appropriate input and output terminals to implement the desired cascading timer logic.

This result is much better, as it shows how to wire the network together while the description gives a clear overview of how everything works. With this, a person with a basic knowledge of Ladder Logic will be able to piece it together and port the rungs to other PLC programming software.

Summary

ChatGPT can be leveraged as an excellent tool to help Ladder Logic developers concoct specialized programs. However, ChatGPT is NOT a replacement for programmers, and to effectively use the system, a person must be skilled enough to write descriptive prompts and interpret the results. This means that though ChatGPT is an excellent tool, it does not have the intuition or the skill to fully replace a programmer.

A big part of using ChatGPT is learning to write and refine prompts, as well as subsequent follow-up prompts. These prompts are a developing art form that will probably shape the next iteration of software development. For now, the art of using ChatGPT and similar systems is novel, and there aren't yet any definitive standards that govern how to use them effectively, especially when it comes to graphical programming such as Ladder Logic. When used by a knowledgeable person with a basic grasp of PLC programming and ChatGPT, it can be a great way of getting over hurdles that could otherwise take hours or days to solve.

Author Bio

M.T. White has been programming since the age of 12. His fascination with robotics flourished when he was a child programming microcontrollers such as Arduino. M.T. currently holds an undergraduate degree in mathematics and a master's degree in software engineering, and is currently working on an MBA in IT project management. M.T.
is currently working as a software developer for a major US defense contractor and is an adjunct CIS instructor at ECPI University. His background mostly stems from the automation industry, where he programmed PLCs and HMIs for many different types of applications. M.T. has programmed many different brands of PLCs over the years and has developed HMIs using many different tools.

Author of the book: Mastering PLC Programming

Building C++ game play engines in finite state machine pattern [Tutorial]

Amarabha Banerjee
14 Jun 2018
20 min read
One of the most important aspects of game development is creating game states, which help with different tasks like controlling game flow, managing different game windows, and so on. Here in this article, we are going to show you how you can create gameplay systems with C++ that will help you to manage game states and empower you to control different game functionalities efficiently.

We use game states in many different ways: for example, to control the game flow, to handle the different ways characters can act and react, even for simple menu navigation. Needless to say, states are an important requirement for a strong and manageable code base. There are many different types of state machines; the one we will focus on in this section is the Finite State Machine (FSM) pattern. We will be creating an FSM pattern that will help you to create a more generic and flexible state machine.

This article is taken from the book Mastering C++ Game Development written by Mickey Macdonald. This book shows you how you can create interesting and fun-filled games with C++.

There are a few ways we can implement a simple state machine in our game. One way would be to simply use a switch case set up to control the states and an enum structure for the state types. An example of this would be as follows:

enum PlayerState
{
    Idle,
    Walking
};
...
PlayerState currentState = PlayerState::Idle; //A holder variable for the current state
...
// A simple function to change states
void ChangeState(PlayerState nextState)
{
    currentState = nextState;
}

void Update(float deltaTime)
{
    ...
    switch (currentState)
    {
        case PlayerState::Idle:
            ... //Do idle stuff
            //Change to next state
            ChangeState(PlayerState::Walking);
            break;
        case PlayerState::Walking:
            ... //Do walking stuff
            //Change to next state
            ChangeState(PlayerState::Idle);
            break;
    }
    ...
}

Using a switch/case like this can be effective for a lot of situations, but it does have some strong drawbacks. What if we decide to add a few more states? What if we decide to add branching and more if conditionals? The simple switch/case we started out with has suddenly become very large and undoubtedly unwieldy. Every time we want to make a change or add some functionality, we multiply the complexity and introduce more chances for bugs to creep in.

We can help mitigate some of these issues and provide more flexibility by taking a slightly different approach and using classes to represent our states. Through the use of inheritance and polymorphism, we can build a structure that will allow us to chain together states and provide the flexibility to reuse them in many situations. Let's walk through how we can implement this in our demo examples, starting with the base class we will inherit from in the future, IState:

...
namespace BookEngine
{
    class IState
    {
    public:
        IState() {}
        virtual ~IState() {}
        // Called when a state enters and exits
        virtual void OnEntry() = 0;
        virtual void OnExit() = 0;
        // Called in the main game loop
        virtual void Update(float deltaTime) = 0;
    };
}

As you can see, this is just a very simple class that has a constructor, a virtual destructor, and three pure virtual functions that each inherited state must override. OnEntry will be called as the state is first entered and will only execute once per state change. OnExit, like OnEntry, will only be executed once per state change and is called when the state is about to be exited. The last function is the Update function; this will be called once per game loop and will contain much of the state's logic.
Although this seems very simple, it gives us a great starting point to build more complex states. Now let's implement this basic IState class in our examples and see how we can use it for one of the common needs of a state machine: creating game states.

First, we will create a new class called GameState that will inherit from IState. This will be the new base class for all the states our game will need. The GameState.h file consists of the following (note the public inheritance, which is required for the polymorphic use of these states):

#pragma once
#include <BookEngine/IState.h>

class GameState : public BookEngine::IState
{
public:
    GameState();
    ~GameState();
    //Our overrides
    virtual void OnEntry() = 0;
    virtual void OnExit() = 0;
    virtual void Update(float deltaTime) = 0;
    //Added specialty function
    virtual void Draw() = 0;
};

The GameState class is very much like the IState class it inherits from, except for one key difference. In this class, we add a new virtual method, Draw(), that all classes which inherit from GameState will implement. Each time we use IState and create a new specialized base class - player state, menu state, and so on - we can add these new functions to customize it to the requirements of the state machine. This is how we use inheritance and polymorphism to create more complex states and state machines.

Continuing with our example, let's now create a new GameState. We start by creating a new class called GameWaiting that inherits from GameState. To make it a little easier to follow, I have grouped all of the new GameState-inherited classes into one set of files, GameStates.h and GameStates.cpp. The GameStates.h file will look like the following:

#pragma once
#include "GameState.h"

class GameWaiting : public GameState
{
    virtual void OnEntry() override;
    virtual void OnExit() override;
    virtual void Update(float deltaTime) override;
    virtual void Draw() override;
};

class GameRunning : public GameState
{
    virtual void OnEntry() override;
    virtual void OnExit() override;
    virtual void Update(float deltaTime) override;
    virtual void Draw() override;
};

class GameOver : public GameState
{
    virtual void OnEntry() override;
    virtual void OnExit() override;
    virtual void Update(float deltaTime) override;
    virtual void Draw() override;
};

Nothing new here; we are just declaring the functions for each of our GameState classes. Now, in our GameStates.cpp file, we can implement each individual state's functions as described in the preceding code:

#include "GameStates.h"

void GameWaiting::OnEntry()
{
    ... //Called once, when entering the GameWaiting state
}

void GameWaiting::OnExit()
{
    ... //Called once, when exiting the GameWaiting state
}

void GameWaiting::Update(float deltaTime)
{
    ... //Called once per game loop while in the GameWaiting state
}

void GameWaiting::Draw()
{
    ... //Called when the GameWaiting state needs to draw
}

... //Other GameState implementations
...

For the sake of page space, I am only showing the GameWaiting implementation, but the same goes for the other states. Each one will have its own unique implementation of these functions, which allows you to control the code flow and implement more states as necessary without creating a hard-to-follow maze of code paths.

Now that we have our states defined, we can implement them in our game. Of course, we could go about this in many different ways.
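For instance, one minimal way - a sketch of a possible approach, not the book's implementation; the StateMachine name and its members are my own invention - is a small driver class that tracks the active state and enforces the OnExit/OnEntry ordering on every transition:

#include "GameState.h"

// A minimal state-machine driver (hypothetical helper).
// It guarantees that OnExit runs on the old state before
// OnEntry runs on the new one, exactly once per transition.
class StateMachine
{
public:
    void ChangeState(GameState* nextState)
    {
        if (m_currentState != nullptr)
            m_currentState->OnExit();   // leave the old state exactly once
        m_currentState = nextState;
        if (m_currentState != nullptr)
            m_currentState->OnEntry();  // enter the new state exactly once
    }

    void Update(float deltaTime)
    {
        if (m_currentState != nullptr)
            m_currentState->Update(deltaTime); // per-frame state logic
    }

    void Draw()
    {
        if (m_currentState != nullptr)
            m_currentState->Draw();
    }

private:
    GameState* m_currentState = nullptr; // non-owning pointer to the active state
};

Because the driver only ever talks to the GameState interface, adding a new state means writing a new subclass; the transition logic never changes.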
Alternatively, we could follow the same pattern that we did with our screen system and implement a GameState list class, a definition of which could look like the following:

class GameState;

class GameStateList
{
public:
    GameStateList(IGame* game);
    ~GameStateList();

    GameState* GoToNext();
    GameState* GoToPrevious();

    void SetCurrentState(int nextState);
    void AddState(GameState* newState);
    void Destroy();

    GameState* GetCurrent();

protected:
    IGame* m_game = nullptr;
    std::vector<GameState*> m_states;
    int m_currentStateIndex = -1;
};

Or we could simply use the GameState classes we created with a simple enum and a switch case. The use of the state pattern allows for this flexibility.

Working with cameras

At this point, we have discussed a good amount about the structure of systems and can now move on to designing ways of interacting with our game and 3D environment. This brings us to an important topic: the design of virtual camera systems. A camera is what provides us with a visual representation of our 3D world. It is how we immerse ourselves, and it provides us with feedback on our chosen interactions. In this section, we are going to cover the concept of a virtual camera in computer graphics.

Before we dive into writing the code for our camera, it is important to have a strong understanding of how, exactly, it all works. Let's start with the idea of being able to navigate around the 3D world. In order to do this, we need to use what is referred to as a transformation pipeline. A transformation pipeline can be thought of as the steps that are taken to transform all objects and points relative to the position and orientation of a camera viewpoint. The following is a simple diagram that details the flow of a transformation pipeline:

[Figure: transformation pipeline flow - local space, world space, view (eye) space, clip space, normalized device space, viewport space]

Beginning with the first step in the pipeline, local space: when a mesh is created, it has a local origin of 0 x, 0 y, 0 z. This local origin is typically located either in the center of the object or, in the case of some player characters, at the center of the feet. All points that make up that mesh are then based on that local origin. When talking about a mesh that has not been transformed, we refer to it as being in local space:

[Figure: the gnome mesh shown in a model editor, positioned at its local origin]

The preceding image pictures the gnome mesh in a model editor. This is what we would consider local space.

In the next step, we want to bring a mesh into our environment, the world space. In order to do this, we have to multiply our mesh points by what is referred to as a model matrix. This will place the mesh in world space, which sets all the mesh points to be relative to a single world origin. It's easiest to think of world space as the description of the layout of all the objects that make up your game's environment. Once meshes have been placed in world space, we can start to do things such as compare distances and angles. A great example of this step is when placing game objects in a world/level editor; this creates a description of the model's mesh in relation to other objects and a single world origin (0,0,0). We will discuss editors in more detail in the next chapter.

Next, in order to navigate this world space, we have to rearrange the points so that they are relative to the camera's position and orientation. To accomplish this, we perform a few simple operations. The first is to translate the objects to the origin: we would move the camera from its current world coordinates - in the following example figure, 20 on the x axis, 2 on the y axis, and -15 on the z axis - to the world origin, or 0,0,0.
We can then map the objects by subtracting the camera's position - the values used to translate the camera object, which in this case would be -20, -2, 15. So if our game object started out at 10.5 on the x axis, 1 on the y axis, and -20 on the z axis, the newly translated coordinates would be -9.5, -1, -5. The last operation is to rotate the camera to face the desired direction; in our current case, that would be pointing down the -z axis. For the following example, that would mean rotating the object points by -90 degrees, making the example game object's new position 5, -1, -9.5. These operations combine into what is referred to as the view matrix.

Before we go any further, I want to briefly cover some important details when it comes to working with matrices, in particular, handling matrix multiplication and the order of operations. When working with OpenGL, all matrices are defined in a column-major layout. The opposite, a row-major layout, is found in other graphics libraries, such as Microsoft's DirectX. The following is the layout for column-major view matrices, where U is the unit vector pointing up, F is our vector pointing forward, R is the right vector, and P is the position of the camera:

[Figure: column-major view matrix layout built from the R, U, F, and P vectors]

When constructing a matrix with a combination of translations and rotations, such as the preceding view matrix, you cannot, generally, just stick the rotation and translation values into a single matrix. In order to create a proper view matrix, we need to use matrix multiplication to combine two or more matrices into a single final matrix. Remembering that we are working with column-major notation, the order of the operations is therefore right to left. This is important since, using the orientation (R) and translation (T) matrices, if we say V = T x R, this would produce an undesired effect, because it would first rotate the points around the world origin and then move them to align to the camera position as the origin. What we want is V = R x T, where the points would first align to the camera as the origin and then apply the rotation. In a row-major layout, it is the other way around, of course.

The good news is that we do not necessarily need to handle the creation of the view matrix manually. Older versions of OpenGL and most modern math libraries, including GLM, have an implementation of a lookAt() function. Most take a version of camera position, target or look position, and the up direction as parameters, and return a fully created view matrix. We will be looking at how to use the GLM implementation of the lookAt() function shortly, but if you want to see the full code implementation of the ideas described just now, check out the source code of GLM, which is included in the project source repository.
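To make the V = R x T ordering concrete, here is a small sketch using GLM's standard glm::translate and glm::rotate helpers. The position and angle values mirror the worked example above and are purely illustrative:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// A minimal sketch of V = R x T (values are illustrative).
glm::mat4 BuildViewMatrix()
{
    glm::vec3 cameraPos(20.0f, 2.0f, -15.0f);

    // T: translate the world by the negated camera position,
    // which effectively moves the camera to the origin.
    glm::mat4 T = glm::translate(glm::mat4(1.0f), -cameraPos);

    // R: rotate the world by the inverse of the camera's orientation.
    // The camera in the example is yawed +90 degrees about Y,
    // so the points rotate by -90 degrees.
    glm::mat4 R = glm::rotate(glm::mat4(1.0f),
                              glm::radians(-90.0f),
                              glm::vec3(0.0f, 1.0f, 0.0f));

    // Right to left: T is applied to the points first, then R.
    return R * T;

    // The equivalent convenience call would be:
    // glm::lookAt(cameraPos, someTargetPoint, glm::vec3(0, 1, 0));
    // where someTargetPoint is a hypothetical point the camera faces.
}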
Continuing through the transformation pipeline, the next step is to convert from eye space to homogeneous clip space. This stage constructs a projection matrix, which is responsible for a few things.

The first is to define the near and far clipping planes. This is the visible range along the defined forward axis (usually z). Anything that falls in front of the near distance and anything that falls past the far distance is considered out of range; any geometrical objects on the outside of this range will be clipped (removed) from the pipeline in a later step.

The second is to define the Field of View (FOV). Despite the name, the field of view is not a field but an angle. For the FOV, we actually only specify the vertical range; most modern games use 66 or 67 degrees for this. The horizontal range will be calculated for us by the matrix once we provide the aspect ratio (how wide compared to how high). As a rough linear estimate, a 67-degree vertical angle on a display with a 4:3 aspect ratio would give a horizontal angle of about 89.33 degrees (67 * 4/3 = 89.33), although the exact conversion goes through tangents of half-angles, as the sketch below shows.
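Here is a small, self-contained sketch of that exact conversion, using only the C++ standard library. It shows that the linear estimate overshoots: for 67 degrees vertical at 4:3, the tangent-based result is roughly 82.9 degrees rather than 89.33:

#include <cmath>
#include <cstdio>

// Convert a vertical FOV to the horizontal FOV implied by a perspective
// projection, using the half-angle tangent relationship:
// h = 2 * atan(tan(v / 2) * aspect)
float HorizontalFov(float verticalFovDegrees, float aspectRatio)
{
    const float pi = 3.14159265358979f;
    float v = verticalFovDegrees * pi / 180.0f;              // to radians
    float h = 2.0f * std::atan(std::tan(v / 2.0f) * aspectRatio);
    return h * 180.0f / pi;                                  // to degrees
}

int main()
{
    // 67 degrees vertical at a 4:3 aspect ratio prints about 82.87.
    std::printf("horizontal FOV = %.2f degrees\n",
                HorizontalFov(67.0f, 4.0f / 3.0f));
    return 0;
}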
These two steps combine to create a volume that takes the shape of a pyramid with the top chopped off. This created volume is referred to as the view frustum. Any geometry that falls outside of this frustum is considered to be out of view. The following diagram illustrates what the view frustum looks like:

[Figure: the view frustum - a truncated pyramid bounded by the near and far clipping planes]

You might note that there is more visible space available at the far end of the frustum than at the front. In order to properly display this on a 2D screen, we need to tell the hardware how to calculate the perspective. This is the next step in the pipeline. The larger, far end of the frustum is pushed together, creating a box shape. The collection of objects visible at this wide end is also squeezed together; this provides us with a perspective view. To understand this, imagine the phenomenon of looking along a straight stretch of railway tracks: as the tracks continue into the distance, they appear to get smaller and closer together.

The next step in the pipeline, after defining the clipping space, is to use what is called the perspective division to normalize the points into a box shape with the dimensions of (-1 to 1, -1 to 1, -1 to 1). This is referred to as the normalized device space. By normalizing the dimensions into unit size, we allow the points to be scaled up or down to any viewport dimensions.

The last major step in the transformation pipeline is to create the 2D representation of the 3D scene that will be displayed. To do this, we flatten the normalized device space, with the objects further away being drawn behind the objects that are closer to the camera (draw depth). The dimensions are scaled from the X and Y normalized values into actual pixel values of the viewport. After this step, we have a 2D space referred to as the viewport space. That completes the transformation pipeline stages.

With that theory covered, we can now shift to implementation and write some code. We are going to start by looking at the creation of a basic, first-person 3D camera, which means we are looking through the eyes of the player's character. Let's start with the camera's header file, Camera3D.h, in the source code repository:

...
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
...

We start with the necessary includes. As I just mentioned, GLM includes support for working with matrices, so we include both glm.hpp and matrix_transform.hpp to gain access to GLM's lookAt() function:

...
public:
    Camera3D();
    ~Camera3D();
    void Init(glm::vec3 cameraPosition = glm::vec3(4, 10, 10),
              float horizontalAngle = -2.0f,
              float verticalAngle = 0.0f,
              float initialFoV = 45.0f);
    void Update();

Next, we have the publicly accessible functions for our Camera3D class. The first two are just the standard constructor and destructor. We then have the Init() function. We declare this function with defaults provided for the parameters; this way, if no values are passed in, we will still have values to calculate our matrices in the first update call. That brings us to the next function declared, the Update() function. This is the function that the game engine will call each loop to keep the camera updated:

glm::mat4 GetView() { return m_view; };
glm::mat4 GetProjection() { return m_projection; };
glm::vec3 GetForward() { return m_forward; };
glm::vec3 GetRight() { return m_right; };
glm::vec3 GetUp() { return m_up; };

After the Update() function, there is a set of five getter functions to return both the view and projection matrices, as well as the camera's forward, up, and right vectors. To keep the implementation clean and tidy, we can simply declare and implement these getter functions right in the header file:

void SetHorizontalAngle(float angle) { m_horizontalAngle = angle; };
void SetVerticalAngle(float angle) { m_verticalAngle = angle; };

Directly after the set of getter functions, we have two setter functions. The first sets the horizontal angle, the second the vertical angle. This is useful for when the screen size or aspect ratio changes:

void MoveCamera(glm::vec3 movementVector) { m_position += movementVector; };

The last public function in the Camera3D class is the MoveCamera() function. This simple function takes in a vector 3, then cumulatively adds that vector to the m_position variable, which is the current camera position:

...
private:
    glm::mat4 m_projection;
    glm::mat4 m_view; // Camera matrix

For the private declarations of the class, we start with two glm::mat4 variables. A glm::mat4 is the datatype for a 4x4 matrix. We create one for the view or camera matrix and one for the projection matrix:

glm::vec3 m_position;
float m_horizontalAngle;
float m_verticalAngle;
float m_initialFoV;

Next, we have a single vector 3 variable to hold the position of the camera, followed by three float values - one for the horizontal angle, one for the vertical angle, and one to hold the field of view:

glm::vec3 m_right;
glm::vec3 m_up;
glm::vec3 m_forward;

We then have three more vector 3 variables that will hold the right, up, and forward values for the camera object. Now that we have the declarations for our 3D camera class, the next step is to implement any of the functions that have not already been implemented in the header file. There are only two functions that we need to provide: Init() and Update(). Let's begin with the Init() function, found in the Camera3D.cpp file:

void Camera3D::Init(glm::vec3 cameraPosition,
                    float horizontalAngle,
                    float verticalAngle,
                    float initialFoV)
{
    m_position = cameraPosition;
    m_horizontalAngle = horizontalAngle;
    m_verticalAngle = verticalAngle;
    m_initialFoV = initialFoV;
    Update();
}
...

Our Init() function is straightforward; all we are doing in the function is taking in the provided values and setting them to the corresponding variables we declared. Once we have set these values, we simply call the Update() function to handle the calculations for the newly created camera object:

...
void Camera3D::Update()
{
    m_forward = glm::vec3(
        glm::cos(m_verticalAngle) * glm::sin(m_horizontalAngle),
        glm::sin(m_verticalAngle),
        glm::cos(m_verticalAngle) * glm::cos(m_horizontalAngle)
    );

The Update() function is where all of the heavy lifting of the class is done. It starts out by calculating the new forward vector for our camera. This is done with a simple formula leveraging GLM's cosine and sine functions. What is occurring is that we are converting from spherical coordinates to cartesian coordinates so that we can use the value in the creation of our view matrix.
    m_right = glm::vec3(
        glm::sin(m_horizontalAngle - 3.14f / 2.0f),
        0,
        glm::cos(m_horizontalAngle - 3.14f / 2.0f)
    );

After we calculate the new forward vector, we then calculate the new right vector for our camera, again using a simple formula that leverages GLM's sine and cosine functions:

    m_up = glm::cross(m_right, m_forward);

Now that we have the forward and right vectors calculated, we can use GLM's cross-product function to calculate the new up vector for our camera. It is important that these three steps happen every time the camera changes position or rotation, and before the creation of the camera's view matrix:

    float FoV = m_initialFoV;

Next, we specify the FOV. Currently, I am just setting it back to the initial FOV specified when initializing the camera object. This would be the place to recalculate the FOV if the camera was, say, zoomed in or out (hint: mouse scroll could be useful here):

    m_projection = glm::perspective(glm::radians(FoV), 4.0f / 3.0f, 0.1f, 100.0f);

Once we have the field of view specified, we can then calculate the projection matrix for our camera. Luckily for us, GLM has a very handy function called glm::perspective(), which takes in a field of view in radians, an aspect ratio, the near clipping distance, and a far clipping distance, and returns a created projection matrix for us. Since this is an example, I have specified a 4:3 aspect ratio (4.0f/3.0f) and a clipping space of 0.1 units to 100 units directly. In production, you would ideally move these values to variables that could be changed during runtime:

    m_view = glm::lookAt(
        m_position,
        m_position + m_forward,
        m_up
    );
}

Finally, the last thing we do in the Update() function is to create the view matrix. As I mentioned before, we are fortunate that the GLM library supplies a lookAt() function to abstract all the steps we discussed earlier in the section. This lookAt() function takes three parameters. The first is the position of the camera. The second is a vector value of where the camera is pointed, or looking at, which we provide by doing a simple addition of the camera's current position and its calculated forward vector. The last parameter is the camera's current up vector, which, again, we calculated previously. Once finished, this function will return the newly updated view matrix to use in our graphics pipeline.
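To see how the pieces fit together, here is a minimal usage sketch. The setup and per-frame functions are hypothetical stand-ins for the engine's loop; only the Camera3D calls (Init, MoveCamera, Update, GetForward, GetView, GetProjection) come from the class above:

#include "Camera3D.h"

// Hypothetical setup and per-frame step, not from the book's engine.
Camera3D g_camera;

void Setup()
{
    g_camera.Init(); // uses the defaults: position (4, 10, 10), 45-degree FoV
}

void GameLoopStep(float deltaTime)
{
    // Glide the camera forward along its facing direction at 5 units/sec.
    g_camera.MoveCamera(g_camera.GetForward() * 5.0f * deltaTime);

    // Recalculate the basis vectors and rebuild the view and
    // projection matrices after the position change.
    g_camera.Update();

    // Hand the matrices to the renderer, typically combined as
    // projection * view * model per object.
    glm::mat4 viewProjection = g_camera.GetProjection() * g_camera.GetView();
    // ... upload viewProjection as a shader uniform and draw the scene ...
}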
We learned to create a simple FSM game engine with C++. Check out the book Mastering C++ Game Development to learn high-end game development with advanced C++17 programming techniques.

How AI is changing game development
How to use arrays, lists, and dictionaries in Unity for 3D game development
Unity 2D & 3D game kits simplify Unity game development for beginners