Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
Explore Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

How-To Tutorials - Programming

1083 Articles
article-image-why-mybatis
Packt
10 Jul 2013
8 min read
Save for later

Why MyBatis

Packt
10 Jul 2013
8 min read
(For more resources related to this topic, see here.) Eliminates a lot of JDBC boilerplate code Java has a Java DataBase Connectivity (JDBC) API to work with relational databases. But JDBC is a very low-level API, and we need to write a lot of code to perform database operations. Let us examine how we can implement simple insert and select operations on a STUDENTS table using plain JDBC. Assume that the STUDENTS table has STUD_ID, NAME, EMAIL, and DOB columns. The corresponding Student JavaBean is as follows: package com.mybatis3.domain; import java.util.Date; public class Student { private Integer studId; private String name; private String email; private Date dob; // setters and getters } The following StudentService.java program implements the SELECT and INSERT operations on the STUDENTS table using JDBC. public Student findStudentById(int studId) { Student student = null; Connection conn = null; try{ //obtain connection conn = getDatabaseConnection(); String sql = "SELECT * FROM STUDENTS WHERE STUD_ID=?"; //create PreparedStatement PreparedStatement pstmt = conn.prepareStatement(sql); //set input parameters pstmt.setInt(1, studId); ResultSet rs = pstmt.executeQuery(); //fetch results from database and populate into Java objects if(rs.next()) { student = new Student(); student.setStudId(rs.getInt("stud_id")); student.setName(rs.getString("name")); student.setEmail(rs.getString("email")); student.setDob(rs.getDate("dob")); } } catch (SQLException e){ throw new RuntimeException(e); }finally{ //close connection if(conn!= null){ try { conn.close(); } catch (SQLException e){ } } } return student; } public void createStudent(Student student) { Connection conn = null; try{ //obtain connection conn = getDatabaseConnection(); String sql = "INSERT INTO STUDENTS(STUD_ID,NAME,EMAIL,DOB) VALUES(?,?,?,?)"; //create a PreparedStatement PreparedStatement pstmt = conn.prepareStatement(sql); //set input parameters pstmt.setInt(1, student.getStudId()); pstmt.setString(2, student.getName()); pstmt.setString(3, student.getEmail()); pstmt.setDate(4, new java.sql.Date(student.getDob().getTime())); pstmt.executeUpdate(); } catch (SQLException e){ throw new RuntimeException(e); }finally{ //close connection if(conn!= null){ try { conn.close(); } catch (SQLException e){ } } } } protected Connection getDatabaseConnection() throws SQLException { try{ Class.forName("com.mysql.jdbc.Driver"); return DriverManager.getConnection ("jdbc:mysql://localhost:3306/test", "root", "admin"); } catch (SQLException e){ throw e; } catch (Exception e){ throw new RuntimeException(e); } } There is a lot of duplicate code in each of the preceding methods, for creating a connection, creating a statement, setting input parameters, and closing the resources, such as the connection, statement, and result set. MyBatis abstracts all these common tasks so that the developer can focus on the really important aspects, such as preparing the SQL statement that needs to be executed and passing the input data as Java objects. In addition to this, MyBatis automates the process of setting the query parameters from the input Java object properties and populates the Java objects with the SQL query results as well. Now let us see how we can implement the preceding methods using MyBatis: Configure the queries in a SQL Mapper config file, say StudentMapper.xml. 
<select id="findStudentById" parameterType="int" resultType=" Student"> SELECT STUD_ID AS studId, NAME, EMAIL, DOB FROM STUDENTS WHERE STUD_ID=#{Id} </select> <insert id="insertStudent" parameterType="Student"> INSERT INTO STUDENTS(STUD_ID,NAME,EMAIL,DOB) VALUES(#{studId},#{name},#{email},#{dob}) </insert> Create a StudentMapper interface. public interface StudentMapper { Student findStudentById(Integer id); void insertStudent(Student student); } In Java code, you can invoke these statements as follows: SqlSession session = getSqlSessionFactory().openSession(); StudentMapper mapper = session.getMapper(StudentMapper.class); // Select Student by Id Student student = mapper.selectStudentById(1); //To insert a Student record mapper.insertStudent(student); That's it! You don't need to create the Connection, PrepareStatement, extract, and set parameters and close the connection by yourself for every database operation. Just configure the database connection properties and SQL statements, and MyBatis will take care of all the ground work. Don't worry about what SqlSessionFactory, SqlSession, and Mapper XML files are. Along with these, MyBatis provides many other features that simplify the implementation of persistence logic. It supports the mapping of complex SQL result set data to nested object graph structures It supports the mapping of one-to-one and one-to-many results to Java objects It supports building dynamic SQL queries based on the input data Low learning curve One of the primary reasons for MyBatis' popularity is that it is very simple to learn and use because it depends on your knowledge of Java and SQL. If developers are familiar with Java and SQL, they will fnd it fairly easy to get started with MyBatis. Works well with legacy databases Sometimes we may need to work with legacy databases that are not in a normalized form. It is possible, but diffcult, to work with these kinds of legacy databases with fully-fedged ORM frameworks such as Hibernate because they attempt to statically map Java objects to database tables. MyBatis works by mapping query results to Java objects; this makes it easy for MyBatis to work with legacy databases. You can create Java domain objects following the object-oriented model, execute queries Embraces SQL Full-fedged ORM frameworks such as Hibernate encourage working with entity objects and generate SQL queries under the hood. Because of this SQL generation, we may not be able to take advantage of database-specifc features. Hibernate allows to execute native SQLs, but that might defeat the promise of a database-independent persistence. The MyBatis framework embraces SQL instead of hiding it from developers. As MyBatis won't generate any SQLs and developers are responsible for preparing the queries, you can take advantage of database-specifc features and prepare optimized SQL queries. Also, working with stored procedures is supported by MyBatis. Supports integration with Spring and Guice frameworks MyBatis provides out-of-the-box integration support for the popular dependency injection frameworks Spring and Guice; this further simplifes working with MyBatis. Supports integration with third-party cache libraries MyBatis has inbuilt support for caching SELECT query results within the scope of SqlSession level ResultSets. In addition to this, MyBatis also provides integration support for various third-party cache libraries, such as EHCache, OSCache, and Hazelcast. 
Better performance Performance is one of the key factors for the success of any software application. There are lots of things to consider for better performance, but for many applications, the persistence layer is a key for overall system performance. MyBatis supports database connection pooling that eliminates the cost of creating a database connection on demand for every request. MyBatis has an in-built cache mechanism which caches the results of SQL queries at the SqlSession level. That is, if you invoke the same mapped select query, then MyBatis returns the cached result instead of querying the database again. MyBatis doesn't use proxying heavily and hence yields better performance compared to other ORM frameworks that use proxies extensively. There are no one-size-fits-all solutions in software development. Each application has a different set of requirements, and we should choose our tools and frameworks based on application needs. In the previous section, we have seen various advantages of using MyBatis. But there will be cases where MyBatis may not be the ideal or best solution.If your application is driven by an object model and wants to generate SQL dynamically, MyBatis may not be a good ft for you. Also, if you want to have a transitive persistence mechanism (saving the parent object should persist associated child objects as well) for your application, Hibernate will be better suited for it. Installing and configuring MyBatis We are assuming that the JDK 1.6+ and MySQL 5 database servers have been installed on your system. The installation process of JDK and MySQL is outside the scope of this article. At the time of writing this article, the latest version of MyBatis is MyBatis 3.2.2. Even though it is not mandatory to use IDEs, such as Eclipse, NetBeans IDE, or IntelliJ IDEA for coding, they greatly simplify development with features such as handy autocompletion, refactoring, and debugging. You can use any of your favorite IDEs for this purpose. This section explains how to develop a simple Java project using MyBatis: By creating a STUDENTS table and inserting sample data By creating a Java project and adding mybatis-3.2.2.jar to the classpath By creating the mybatis-config.xml and StudentMapper.xml configuration files By creating the MyBatisSqlSessionFactory singleton class By creating the StudentMapper interface and the StudentService classes By creating a JUnit test for testing StudentService Summary In this article, we discussed about MyBatis and the advantages of using MyBatis instead of plain JDBC for database access. Resources for Article : Further resources on this subject: Building an EJB 3.0 Persistence Model with Oracle JDeveloper [Article] New Features in JPA 2.0 [Article] An Introduction to Hibernate and Spring: Part 1 [Article]
Read more
  • 0
  • 0
  • 12792

article-image-applying-themes-sails-applications-part-2
Luis Lobo
14 Oct 2016
4 min read
Save for later

Applying Themes to Sails Applications, Part 2

Luis Lobo
14 Oct 2016
4 min read
In Part 1 of this series covering themes in the Sails Framework, we bootstrapped our sample Sails app (step 1). Here in Part 2, we will complete steps 2 and 3, compiling our theme’s CSS and the necessary Less files and setting up the theme Sails hook to complete our application. Step 2 – Adding a task for compiling our theme's CSS and the necessary Less files Let’s pick things back up where we left of in Part 1. We now want to customize our page to have our burrito style. We need to add a task that compiles our themes. Edit your /tasks/config/less.js so that it looks like this one: module.exports = function (grunt) { grunt.config.set('less', { dev: { files: [{ expand: true, cwd: 'assets/styles/', src: ['importer.less'], dest: '.tmp/public/styles/', ext: '.css' }, { expand: true, cwd: 'assets/themes/export', src: ['*.less'], dest: '.tmp/public/themes/', ext: '.css' }] } }); grunt.loadNpmTasks('grunt-contrib-less'); }; Basically, we added a second object to the files section, which tells the Less compiler task to look for any Less file in assets/themes/export, compile it, and put the resulting CSS in the .tmp/public/themes folder. In case you were not aware of it, the .tmp/public folder is the one Sails uses to publish its assets. We now create two themes: one is default.less and the other is burrito.less, which is based on default.less. We also have two other Less files, each one holding the variables for each theme. This technique allows you to have one base theme and many other themes based on the default. /assets/themes/variables.less @app-navbar-background-color: red; @app-navbar-brand-color: white; /assets/themes/variablesBurrito.less @app-navbar-background-color: green; @app-navbar-brand-color: yellow; /assets/themes/export/default.less @import "../variables.less"; .navbar-inverse { background-color: @app-navbar-background-color; .navbar-brand { color: @app-navbar-brand-color; } } /assets/themes/export/burrito.less @import "default.less"; @import "../variablesBurrito.less"; So, burrito.less just inherits from default.less but overrides the variables with the ones on its own, creating a new theme based on the default. If you lift Sails now, you will notice that the Navigation bar has a red background on white. Step 3 – Setting up the theme Sails hook The last step involves creating a Hook, a Node module that adds functionality to the Sails corethat catches the hostname, and if it has burrito in it, sets the new theme. First, let’s create the folder for the hook: mkdir -p ./api/hooks/theme Now create a file named index.js in that folder with this content: /** * theme hook - Sets the correct CSS to be displayed */ module.exports = function (sails) { return { routes: { before: { 'all /*': function (req, res, next) { if (!req.isSocket) { // makes theme variable available in views res.locals.theme = sails.hooks.theme.getTheme(req); } returnnext(); } } }, /** * getTheme defines which css needs to be used for this request * In this case, we select the theme by pattern matching certain words from the hostname */ getTheme: function (req) { var hostname = 'default'; var theme = 'default'; try { hostname = req.get('host').toLowerCase(); } catch(e) { // host may not be available always (ie, socket calls. 
If you need that, add a Host header in your // sails socket configuration) } // if burrito is found on the hostname, change the theme if (hostname.indexOf('burrito') > -1) { theme = 'burrito'; } return theme; } }; }; Finally, to test our configuration, we need to add a host entry in our OS hosts file. In Linux/Unix-based operating systems, you have to edit /etc/hosts (with sudo or root). Add the following line: 127.0.0.1 burrito.smartdelivery.localwww.smartdelivery.local Now navigate using those host names, first to www.smartdelivery.local: And lastly, navigate to burrito.smartdelivery.local: You now have your Burrito Smart Delivery! And you have a Themed Sails Application! I hope you have enjoyed this series.  You can get the source code from here. Enjoy! About the author Luis Lobo Borobia is the CTO at FictionCity.NET, is a mentor and advisor, independent software engineer consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products, solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.
Read more
  • 0
  • 0
  • 12744

article-image-mono-micro-services-split-fat-application
Xavier Bruhiere
16 Oct 2015
7 min read
Save for later

Mono to Micro-Services: Splitting that fat application

Xavier Bruhiere
16 Oct 2015
7 min read
As articles state everywhere, we're living in a fast pace digital age. Project complexity, or business growth, challenges existing development patterns. That's why many developers are evolving from the monolithic application toward micro-services. Facebook is moving away from its big blue app. Soundcloud is embracing microservices. Yet this can be a daunting process, so what for? Scale. Better plugging new components than digging into an ocean of code. Split a complex problem into smaller ones, which is easier to solve and maintain. Distribute work through independent teams. Open technologies friendliness. Isolating a service into a container makes it straightforward to distribute and use. It also allows different, loosely coupled stacks to communicate. Once upon a time, there was a fat code block called Intuition, my algorithmic trading platform. In this post, we will engineer a simplified version, divided into well defined components. Code Components First, we're going to write the business logic, following the single responsibility principle, and one of my favorite code mantras: Prefer composition over inheritance The point is to identify key components of the problem, and code a specific solution for each of them. It will articulate our application around the collaboration of clear abstractions. As an illustration, start with the RandomAlgo class. Python tends to be the go-to language for data analysis and rapid prototyping. It is a great fit for our purpose. class RandomAlgo(object): """ Represent the algorithm flow. Heavily inspired from quantopian.com and processing.org """ def initialize(self, params): """ Called once to prepare the algo. """ self.threshold = params.get('threshold', 0.5) # As we will see later, we return here data channels we're interested in return ['quotes'] def event(self, data): """ This method is called every time a new batch of data is ready. :param data: {'sid': 'GOOG', 'quote': '345'} """ # randomly choose to invest or not if random.random() > self.threshold: print('buying {0} of {1}'.format(data['quote'], data['sid'])) This implementation focuses on a single thing: detecting buy signals. But once you get such a signal, how do you invest your portfolio? This is the responsibility of a new component. class Portfolio(object): def__init__(self, amount): """ Starting amount of cash we have. """ self.cash = amount def optimize(self, data): """ We have a buy signal on this data. Tell us how much cash we should bet. """ # We're still baby traders and we randomly choose what fraction of our cash available to invest to_invest = random.random() * self.cash self.cash = self.cash - to_invest return to_invest Then we can improve our previous algorithm's event method, taking advantage of composition. def initialize(self, params): # ... self.portfolio = Portfolio(params.get('starting_cash', 10000)) def event(self, data): # ... print('buying {0} of {1}'.format(portfolio.optimize(data), data['sid'])) Here are two simple components that produce readable and efficient code. Now we can develop more sophisticated portfolio optimizations without touching the algorithm internals. This is also a huge gain early in a project when we're not sure how things will evolve. Developers should only focus on this core logic. In the next section, we're going to unfold a separate part of the system. The communication layer will solve one question: how do we produce and consume events? Inter-components messaging Let's state the problem. 
We want each algorithm to receive interesting events and publish its own data. The kind of challenge Internet of Things (IoT) is tackling. We will find empirically that our modular approach allows us to pick the right tool, even within a-priori unrelated fields. The code below leverages MQTT to bring M2M messaging to the application. Notice we're diversifying our stack with node.js. Indeed it's one of the most convenient languages to deal with event-oriented systems (Javascript, in general, is gaining some traction in the IoT space). var mqtt = require('mqtt'); // connect to the broker, responsible to route messages // (thanks mosquitto) var conn = mqtt.connect('mqtt://test.mosquitto.org'); conn.on('connect', function () { // we're up ! Time to initialize the algorithm // and subscribe to interesting messages }); // triggered on topic we're listening to conn.on('message', function (topic, message) { console.log('received data:', message.toString()); // Here, pass it to the algo for processing }); That's neat! But we still need to connect this messaging layer with the actual python algorithm. RPC (Remote Procedure Call) protocol comes in handy for the task, especially with zerorpc. Here is the full implementation with more explanations. // command-line interfaces made easy var program = require('commander'); // the MQTT client for Node.js and the browser var mqtt = require('mqtt'); // a communication layer for distributed systems var zerorpc = require('zerorpc'); // import project properties var pkg = require('./package.json') // define the cli program .version(pkg.version) .description(pkg.description) .option('-m, --mqtt [url]', 'mqtt broker address', 'mqtt://test.mosquitto.org') .option('-r, --rpc [url]', 'rpc server address', 'tcp://127.0.0.1:4242') .parse(process.argv); // connect to mqtt broker var conn = mqtt.connect(program.mqtt); // connect to rpc peer, the actual python algorithm var algo = new zerorpc.Client() algo.connect(program.rpc); conn.on('connect', function () { // connections are ready, initialize the algorithm var conf = { cash: 50000 }; algo.invoke('initialize', conf, function(err, channels, more) { // the method returns an array of data channels the algorithm needs for (var i = 0; i < channels.length; i++) { console.log('subscribing to channel', channels[i]); conn.subscribe(channels[i]); } }); }); conn.on('message', function (topic, message) { console.log('received data:', message.toString()); // make the algorithm to process the incoming data algo.invoke('event', JSON.parse(message.toString()), function(err, res, more) { console.log('algo output:', res); // we're done algo.close(); conn.end(); }); }); The code above calls our algorithm's methods. Here is how to expose them over RPC. import click, zerorpc # ... algo code ... @click.command() @click.option('--addr', default='tcp://127.0.0.1:4242', help='address to bind rpc server') def serve(addr): server = zerorpc.Server(RandomAlgo()) server.bind(addr) click.echo(click.style('serving on {} ...'.format(addr), bold=True, fg='cyan')) # listen and serve server.run() if__name__ == '__main__': serve() At this point we are ready to run the app. Let's fire up 3 terminals, install requirements, and make the machines to trade. 
sudo apt-get install curl libpython-dev libzmq-dev # Install pip curl https://bootstrap.pypa.io/get-pip.py | python # Algorithm requirements pip install zerorpc click # Messaging requirements npm init npm install --save commander mqtt zerorpc # Activate backend python ma.py --addr tcp://127.0.0.1:4242 # Manipulate algorithm and serve messaging system node app.js --rpc tcp://127.0.0.1:4242 # Publish messages node_modules/.bin/mqtt pub -t 'quotes' -h 'test.mosquitto.org' -m '{"goog": 3.45}' In this state, our implementation is over-engineered. But we designed a sustainable architecture to wire up small components. And from here we can extend the system. One can focus on algorithms without worrying about events plumbing. The corollary: switching to a new messaging technology won't affect the way we develop algorithms. We can even swipe algorithms by changing the rpc address. A service discovery component could expose which backends are available and how to reach them. A project like octoblu adds devices authentification, data sharing, and more. We could implement data sources that connect to live market or databases, compute indicators like moving averages and publish them to algorithms. Conclusion Given our API definition, a contributor can hack on any component without breaking the project as a whole. In a fast pace environment, with constant iterations, this architecture can make or break products. This is especially true in the raising container world. Assuming we package each component into specialized containers, we smooth the way to a scalable infrastructure that we can test, distribute, deploy and grow. Not sure where to start when it comes to containers and microservices? Visit our Docker page!  About the Author Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Occulus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.
Read more
  • 0
  • 0
  • 12666

article-image-hello-small-world
Packt
07 Sep 2016
20 min read
Save for later

Hello, Small World!

Packt
07 Sep 2016
20 min read
In this article by Stefan Björnander, the author of the book C++ Windows Programming, we will see how to create Windows applications using C++. This article introduces Small Windows by presenting two small applications: The first application writes "Hello, Small Windows!" in a window The second application handles circles of different colors in a document window (For more resources related to this topic, see here.) Hello, Small Windows! In The C Programming Language by Brian Kernighan and Dennis Richie, the hello-world example was introduced. It was a small program that wrote hello, world on the screen. In this section, we shall write a similar program for Small Windows. In regular C++, the execution of the application starts with the main function. In Small Windows, however, main is hidden in the framework and has been replaced by MainWindow, which task is to define the application name and create the main window object. The argumentList parameter corresponds to argc and argv in main. The commandShow parameter forwards the system's request regarding the window's appearance. MainWindow.cpp #include "..\SmallWindows\SmallWindows.h" #include "HelloWindow.h" void MainWindow(vector<String> /* argumentList */, WindowShow windowShow) { Application::ApplicationName() = TEXT("Hello"); Application::MainWindowPtr() = new HelloWindow(windowShow); } In C++, there are to two character types: char and wchar_t, where char holds a regular character of one byte and wchar_t holds a wide character of larger size, usually two bytes. There is also the string class that holds a string of char values and the wstring class that holds a string of wchar_t values. However, in Windows there is also the generic character type TCHAR that is char or wchar_t, depending on system settings. There is also the String class holds a string of TCHAR values. Moreover, TEXT is a macro that translates a character value to TCHAR and a text value to an array of TCHAR values. To sum it up, following is a table with the character types and string classes: Regular character Wide character Generic character char wchar_t TCHAR string wstring String In the applications of this book, we always use the TCHAR type, the String class, and the TEXT macro. The only exception to that rule is the clipboard handling. Our version of the hello-world program writes Hello, Small Windows! in the center of the client area. The client area of the window is the part of the window where it is possible to draw graphical objects. In the following window, the client area is the white area. The HelloWindow class extends the Small Windows Window class. It holds a constructor and the Draw method. The constructor calls the Window constructor with suitable information regarding the appearance of the window. Draw is called every time the client area of the window needs to be redrawn. HelloWindow.h class HelloWindow : public Window { public: HelloWindow(WindowShow windowShow); void OnDraw(Graphics& graphics, DrawMode drawMode); }; The constructor of HelloWindow calls the constructor of Window with the following parameter: The first parameter of the HelloWindow constructor is the coordinate system. LogicalWithScroll indicates that each logical unit is one hundredth of a millimeter, regardless of the physical resolution of the screen. The current scroll bar settings are taken into consideration. The second parameter of the window constructor is the preferred size of the window. It indicates that a default size shall be used. 
The third parameter is a pointer to the parent window. It is null since the window has no parent window. The fourth and fifth parameters set the window's style, in this case overlapped windows. The last parameter is windowShow given by the surrounding system to MainWindow, which decide the window's initial appearance (minimized, normal, or maximized). Finally, the constructor sets the header of the window by calling the Window method SetHeader. HelloWindow.cpp #include "..\SmallWindows\SmallWindows.h" #include "HelloWindow.h" HelloWindow::HelloWindow(WindowShow windowShow) :Window(LogicalWithScroll, ZeroSize, nullptr, OverlappedWindow, NoStyle, windowShow) { SetHeader(TEXT("Hello Window")); } The OnDraw method is called every time the client area of the window needs to be redrawn. It obtains the size of the client area and draws the text in its center with black text on white background. The SystemFont parameter will make the text appear in the default system font. The Small Windows Color class holds the constants Black and White. Point holds a 2-dimensional point. Size holds a width and a height. The Rect class holds a rectangle. More specifically, it holds the four corners of a rectangle. void HelloWindow::OnDraw(Graphics& graphics, DrawMode /* drawMode */) { Size clientSize = GetClientSize(); Rect clientRect(Point(0, 0), clientSize); Font textFont("New Times Roman", 12, true); graphics.DrawText(clientRect, TEXT("Hello, Small Windows!"), textFont , Black, White); } The Circle application In this section, we look into a simple circle application. As the name implies, it provides the user the possibility to handle circles in a graphical application. The user can add a new circle by clicking the left mouse button. They can also move an existing circle by dragging it. Moreover, the user can change the color of a circle as well as save and open the document.   The main window As we will see thought out this book, MainWindow does always do the same thing: it sets the application name and creates the main window of the application. The name is used by the Save and Open standard dialogs, the About menu item, and the registry. The difference between the main window and other windows of the application is that when the user closes the main window, the application exits. Moreover, when the user selects the Exit menu item the main window is closed, and its destructor is called. MainWindow.cpp #include "..\SmallWindows\SmallWindows.h" #include "Circle.h" #include "CircleDocument.h" void MainWindow(vector<String> /* argumentList */, WindowShow windowShow) { Application::ApplicationName() = TEXT("Circle"); Application::MainWindowPtr() = new CircleDocument(windowShow); } The CircleDocument class The CircleDocumentclass extends the Small Windows class StandardDocument, which in turn extends Document and Window. In fact, StandardDocument constitutes of a framework; that is, a base class with a set of virtual methods with functionality we can override and further specify. The OnMouseDown and OnMouseUp methods are overridden from Window and are called when the user presses or releases one of the mouse buttons. OnMouseMove is called when the user moves the mouse. The OnDraw method is also overridden from Window and is called every time the window needs to be redrawn. The ClearDocument, ReadDocumentFromStream, and WriteDocumentToStream methods are overridden from Standard­Document and are called when the user creates a new file, opens a file, or saves a file. 
CircleDocument.h class CircleDocument : public StandardDocument { public: CircleDocument(WindowShow windowShow); ~CircleDocument(); void OnMouseDown(MouseButton mouseButtons, Point mousePoint, bool shiftPressed, bool controlPressed); void OnMouseUp(MouseButton mouseButtons, Point mousePoint, bool shiftPressed, bool controlPressed); void OnMouseMove(MouseButton mouseButtons, Point mousePoint, bool shiftPressed, bool controlPressed); void OnDraw(Graphics& graphics, DrawMode drawMode); bool ReadDocumentFromStream(String name, istream& inStream); bool WriteDocumentToStream(String name, ostream& outStream) const; void ClearDocument(); The DEFINE_BOOL_LISTENER and DEFINE_VOID_LISTENER macros define listeners: methods without parameters that are called when the user selects a menu item. The only difference between the macros is the return type of the defined methods: bool or void. In the applications of this book, we use the common standard that the listeners called in response to user actions are prefixed with On, for instance OnRed. The methods that decide whether the menu item shall be enabled are suffixed with Enable, and the methods that decide whether the menu item shall be marked with a check mark or a radio button are suffixed with Check or Radio. In this application, we define menu items for the red, green, and blue colors. We also define a menu item for the Color standard dialog.     DEFINE_VOID_LISTENER(CircleDocument,OnRed);     DEFINE_VOID_LISTENER(CircleDocument,OnGreen);     DEFINE_VOID_LISTENER(CircleDocument,OnBlue);     DEFINE_VOID_LISTENER(CircleDocument,OnColorDialog); When the user has chosen one of the color red, green, or blue, its corresponding menu item shall be checked with a radio button. RedRadio, GreenRadio, and BlueRadio are called before the menu items become visible and return a Boolean value indicating whether the menu item shall be marked with a radio button.     DEFINE_BOOL_LISTENER(CircleDocument, RedRadio);     DEFINE_BOOL_LISTENER(CircleDocument, GreenRadio);     DEFINE_BOOL_LISTENER(CircleDocument, BlueRadio); The circle radius is always 500 units, which correspond to 5 millimeters.     static const int CircleRadius = 500; The circleList field holds the circles, where the topmost circle is located at the beginning of the list. The nextColor field holds the color of the next circle to be added by the user. It is initialized to minus one to indicate that no circle is being moved at the beginning. The moveIndex and movePoint fields are used by OnMouseDown and OnMouseMove to keep track of the circle being moved by the user. private: vector<Circle> circleList; Color nextColor; int moveIndex = -1; Point movePoint; }; In the StandardDocument constructor call, the first two parameters are LogicalWithScroll and USLetterPortrait. They indicate that the logical size is hundredths of millimeters and that the client area holds the logical size of a US letter: 215.9 * 279.4 millimeters (8.5 * 11 inches). If the window is resized so that the client area becomes smaller than a US letter, scroll bars are added to the window. The third parameter sets the file information used by the standard Save and Open dialogs, the text description is set to Circle Files and the file suffix is set to cle. The null pointer parameter indicates that the window does not have a parent window. 
The OverlappedWindow constant parameter indicates that the window shall overlap other windows and the windowShow parameter is the window's initial appearance passed on from the surrounding system by MainWindow. CircleDocument.cpp #include "..\SmallWindows\SmallWindows.h" #include "Circle.h" #include "CircleDocument.h" CircleDocument::CircleDocument(WindowShow windowShow) :StandardDocument(LogicalWithScroll, USLetterPortrait, TEXT("Circle Files, cle"), nullptr, OverlappedWindow, windowShow) { The StandardDocument framework adds the standard File, Edit, and Help menus to the window menu bar. The File menu holds the New, Open, Save, Save As, Page Setup, Print Preview, and Exit items. The Page Setup and Print Preview items are optional. The seventh parameter of the StandardDocument constructor (default false) indicates their presence. The Edit menu holds the Cut, Copy, Paste, and Delete items. They are disabled by default; we will not use them in this application. The Help menu holds the About item, the application name set in MainWindow is used to display a message box with a standard message: Circle, version 1.0. We add the standard File and Edit menus to the menu bar. Then we add the Color menu, which is the application-specific menu of this application. Finally, we add the standard Help menu and set the menu bar of the document. The Color menu holds the menu items used to set the circle colors. The OnRed, OnGreen, and OnBlue methods are called when the user selects the menu item, and the RedRadio, GreenRadio, BlueRadio are called before the user selects the color menu in order to decide if the items shall be marked with a radio button. OnColorDialog opens a standard color dialog. In the text &RedtCtrl+R, the ampersand (&) indicates that the menu item has a mnemonic; that is, the letter R will be underlined and it is possible to select the menu item by pressing R after the menu has been opened. The tabulator character (t) indicates that the second part of the text defines an accelerator; that is, the text Ctrl+R will occur right-justified in the menu item and the item can be selected by pressing Ctrl+R. Menu menuBar(this); menuBar.AddMenu(StandardFileMenu(false)); The AddItem method in the Menu class also takes two more parameters for enabling the menu item and setting a check box. However, we do not use them in this application. Therefore, we send null pointers. Menu colorMenu(this, TEXT("&Color")); colorMenu.AddItem(TEXT("&RedtCtrl+R"), OnRed, nullptr, nullptr, RedRadio); colorMenu.AddItem(TEXT("&GreentCtrl+G"), OnGreen, nullptr, nullptr, GreenRadio); colorMenu.AddItem(TEXT("&BluetCtrl+B"), OnBlue, nullptr, nullptr, BlueRadio); colorMenu.AddSeparator(); colorMenu.AddItem(TEXT("&Dialog ..."), OnColorDialog); menuBar.AddMenu(colorMenu); menuBar.AddMenu(StandardHelpMenu()); SetMenuBar(menuBar); Finally, we read the current color (the color of the next circle to be added) from the registry; red is the default color in case there is no color stored in the registry. nextColor.ReadColorFromRegistry(TEXT("NextColor"), Red); } The destructor saves the current color in the registry. In this application, we do not need to perform the destructor's normal tasks, such as deallocate memory or closing files. CircleDocument::~CircleDocument() { nextColor.WriteColorToRegistry(TEXT("NextColor")); } The ClearDocument method is called when the user selects the New menu item. In this case, we just clear the circle list. 
Every other action, such as redrawing the window or changing its title, is taken care of by StandardDocument. void CircleDocument::ClearDocument() { circleList.clear(); } The WriteDocumentToStream method is called by StandardDocument when the user saves a file (by selecting Save or Save As). It writes the number of circles (the size of the circle list) to the output stream and calls WriteCircle for each circle in order to write their states to the stream. bool CircleDocument::WriteDocumentToStream(String name, ostream& outStream) const { int size = circleList.size(); outStream.write((char*) &size, sizeof size); for (Circle circle : circleList) { circle.WriteCircle(outStream); } return ((bool) outStream); } The ReadDocumentFromStream method is called by StandardDocument when the user opens a file by selecting the Open menu item. It reads the number of circles (the size of the circle list) and for each circle it creates a new object of the Circle class, calls ReadCircle in order to read the state of the circle, and adds the circle object to circleList. bool CircleDocument::ReadDocumentFromStream(String name, istream& inStream) { int size; inStream.read((char*) &size, sizeof size); for (int count = 0; count < size; ++count) { Circle circle; circle.ReadCircle(inStream); circleList.push_back(circle); } return ((bool) inStream); } The OnMouseDown method is called when the user presses one of the mouse buttons. First we need to check that they have pressed the left mouse button. If they have, we loop through the circle list and call IsClick for each circle in order to decide whether they have clicked at a circle. Note that the top-most circle is located at the beginning of the list; therefore, we loop from the beginning of the list. If we find a clicked circle, we break the loop. If the user has clicked at a circle, we store its index moveIndex and the current mouse position in movePoint. Both values are needed by OnMouseMove method that will be called when the user moves the mouse. void CircleDocument::OnMouseDown (MouseButton mouseButtons, Point mousePoint, bool shiftPressed /* = false */, bool controlPressed /* = false */) { if (mouseButtons == LeftButton) { moveIndex = -1; int size = circleList.size(); for (int index = 0; index < size; ++index) { if (circleList[index].IsClick(mousePoint)) { moveIndex = index; movePoint = mousePoint; break; } } However, if the user has not clicked at a circle, we add a new circle. A circle is defined by its center position (mousePoint), radius (CircleRadius), and color (nextColor). An invalidated area is a part of the client area that needs to be redrawn. Remember that in Windows we normally do not draw figures directly. Instead, we call Invalidate to tell the system that an area needs to be redrawn and forces the actually redrawing by calling UpdateWindow, which eventually results in a call to OnDraw. The invalidated area is always a rectangle. Invalidate has a second parameter (default true) indicating that the invalidated area shall be cleared. Technically, it is painted in the window's client color, which in this case is white. In this way, the previous location of the circle becomes cleared and the circle is drawn at its new location. The SetDirty method tells the framework that the document has been altered (the document has become dirty), which causes the Save menu item to be enabled and the user to be warned if they try to close the window without saving it. 
if (moveIndex == -1) { Circle newCircle(mousePoint, CircleRadius, nextColor); circleList.push_back(newCircle); Invalidate(newCircle.Area()); UpdateWindow(); SetDirty(true); } } } The OnMouseMove method is called every time the user moves the mouse with at least one mouse button pressed. We first need to check whether the user is pressing the left mouse button and is clicking at a circle (whether moveIndex does not equal minus one). If they have, we calculate the distance from the previous mouse event (OnMouseDown or OnMouseMove) by comparing the previous mouse position movePoint by the current mouse position mousePoint. We update the circle position, invalidate both the old and new area, forcing a redrawing of the invalidated areas with UpdateWindow, and set the dirty flag. void CircleDocument::OnMouseMove (MouseButton mouseButtons, Point mousePoint, bool shiftPressed /* = false */, bool controlPressed /* = false */) { if ((mouseButtons == LeftButton)&&(moveIndex != -1)) { Size distanceSize = mousePoint - movePoint; movePoint = mousePoint; Circle& movedCircle = circleList[moveIndex]; Invalidate(movedCircle.Area()); movedCircle.Center() += distanceSize; Invalidate(movedCircle.Area()); UpdateWindow(); SetDirty(true); } } Strictly speaking, OnMouseUp could be excluded since moveIndex is set to minus one in OnMouseDown, which is always called before OnMouseMove. However, it has been included for the sake of completeness. void CircleDocument::OnMouseUp (MouseButton mouseButtons, Point mousePoint, bool shiftPressed /* = false */, bool controlPressed /* = false */) { moveIndex = -1; } The OnDraw method is called every time the window needs to be (partly or completely) redrawn. The call can have been initialized by the system as a response to an event (for instance, the window has been resized) or by an earlier call to UpdateWindow. The Graphics reference parameter has been created by the framework and can be considered a toolbox for drawing lines, painting areas and writing text. However, in this application we do not write text. We iterate throw the circle list and, for each circle, call the Draw method. Note that we do not care about which circles are to be physically redrawn. We simple redraw all circles. However, only the circles located in an area that has been invalidated by a previous call to Invalidate will be physically redrawn. The Draw method has a second parameter indicating the draw mode, which can be Paint or Print. Paint indicates that OnDraw is called by OnPaint in Window and that the painting is performed in the windows' client area. The Print method indicates that OnDraw is called by OnPrint and that the painting is sent to a printer. However, in this application we do not use that parameter. void CircleDocument::OnDraw(Graphics& graphics, DrawMode /* drawMode */) { for (Circle circle : circleList) { circle.Draw(graphics); } } The RedRadio, GreenRadio, and BlueRadio methods are called before the menu items are shown, and the items will be marked with a radio button in case they return true. The Red, Green, and Blue constants are defined in the Color class. bool CircleDocument::RedRadio() const { return (nextColor == Red); } bool CircleDocument::GreenRadio() const { return (nextColor == Green); } bool CircleDocument::BlueRadio() const { return (nextColor == Blue); } The OnRed, OnGreen, and OnBlue methods are called when the user selects the corresponding menu item. They all set the nextColor field to an appropriate value. 
void CircleDocument::OnRed() { nextColor = Red; } void CircleDocument::OnGreen() { nextColor = Green; } void CircleDocument::OnBlue() { nextColor = Blue; } The OnColorDialog method is called when the user selects the Color dialog menu item and displays the standard Color dialog. If the user choses a new color, nextcolor will be given the chosen color value. void CircleDocument::OnColorDialog() { ColorDialog(this, nextColor); } The Circle class The Circle class is a class holding the information about a single circle. The default constructor is used when reading a circle from a file. The second constructor is used when creating a new circle. The IsClick method returns true if the given point is located inside the circle (to check whether the user has clicked in the circle), Area returns the circle's surrounding rectangle (for invalidating), and Draw is called to redraw the circle. Circle.h class Circle { public: Circle(); Circle(Point center, int radius, Color color); bool WriteCircle(ostream& outStream) const; bool ReadCircle(istream& inStream); bool IsClick(Point point) const; Rect Area() const; void Draw(Graphics& graphics) const; Point Center() const {return center;} Point& Center() {return center;} Color GetColor() {return color;} As mentioned in the previous section, a circle is defined by its center position (center), radius (radius), and color (color). private: Point center; int radius; Color color; }; The default constructor does not need to initialize the fields, since it is called when the user opens a file and the values are read from the file. The second constructor, however, initializes the center point, radius, and color of the circle. Circle.cpp #include "..\SmallWindows\SmallWindows.h" #include "Circle.h" Circle::Circle() { // Empty. } Circle::Circle(Point center, int radius, Color color) :color(color), center(center), radius(radius) { // Empty. } The WriteCircle method writes the color, center point, and radius to the stream. Since the radius is a regular integer, we simply use the C standard function write, while Color and Point have their own methods to write their values to a stream. In ReadCircle we read the color, center point, and radius from the stream in a similar manner. bool Circle::WriteCircle(ostream& outStream) const { color.WriteColorToStream(outStream); center.WritePointToStream(outStream); outStream.write((char*) &radius, sizeof radius); return ((bool) outStream); } bool Circle::ReadCircle(istream& inStream) { color.ReadColorFromStream(inStream); center.ReadPointFromStream(inStream); inStream.read((char*) &radius, sizeof radius); return ((bool) inStream); } The IsClick method uses the Pythagoras theorem to calculate the distance between the given point and the circle's center point, and return true if the point is located inside the circle (if the distance is less than or equal to the circle radius). Circle::IsClick(Point point) const { int width = point.X() - center.X(), height = point.Y() - center.Y(); int distance = (int) sqrt((width * width) + (height * height)); return (distance <= radius); } The top-left corner of the resulting rectangle is the center point minus the radius, and the bottom-right corner is the center point plus the radius. Rect Circle::Area() const { Point topLeft = center - radius, bottomRight = center + radius; return Rect(topLeft, bottomRight); } We use the FillEllipse method (there is no FillCircle method) of the Small Windows Graphics class to draw the circle. 
The circle's border is always black, while its interior color is given by the color field. void Circle::Draw(Graphics& graphics) const { Point topLeft = center - radius, bottomRight = center + radius; Rect circleRect(topLeft, bottomRight); graphics.FillEllipse(circleRect, Black, color); } Summary In this article, we have looked into two applications in Small Windows: a simple hello-world application and a slightly more advance circle application, which has introduced the framework. We have looked into menus, circle drawing, and mouse handling. Resources for Article: Further resources on this subject: C++, SFML, Visual Studio, and Starting the first game [article] Game Development Using C++ [article] Boost.Asio C++ Network Programming [article]
Read more
  • 0
  • 0
  • 12615

article-image-learning-basic-nature-f-code
Packt
02 Nov 2016
6 min read
Save for later

Learning the Basic Nature of F# Code

Packt
02 Nov 2016
6 min read
In this article by Eriawan Kusumawardhono, author of the book, F# High Performance explains why F# has been a first class citizen, a built in part of programming languages support in Visual Studio, starting from Visual Studio 2010. Though F# is a programming language that has its own unique trait: it is a functional programming language but at the same time it has OOP support. F# from the start has run on .NET, although we can also run F# on cross-platform, such as Android (using Mono). (For more resources related to this topic, see here.) Although F# mostly runs faster than C# or VB when doing computations, its own performance characteristics and some not so obvious bad practices and subtleties may have led to performance bottlenecks. The bottlenecks may or may not be faster than C#/VB counterparts, although some of the bottlenecks may share the same performance characteristics, such as the use of .NET APIs. The main goal of this book is to identify performance problems in F#, measuring and also optimizing F# code to run more efficiently while also maintaining the functional programming style as appropriately as possible. A basic knowledge of F# (including the functional programming concept and basic OOP) is required as a prerequisite to start understanding the performance problems and the optimization of F#. There are many ways and definitions to define F# performance characteristics and at the same time measure them, but understanding the mechanics of running F# code, especially on top of .NET, is crucial and it's also a part of the performance characteristic itself. This includes other aspects of approaches to identify concurrency problems and language constructs. Understanding the nature of F# code Understanding the nature of F# code is very crucial and it is a definitive prerequisite before we begin to measure how long it runs and its effectiveness. We can measure a running F# code by running time, but to fully understand why it may run slow or fast, there are some basic concepts we have to consider first. Before we dive more into this, we must meet the basic requirements and setup. After the requirements have been set, we need to put in place the environment setting of Visual Studio 2015. We have to set this, because we need to maintain the consistency of the default setting of Visual Studio. The setting should be set to General. These are the steps: Select the Tools menu from Visual Studio's main menu. Select Import and Export Settings... and the Import and Export Settings Wizard screen is displayed. Select Reset all Settings and then Next to proceed. Select No, just reset my settings overwriting my current setting and then Next to proceed. Select General and then Next to proceed After setting it up, we will have a consistent layout to be used throughout this book, including the menu locations and the look and feel of Visual Studio. Now we are going to scratch the surface of F# runtime with an introductory overview of common F# runtime, which will give us some insights into F# performance. F# runtime characteristics The release of Visual Studio 2015 occurred at the same time as the release of .NET 4.6 and the rest of the tools, including the F# compiler. The compiler version of F# in Visual Studio 2015 is F# 4.0. F# 4.0 has no large differences or notable new features compared to the previous version, F# 3.0 in Visual Studio 2013. Its runtime characteristic is essentially the same as F# 4.0, although there are some subtle performance improvements and bug fixes. 
For more information on what's new in F# 4.0 (described as release notes) visit: https://github.com/Microsoft/visualfsharp/blob/fsharp4/CHANGELOG.md. At the time of writing this book, the online and offline MSDN Library of F# in Visual Studio does not have F# 4.0 release notes documentation, but can always go to the GitHub repository of F# to check the latest update. These are the common characteristics of F# as part of managed programming language: F# must conform to .NET CLR. This includes the compatibilities, the IL emitted after compile, and support for .NET BCL (the basic class library). Therefore, F# functions and libraries can be used by other CLR compliant languages such as C#, VB, and managed C++. The debug symbols (PDB) have the same format and semantic as other CLR compliant languages. This is important, because F# code must be able to be debugged from other CLR compliant languages as well. From the managed languages perspective, measuring performance of F# is similar when measured by tools such as the CLR profiler. But from a F# unique perspective, these are F#-only unique characteristics: By default, all types in F# are immutable. Therefore, it's safe to assume it is intrinsically thread safe. F# has a distinctive collection library, and it is immutable by default. It is also safe to assume it is intrinsically thread safe. F# has a strong type inference model, and when a generic type is inferred without any concrete type, it automatically performs generalizations. Default functions in F# are implemented internally by creating an internal class derived from F#’s FastFunc. This FastFunc is essentially a delegate that is used by F# to apply functional language constructs such as currying and partial application. With tail call recursive optimization in the IL, the F# compiler may emit .tail IL, and then the CLR will recognize this and perform optimization at runtime. F# has inline functions as option F# has a computation workflow that is used to compose functions F# async computation doesn't need Task<T> to implement it. Although F# async doesn't need the Task<T> object, it can operate well with the async-await model in C# and VB. The async-await model in C# and VB is inspired by F# async, but behaves semantically differently based on more things than just the usage of Task<T>. All of those characteristics are not only unique, but they can also have performance implications when used to interoperate with C# and VB. Summary This article explained the basic introduction to F# IDE, along with runtime characteristics of F#. Resources for Article: Further resources on this subject: Creating an F# Project [article] Unit Testing [article] Working with Windows Phone Controls [article]
Read more
  • 0
  • 0
  • 12609

article-image-python-multimedia-fun-animations-using-pyglet
Packt
31 Aug 2010
8 min read
Save for later

Python Multimedia: Fun with Animations using Pyglet

Packt
31 Aug 2010
8 min read
(For more resources on Python, see here.) So let's get on with it. Installation prerequisites We will cover the prerequisites for the installation of Pyglet in this section. Pyglet Pyglet provides an API for multimedia application development using Python. It is an OpenGL-based library, which works on multiple platforms. It is primarily used for developing gaming applications and other graphically-rich applications. Pyglet can be downloaded from http://www.pyglet.org/download.html. Install Pyglet version 1.1.4 or later. The Pyglet installation is pretty straightforward. Windows platform For Windows users, the Pyglet installation is straightforward—use the binary distribution Pyglet 1.1.4.msi or later. You should have Python 2.6 installed. For Python 2.4, there are some more dependencies. We won't discuss them in this article, because we are using Python 2.6 to build multimedia applications. If you install Pyglet from the source, see the instructions under the next sub-section, Other platforms. Other platforms The Pyglet website provides a binary distribution file for Mac OS X. Download and install pyglet-1.1.4.dmg or later. On Linux, install Pyglet 1.1.4 or later if it is available in the package repository of your operating system. Otherwise, it can be installed from source tarball as follows: Download and extractthetarballextractthetarball the tarball pyglet-1.1.4.tar.gz or a later version. Make sure that python is a recognizable command in shell. Otherwise, set the PYTHONPATH environment variable to the correct Python executable path. In a shell window, change to the mentioned extracted directory and then run the following command: python setup.py install Review the succeeding installation instructions using the readme/install instruction files in the Pyglet source tarball. If you have the package setuptools (http://pypi.python.org/pypi/setuptools) the Pyglet installation should be very easy. However, for this, you will need a runtime egg of Pyglet. But the egg file for Pyglet is not available at http://pypi.python.org. If you get hold of a Pyglet egg file, it can be installed by running the following command on Linux or Mac OS X. You will need administrator access to install the package: $sudo easy_install -U pyglet Summary of installation prerequisites Package Download location Version Windows platform Linux/Unix/OS X platforms Python http://python.org/download/releases/ 2.6.4 (or any 2.6.x) Install using binary distribution Install from binary; also install additional developer packages (For example, with python-devel in the package name in a rpm-based Linux distribution).   Build and install from the source tarball. Pyglet http://www.pyglet.org/download.html 1.1.4 or later Install using binary distribution (the .msi file) Mac: Install using disk image file (.dmg file). Linux: Build and install using the source tarball. Testing the installation Before proceeding further, ensure that Pyglet is installed properly. To test this, just start Python from the command line and type the following: >>>import pyglet If this import is successful, we are all set to go! A primer on Pyglet Pyglet provides an API for multimedia application development using Python. It is an OpenGL-based library that works on multiple platforms. It is primarily used for developing gaming and other graphically-rich applications. We will cover some important aspects of Pyglet framework. Important components We will briefly discuss some of the important modules and packages of Pyglet that we will use. 
Note that this is just a tiny chunk of the Pyglet framework. Please review the Pyglet documentation to know more about its capabilities, as this is beyond the scope of this article.

Window
The pyglet.window.Window module provides the user interface. It is used to create a window with an OpenGL context. The Window class has API methods to handle various events such as mouse and keyboard events. The window can be viewed in normal or full screen mode. Here is a simple example of creating a Window instance. You can define a size by specifying the width and height arguments in the constructor.
win = pyglet.window.Window()
The background color for the window can be set using the OpenGL call glClearColor, as follows:
pyglet.gl.glClearColor(1, 1, 1, 1)
This sets a white background color. The first three arguments are the red, green, and blue color values, whereas the last value represents the alpha. The following code will set up a gray background color.
pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)
The following illustration shows a screenshot of an empty window with a gray background color.

Image
The pyglet.image module enables the drawing of images on the screen. The following code snippet shows a way to create an image and display it at a specified position within the Pyglet window.
img = pyglet.image.load('my_image.bmp')
x, y, z = 0, 0, 0
img.blit(x, y, z)
A later section will cover some important operations supported by the pyglet.image module.

Sprite
This is another important module. It is used to display an image or an animation frame within a Pyglet window, as discussed earlier. It is an image instance that allows us to position an image anywhere within the Pyglet window. A sprite can also be rotated and scaled. It is possible to create multiple sprites of the same image and place them at different locations and with different orientations inside the window.

Animation
The Animation module is part of the pyglet.image package. As the name indicates, pyglet.image.Animation is used to create an animation from one or more image frames. There are different ways to create an animation. For example, it can be created from a sequence of images or using AnimationFrame objects. An animation sprite can be created and displayed within the Pyglet window.

AnimationFrame
This creates a single frame of an animation from a given image. An animation can be created from such AnimationFrame objects. The following line of code shows an example.
animation = pyglet.image.Animation(anim_frames)
anim_frames is a list containing instances of AnimationFrame.

Clock
Among many other things, this module is used for scheduling functions to be called at a specified time. For example, the following code calls a method moveObjects ten times every second.
pyglet.clock.schedule_interval(moveObjects, 1.0/10)

Displaying an image
In the Image sub-section, we learned how to load an image using image.blit. However, image blitting is a less efficient way of drawing images. There is a better and preferred way to display the image: by creating an instance of Sprite. Multiple Sprite objects can be created for drawing the same image. For example, the same image might need to be displayed at various locations within the window. Each of these images should be represented by a separate Sprite instance. The following simple program just loads an image and displays the Sprite instance representing this image on the screen.
1  import pyglet
2
3  car_img = pyglet.image.load('images/car.png')
4  carSprite = pyglet.sprite.Sprite(car_img)
5  window = pyglet.window.Window()
6  pyglet.gl.glClearColor(1, 1, 1, 1)
7
8  @window.event
9  def on_draw():
10     window.clear()
11     carSprite.draw()
12
13 pyglet.app.run()

On line 3, the image is opened using the pyglet.image.load call. A Sprite instance corresponding to this image is created on line 4. The code on line 6 sets a white background for the window. on_draw is an API method that is called when the window needs to be redrawn. Here, the image sprite is drawn on the screen. The next illustration shows a loaded image within a Pyglet window.

In various examples in this article, the file path strings are hardcoded. We have used forward slashes for the file path. Although this works on the Windows platform, the convention there is to use backward slashes. For example, images/car.png is represented as images\car.png. Additionally, you can also specify a complete path to the file by using the os.path.join method in Python. Regardless of what slashes you use, os.path.normpath will make sure it modifies the slashes to fit the ones used for the platform. The use of os.path.normpath is illustrated in the following snippet:

import os
original_path = 'C:/images/car.png'
new_path = os.path.normpath(original_path)

The preceding image illustrates a Pyglet window showing a still image.

Mouse and keyboard controls
The Window module of Pyglet implements some API methods that enable user input to a playing animation. API methods such as on_mouse_press and on_key_press are used to capture mouse and keyboard events during the animation. These methods can be overridden to perform a specific operation.

Adding sound effects
The media module of Pyglet supports audio and video playback. The following code loads a media file and plays it during the animation.

1 background_sound = pyglet.media.load(
2     'C:/AudioFiles/background.mp3',
3     streaming=False)
4 background_sound.play()

The second optional argument, provided on line 3, decodes the media file completely in memory at the time the media is loaded. This is important if the media needs to be played several times during the animation. The API method play() starts streaming the specified media file.
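Putting these pieces together, the following is a minimal sketch (ours, not from the original article) of a small Pyglet program that combines a window, a sprite, a keyboard handler, and a clock-scheduled update function. The image path images/car.png and the movement speed are illustrative assumptions only.

import pyglet
from pyglet.window import key

window = pyglet.window.Window()
car_img = pyglet.image.load('images/car.png')  # assumed to exist
car_sprite = pyglet.sprite.Sprite(car_img)

@window.event
def on_draw():
    # Redraw the window with the sprite at its current position.
    window.clear()
    car_sprite.draw()

@window.event
def on_key_press(symbol, modifiers):
    # Close the window when the Escape key is pressed.
    if symbol == key.ESCAPE:
        window.close()

def move_car(dt):
    # dt is the elapsed time (in seconds) since the last call.
    car_sprite.x += 100 * dt

# Call move_car ten times every second, like moveObjects above.
pyglet.clock.schedule_interval(move_car, 1.0/10)

pyglet.app.run()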

Embedding Doctests in Python Docstrings

Packt
29 Jan 2010
12 min read
Doctests aren't confined to simple text files. You can put doctests into Python's docstrings.

Why would you want to do that? There are a couple of reasons. First of all, docstrings are an important part of the usability of Python code (but only if they tell the truth). If the behavior of a function, method, or module changes and the docstring doesn't get updated, then the docstring becomes misinformation, and a hindrance rather than a help. If the docstring contains a couple of doctest examples, then the out-of-date docstrings can be located automatically. Another reason for placing doctest examples into docstrings is simply that it can be very convenient. This practice keeps the tests, documentation, and code all in the same place, where it can all be located easily.

If the docstring becomes home to too many tests, this can destroy its utility as documentation. This should be avoided; if you find yourself with so many tests in the docstrings that they aren't useful as a quick reference, move most of them to a separate file.

Time for action – embedding a doctest in a docstring
We'll embed a test right inside the Python source file that it tests, by placing it inside a docstring.

Create a file called test.py with the following contents:

def testable(x):
    r"""
    The `testable` function returns the square root of its
    parameter, or 3, whichever is larger.

    >>> testable(7)
    3.0
    >>> testable(16)
    4.0
    >>> testable(9)
    3.0
    >>> testable(10) == 10 ** 0.5
    True
    """
    if x < 9:
        return 3.0
    return x ** 0.5

At the command prompt, change to the directory where you saved test.py and then run the tests by typing:

$ python -m doctest test.py

As mentioned earlier, if you have an older version of Python, this isn't going to work for you. Instead, you need to type:

python -c "__import__('doctest').testmod(__import__('test'))"

If everything worked, you shouldn't see anything at all. If you want some confirmation that doctest is doing something, turn on verbose reporting by changing the command to:

python -m doctest -v test.py

For older versions of Python, instead use:

python -c "__import__('doctest').testmod(__import__('test'), verbose=True)"

What just happened
You put the doctest right inside the docstring of the function it was testing. This is a good place for tests that also show a user how to do something. It's not a good place for detailed, low-level tests (the above example, which was quite detailed for illustrative purposes, is skirting the edge of being too detailed), because docstrings need to serve as API documentation. You can see the reason for this just by looking back at the example, where the doctests take up most of the room in the docstring, without telling the readers any more than they would have learned from a single test.

Any test that will serve as good API documentation is a good candidate for including in the docstrings.

Notice the use of a raw string for the docstring (denoted by the r character before the first triple-quote). Using raw strings for your docstrings is a good habit to get into, because you usually don't want escape sequences—for example, \n for newline—to be interpreted by the Python interpreter. You want them to be treated as text, so that they are correctly passed on to doctest.

Doctest directives
Embedded doctests can accept exactly the same directives as doctests in text files can, using exactly the same syntax. Because of this, all of the doctest directives that we discussed before can also be used to affect the way embedded doctests are evaluated.
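As a quick illustration (this snippet is ours, not part of the original example), the following docstring uses the standard +ELLIPSIS directive so that an example whose output varies from run to run still passes; the function name make_thing is invented purely for this sketch.

def make_thing():
    r"""Create and return a plain object.

    The memory address in the repr changes on every run, so the
    ELLIPSIS directive lets the expected output match anyway.

    >>> make_thing()   # doctest: +ELLIPSIS
    <object object at 0x...>
    """
    return object()

Running python -m doctest on a file containing this function should report no failures, because the ... in the expected output matches whatever address is actually printed.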
Execution scope
Doctests embedded in docstrings have a somewhat different execution scope than doctests in text files do. Instead of having a single scope for all of the tests in the file, doctest creates a single scope for each docstring. All of the tests that share a docstring also share an execution scope, but they're isolated from tests in other docstrings.

The separation of each docstring into its own execution scope often means that we don't need to put much thought into isolating doctests when they're embedded in docstrings. That is fortunate, since docstrings are primarily intended for documentation, and the tricks needed to isolate the tests might obscure the meaning.

Putting it in practice: an AVL tree
We'll walk step-by-step through the process of using doctest to create a testable specification for a data structure called an AVL tree. An AVL tree is a way to organize key-value pairs, so that they can be quickly located by key. In other words, it's a lot like Python's built-in dictionary type. The name AVL references the initials of the people who invented this data structure.

As its name suggests, an AVL tree organizes the keys that are stored in it into a tree structure, with each key having up to two child keys—one child key that is less than the parent key by comparison, and one that is more. In the following picture, the key Elephant has two child keys, Goose has one, and Aardvark and Frog both have none.

The AVL tree is special, because it keeps one side of the tree from getting much taller than the other, which means that users can expect it to perform reliably and efficiently no matter what. In the previous image, an AVL tree would reorganize to stay balanced if Frog gained a child.

We'll write tests for an AVL tree implementation here, rather than writing the implementation itself. Therefore, we'll gloss over the details of how an AVL tree works, in favor of looking at what it should do when it works right. If you want to know more about AVL trees, you will find many good references on the Internet. Wikipedia's entry on the subject is a good place to start: http://en.wikipedia.org/wiki/AVL_tree.

We'll start with a plain language specification, and then interject tests between the paragraphs. You don't have to actually type all of this into a text file; it is here for you to read and to think about.

English specification
The first step is to describe what the desired result should be, in normal language. This might be something that you do for yourself, or something that somebody else does for you. If you're working for somebody, hopefully you and your employer can sit down together and work this part out.

In this case, there's not much to work out, because AVL trees have been fully described for decades. Even so, the description here isn't quite like one you'd find anywhere else. This capacity for ambiguity is exactly the reason why a plain language specification isn't good enough. We need an unambiguous specification, and that's exactly what the tests in a doctest file can give us.

The following text goes in a file called AVL.txt (which you can find in its final form in the accompanying code archive; at this stage of the process, the file contains only the normal language specification):

An AVL Tree consists of a collection of nodes organized in a binary
tree structure. Each node has left and right children, each of which
may be either None or another tree node. Each node has a key, which
must be comparable via the less-than operator.
Each node has a value. Each node also has a height number, measuring
how far the node is from being a leaf of the tree -- a node with
height 0 is a leaf.

The binary tree structure is maintained in ordered form, meaning that
of a node's two children, the left child has a key that compares less
than the node's key and the right child has a key that compares
greater than the node's key.

The binary tree structure is maintained in a balanced form, meaning
that for any given node, the heights of its children are either the
same or only differ by 1.

The node constructor takes either a pair of parameters representing
a key and a value, or a dict object representing the key-value pairs
with which to initialize a new tree.

The following methods target the node on which they are called, and
can be considered part of the internal mechanism of the tree:

Each node has a recalculate_height method, which correctly sets the
height number.

Each node has a make_deletable method, which exchanges the positions
of the node and one of its leaf descendants, such that the tree
ordering of the nodes remains correct.

Each node has rotate_clockwise and rotate_counterclockwise methods.
Rotate_clockwise takes the node's right child and places it where
the node was, making the node into the left child of its own former
child. Other nodes in the vicinity are moved so as to maintain the
tree ordering. The opposite operation is performed by
rotate_counterclockwise.

Each node has a locate method, taking a key as a parameter, which
searches the node and its descendants for a node with the specified
key, and either returns that node or raises a KeyError.

The following methods target the whole tree rooted at the current
node. The intent is that they will be called on the root node:

Each node has a get method taking a key as a parameter, which locates
the value associated with the specified key and returns it, or raises
KeyError if the key is not associated with any value in the tree.

Each node has a set method taking a key and a value as parameters,
and associating the key and value within the tree.

Each node has a remove method taking a key as a parameter, and
removing the key and its associated value from the tree. It raises
KeyError if no value was associated with that key.

Node data
The first three paragraphs of the specification describe the member variables of an AVL tree node, and tell us what the valid values for the variables are. They also tell us how tree height should be measured and define what a balanced tree means. It's our job now to take up those ideas, and encode them into tests that the computer can eventually use to check our code.

We could check these specifications by creating a node and then testing the values, but that would really just be a test of the constructor. It's important to test the constructor, but what we really want to do is to incorporate checks that the node variables are left in a valid state into our tests of each member function.

To that end, we'll define a function that our tests can call to check that the state of a node is valid. We'll define that function just after the third paragraph:

Notice that this test is written as if the AVL tree implementation already existed. It tries to import an avl_tree module containing an AVL class, and it tries to use the AVL class in specific ways. Of course, at the moment there is no avl_tree module, so the test will fail. That's as it should be.
All that the failure means is that, when the time comes to implement the tree, we should do so in a module called avl_tree, with contents that function as our test assumes. Part of the benefit of testing like this is being able to test-drive your code before you even write it.

>>> from avl_tree import AVL

>>> def valid_state(node):
...     if node is None:
...         return
...     if node.left is not None:
...         assert isinstance(node.left, AVL)
...         assert node.left.key < node.key
...         left_height = node.left.height + 1
...     else:
...         left_height = 0
...
...     if node.right is not None:
...         assert isinstance(node.right, AVL)
...         assert node.right.key > node.key
...         right_height = node.right.height + 1
...     else:
...         right_height = 0
...
...     assert abs(left_height - right_height) < 2
...     node.key < node.key
...     node.value

>>> def valid_tree(node):
...     if node is None:
...         return
...     valid_state(node)
...     valid_tree(node.left)
...     valid_tree(node.right)

Notice that we didn't actually call those functions yet. They aren't tests, per se, but tools that we'll use to simplify writing tests. We define them here, rather than in the Python module that we're going to test, because they aren't conceptually part of the tested code, and because anyone who reads the tests will need to be able to see what the helper functions do.

Constructor
The fourth paragraph describes the constructor for an AVL node:

The node constructor takes either a pair of parameters representing a key and a value, or a dict object representing the key-value pairs with which to initialize a new tree.

The constructor has two possible modes of operation: it can either create a single initialized node or it can create and initialize a whole tree of nodes. The test for the single node mode is easy:

>>> valid_state(AVL(2, 'Testing is fun'))

The other mode of the constructor is a problem, because it is almost certain that it will be implemented by creating an initial tree node and then calling its set method to add the rest of the nodes. Why is that a problem? Because we don't want to test the set method here: this test should be focused entirely on whether the constructor works correctly, when everything it depends on works.

In other words, the tests should be able to assume that everything outside of the specific chunk of code being tested works correctly. However, that's not always a valid assumption. So, how can we write tests for things that call on code outside of what's being tested? There is a solution for this problem. For now, we'll just leave the second mode of operation of the constructor untested.
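For readers who want to run the doctest above, here is one possible minimal skeleton of the avl_tree module that the import statement assumes. This is only an illustrative sketch under stated assumptions, not the implementation developed in the original book; a real AVL class would also have to maintain ordering and balance and provide the methods named in the specification.

# avl_tree.py -- a minimal skeleton matching what the doctests assume.
class AVL(object):
    def __init__(self, key, value=None):
        # The specification also allows a dict of key-value pairs here;
        # that mode is deliberately left out of this sketch, just as it
        # is left untested in the text above.
        self.key = key
        self.value = value
        self.left = None
        self.right = None
        self.height = 0

With this file on the Python path, the valid_state(AVL(2, 'Testing is fun')) example passes, while the remaining methods from the specification (get, set, remove, locate, and the rotation helpers) still need to be written.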

Distributed resource scheduler

Packt
07 Jan 2016
14 min read
In this article written by Christian Stankowic, author of the book vSphere High Performance Essentials, we look at the Distributed Resource Scheduler (DRS). In cluster setups, DRS can assist you with automatically balancing CPU and storage load (Storage DRS). DRS monitors the ESXi hosts in a cluster and migrates the running VMs using vMotion, primarily, to ensure that all the VMs get the resources they need. Secondarily, it tries to balance the cluster. In addition to this, Storage DRS monitors the shared storage for information about latency and capacity consumption. In this case, Storage DRS recognizes the potential to optimize storage resources; it will make use of Storage vMotion to balance the load. We will cover Storage DRS in detail later.

(For more resources related to this topic, see here.)

Working of DRS
DRS primarily uses two metrics to determine the cluster balance:

Active host CPU: It includes the usage (CPU task time in ms) and ready (wait times in ms per VM to get scheduled on physical cores) metrics.
Active host memory: It describes the amount of memory pages that are predicted to have changed in the last 20 seconds. A math-sampling algorithm calculates this amount; however, it is quite inaccurate.

Active host memory is often used for resource capacity purposes. Be careful with using this value as an indicator, as it only describes how aggressively a workload changes the memory. Depending on your application architecture, it may not measure how much memory a particular VM really needs. Think about applications that allocate a lot of memory for the purpose of caching. Using the active host memory metric for capacity purposes might lead to inappropriate settings.

The migration threshold controls DRS's aggressiveness and defines how much a cluster can be imbalanced. Refer to the following levels for a detailed explanation:

Most conservative (priority 1): Only affinity/anti-affinity constraints are applied.
More conservative (priorities 1-2): This will also apply recommendations addressing significant improvements.
Balanced, the default (priorities 1-3): Recommendations that, at least, promise good improvements are applied.
More aggressive (priorities 1-4): DRS also applies recommendations that only promise a moderate improvement.
Most aggressive (priorities 1-5): Recommendations addressing even smaller improvements are applied.

Apart from the migration threshold, two other metrics—Target Host Load Standard Deviation (THLSD) and Current Host Load Standard Deviation (CHLSD)—are calculated. THLSD defines how much a cluster node's load can differ from others in order to still be considered balanced. The migration threshold and the particular ESXi host's active CPU and memory values heavily influence this metric. CHLSD calculates whether the cluster is currently balanced. If this value differs from the THLSD, the cluster is imbalanced and DRS will calculate recommendations in order to balance it.

In addition to this, DRS also calculates the vMotion overhead that is needed for the migration. If a migration's overhead is deemed higher than the benefit, vMotion will not be executed. DRS also evaluates the migration recommendations multiple times in order to avoid ping-pong migrations.

By default, once enabled, DRS is polled every five minutes (300 seconds). Depending on your landscape, it might be required to change this behavior. To do so, you need to alter the vpxd.cfg configuration file on the vCenter Server machine.
Search for the following lines and alter the period (in seconds):

<config>
  <drm>
    <pollPeriodSec>
      300
    </pollPeriodSec>
  </drm>
</config>

Refer to the following locations for the configuration file, depending on your vCenter implementation:

vCenter Server Appliance: /etc/vmware-vpx/vpxd.cfg
vCenter Server on Windows: C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg

Check-list – performance tuning
There are a couple of things to be considered when optimizing DRS for high-performance setups, as shown in the following:

Make sure to use hosts with homogenous CPU and memory configuration. Having different nodes will make DRS less effective.
Use at least a 1 Gbps network connection for vMotion. For better performance, it is recommended to use 10 Gbps instead.
For virtual machines, it is a common procedure not to oversize them. Only configure as much CPU and memory resources as needed. Migrating workloads with unneeded resources takes more time.
Make sure not to exceed the ESXi host and cluster limits that are mentioned in the VMware vSphere Configuration Maximums document. For vSphere 5.5, refer to https://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf. For vSphere 6.0, refer to https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf.

Configure DRS
To configure DRS for your cluster, proceed with the following steps:

Select your cluster from the inventory tab and click Manage and Settings.
Under Services, select vSphere DRS. Click Edit.
Select whether DRS should act in the Partially Automated or Fully Automated mode. In partially automated mode, DRS will place VMs on appropriate hosts once they are powered on; however, it will not migrate the running workloads. In fully automated mode, DRS will also migrate the running workloads in order to balance the cluster load. The Manual mode only gives you recommendations, and the administrator can select the recommendations to apply. To create resource pools at cluster level, you will need to have at least the manual mode enabled.
Select the DRS aggressiveness. Refer to the preceding levels for a short explanation. Using more aggressive DRS levels is only recommended when having homogenous CPU and memory setups!

When creating VMware support calls regarding DRS issues, a DRS dump file called drmdump is important. This file contains various metrics that DRS uses to calculate the possible migration benefits. On the vCenter Server Appliance, this file is located in /var/log/vmware/vpx/drmdump/clusterName. On the Windows variant, the file is located in %ALLUSERSPROFILE%\VMware\VMware VirtualCenter\Logs\drmdump\clusterName.

VMware also offers an online tool called VM Resource and Availability Service (http://hasimulator.vmware.com), telling you which VMs can be restarted during ESXi host failures. It requires you to upload this metric file in order to give you the results. This can be helpful when simulating failure scenarios.

Enhanced vMotion Compatibility
Enhanced vMotion Compatibility (EVC) enables your cluster to migrate workloads between ESXi hosts with different processor generations. Unfortunately, it is not possible to migrate workloads between Intel-based and AMD-based servers; EVC only enables migrations between different Intel or AMD CPU generations. Once enabled, all the ESXi hosts are configured to provide the same set of CPU functions.
In other words, the functions of newer CPU generations are disabled to match those of the older ESXi hosts in the cluster, in order to create a common baseline.

Configuring EVC
To enable EVC, perform the following steps:

Select the affected cluster from the inventory tab.
Click on Manage, Settings, VMware EVC, and Edit.
Choose Enable EVC for AMD Hosts or Enable EVC for Intel Hosts.
Select the appropriate CPU generation for the cluster (the oldest).
Make sure that Compatibility acknowledges your configuration. Save the changes, as follows:

As mixing older hosts into high-performance clusters is not recommended, you should also avoid using EVC.

To sum it up, keep the following steps in mind when planning the use of DRS:

Enable DRS if you plan to have automatic load balancing; this is highly recommended for high-performance setups.
Adjust the DRS aggressiveness level to match your requirements. Too aggressive migration thresholds may result in too many migrations; therefore, play with this setting to find the best one for you.
Make sure to have a separate vMotion network. Using the same logical network components as for the VM traffic is not recommended and might result in poor workload performance.
Don't overload ESXi hosts, to spare some CPU resources for vMotion processes in order to avoid performance bottlenecks during migrations.
In high-performance setups, mixing various CPU and memory configurations is not recommended; to achieve better performance, try not to use EVC.
Also, keep license constraints in mind when configuring DRS. Some software products might require additional licenses if they run on multiple servers. We will focus on this later.

Affinity and anti-affinity rules
Sometimes, it is necessary to separate workloads or stick them together. To name some examples, think about classical multi-tier applications such as the following:

Frontend layer
Database layer
Backend layer

One possibility would be to separate the particular VMs onto multiple ESXi hosts to increase resilience. If a single ESXi host that is serving all the workloads crashes, all application components are affected by this fault. Moving all the participating application VMs to one single ESXi host can result in higher performance, as network traffic does not need to leave the ESXi host.

However, there are more use cases to create affinity and anti-affinity rules, as shown in the following:

Dividing production, development, and test workloads. For example, it would be possible to separate production from the development and test workloads. This is a common procedure that many application vendors require.
Licensing reasons (for example, a license bound to a USB dongle, per-core licensing, software assurance denying vMotion, and so on).
Application interoperability incompatibility (for example, applications that need to run on separated hosts).

As VMware vSphere has no knowledge about the license conditions of the workloads running virtualized, it is very important to check your software vendor's license agreements. You, as a virtual infrastructure administrator, are responsible for ensuring that your software is fully licensed. Some software vendors require special licenses when running virtualized/on multiple hosts.

There are two kinds of affinity/anti-affinity rules: VM-Host (relationship between VMs and ESXi hosts) and VM-VM (intra-relationship between particular VMs). Each rule consists of at least one VM and host DRS group. These groups also contain at least one entry.
Every rule has a designation, where the administrator can choose between must or should. Implementing a rule with the should designation results in a preference for hosts satisfying all the configured rules. If no applicable host is found, the VM is put on another host in order to ensure that at least the workload is running. If the must designation is selected, a VM only runs on hosts that satisfy the configured rules. If no applicable host is found, the VM cannot be moved or started. This configuration approach is strict and requires extensive testing in order to avoid unplanned effects.

DRS rules are combined rather than ranked. Therefore, if multiple rules are defined for a particular VM/host or VM/VM combination, the power-on process is only granted if all the rules apply to the requested action. If two rules are conflicting for a particular VM/host or VM/VM combination, the first rule is chosen and the other rule is automatically disabled. In particular, the use of must rules should be evaluated very carefully, as HA might not restart some workloads if these rules cannot be followed in case of a host crash.

Configuring affinity/anti-affinity rules
In this example, we will have a look at two use cases that affinity/anti-affinity rules can apply to.

Example 1: VM-VM relationship
This example consists of two VMs serving a two-tier application: db001 (database VM) and web001 (frontend VM). It is advisable to have both VMs running on the same physical host in order to reduce networking hops to connect the frontend server to its database.

To configure the VM-VM affinity rule, proceed with the following steps:

Select your cluster from the inventory tab and click Manage and VM/Host Rule underneath Configuration.
Click Add. Enter a readable rule name (for example, db001-web001-bundle) and select Enable rule.
Select the Keep Virtual Machines Together type and select the affected VMs.
Click OK to save the rule, as shown in the following:

When migrating one of the virtual machines using vMotion, the other VM will also migrate.

Example 2: VM-Host relationship
In this example, a VM (vcsa) is pinned to a particular ESXi host of a two-node cluster designated for production workloads.

To configure the VM-Host affinity rule, proceed with the following steps:

Select your cluster from the inventory tab and click Manage and VM/Host Groups underneath Configuration.
Click Add. Enter a group name for the VM; make sure to select the VM Group type. Also, click Add to add the affected VM.
Click Add once again. Enter a group name for the ESXi host; make sure to select the Host Group type. Later, click Add to add the ESXi host.
Select VM/Host Rule underneath Configuration and click Add. Enter a readable rule name (for example, vcsa-to-esxi02) and select Enable rule.
Select the Virtual Machines to Hosts type and select the previously created VM and host groups.
Make sure to select Must run on hosts in group or Should run on hosts in group before clicking OK, as follows:

Migrating the virtual machine to another host will fail with the following error message if Must run on hosts in group was selected earlier:

Keep the following in mind when designing affinity and anti-affinity rules:

Enable DRS.
Double-check your software vendor's licensing agreements.
Make sure to test your affinity/anti-affinity rules by simulating vMotion processes. Also, simulate host failures by using maintenance mode to ensure that your rules are working as expected.
Note that the created rules also apply to HA and DPM.
KISS – Keep it simple, stupid. Try to avoid utilizing too many or multiple rules for one VM/host combination.

Distributed power management
High-performance setups are often the opposite of efficient, green infrastructures; however, high-performing virtual infrastructure setups can be efficient as well. Distributed Power Management (DPM) can help you with reducing the power costs and consumption of your virtual infrastructure. It is part of DRS and monitors the CPU and memory usage of all workloads running in the cluster. If it is possible to run all VMs on fewer hosts, DPM will put one or more ESXi hosts in standby mode (they will be powered off) after migrating the VMs using vMotion.

DPM tries to keep the CPU and memory usage between 45% and 81% for all the cluster nodes by default. If this range is exceeded, hosts will be powered on/off. Setting two advanced parameters can change this behavior, as follows:

DemandCapacityRatioTarget: Utilization target for the ESXi hosts (default: 63%)
DemandCapacityRatioToleranceHost: Utilization range around the target utilization (default: 18%)

The range is calculated as follows: (DemandCapacityRatioTarget - DemandCapacityRatioToleranceHost) to (DemandCapacityRatioTarget + DemandCapacityRatioToleranceHost). With the default values, this gives a range of (63% - 18%) to (63% + 18%), that is, 45% to 81%.

To control a server's power state, DPM makes use of these three protocols in the following order:

Intelligent Platform Management Interface (IPMI)
Hewlett Packard Integrated Lights-Out (HP iLO)
Wake-on-LAN (WoL)

To enable IPMI/HP iLO management, you will need to configure the Baseboard Management Controller (BMC) IP address and other access information. To configure them, follow the given steps:

Log in to vSphere Web Client and select the host that you want to configure for power management.
Click on Configuration and select the Power Management tab.
Select Properties and enter an IP address, MAC address, username, and password for the server's BMC. Note that entering hostnames will not work, as shown in the following:

To enable DPM for a cluster, perform the following steps:

Select the cluster from the inventory tab and select Manage.
From the Services tab, select vSphere DRS and click Edit.
Expand the Power Management tab and select Manual or Automatic. Also, select the threshold DPM will use to make power decisions. The higher the value, the faster DPM will put the ESXi hosts in standby mode, as follows:

It is also possible to disable DPM for a particular host (for example, the strongest in your cluster). To do so, select the cluster and select Manage and Host Options. Check the host and click Edit. Make sure to select Disabled for the Power Management option.

Consider giving a thought to the following when planning to utilize DPM:

Make sure your servers have a supported BMC, such as HP iLO or IPMI.
Evaluate the right DPM threshold. Also, keep your servers' boot time (including firmware initialization) in mind and test your configuration before running in production.
Keep in mind that DPM also uses active memory and CPU usage for its decisions. Booting VMs might claim all their memory; however, they may not use many active memory resources. If hosts are powered down while plenty of VMs are booting, this might result in extensive swapping.

Summary
In this article, you learned how to implement the affinity and anti-affinity rules. You have also learned how to save power, while still meeting your workload requirements.
Resources for Article:
Further resources on this subject:
Monitoring and Troubleshooting Networking [article]
Storage Scalability [article]
Upgrading VMware Virtual Infrastructure Setups [article]

How Change.org uses Flow, Elixir’s library to build concurrent data pipelines that can handle a trillion messages

Sugandha Lahoti
22 May 2019
7 min read
Last month, at ElixirConf EU 2019, John Mertens, Principal Engineer at Change.org, conducted a session - Lessons From Our First Trillion Messages with Flow - for developers interested in using Elixir for building data pipelines in real-world systems.

For many Elixir converts, the attraction of Elixir is rooted in the promise of the BEAM concurrency model. The Flow library has made it easy to build concurrent data pipelines utilizing the BEAM (originally, BEAM was short for Bogdan's Erlang Abstract Machine, named after Bogumil "Bogdan" Hausman, who wrote the original version, but the name may also be referred to as Björn's Erlang Abstract Machine, after Björn Gustavsson, who wrote and maintains the current version). The problem is that, while the docs are great, there are not many resources on running Flow-based systems in production. In his talk, John shares some lessons his team learned from processing their first trillion messages through Flow.

Using Flow at Change.org
Change.org is a platform for social change where people from all over the world come to start movements on all topics and of all sizes. Technologically, change.org is primarily built in Ruby and JavaScript, but they started using Elixir in early 2018 to build a high-volume, mission-critical data processing pipeline. They used Elixir for building this new system because of its library, Flow.

Flow is a library for computational parallel flows in Elixir. It is built on top of GenStage. GenStage is a "specification and computational flow for Elixir", meaning it provides a way for developers to define a pipeline of work to be carried out by independent steps (or stages) in separate processes. Flow allows developers to express computations on collections, similar to the Enum and Stream modules, although computations will be executed in parallel using multiple GenStages.

At Change.org, the developers built some proofs of concept in a few different languages and put them against each other, with the two main criteria being performance and developer happiness. Elixir came out as the clear winner.

Whenever an event gets added to a queue on change.org, their Elixir system pulls these messages off the queue, preps and transforms them according to some business logic, and generates some side effects. Next, depending on a few parameters, the messages are either passed on to another system, discarded, or retried. So far things have gone smoothly for them, which brought John to discuss lessons from processing their first trillion messages with Flow.

Lesson 1: Let Flow do the work
Flow and GenStage are both great libraries which provide a few game-changing features by default. The first is parallelism. Parallelism is beneficial for large-scale data processing pipelines, and Flow's abstractions make utilizing parallelism easier. It is as easy as writing code that looks essentially like a standard Elixir pipeline but that utilizes all of your CPU cores.

The second feature of Flow is backpressure. GenStage specifies how Elixir processes should communicate with back-pressure. Simply put, backpressure is when your system asks for more data to process instead of data being pushed onto it. With Flow, your data processing is in charge of requesting more events. This means that if your process is dependent on some other service and that service becomes slow, your whole flow just slows down accordingly and no service gets overloaded with requests, making the whole system stay up.
Lesson 2: Organize your Flow
The next lesson is on how to set up your code to take advantage of Flow. These organizational tactics help Change.org keep their Flow system manageable in practice.

Keep the Flow simple
The golden rule, according to John, is to keep your flow simple. Start simple and then increase the complexity depending on the constraints of your system. He discusses a quote from the Flow docs, which states:

"If you can solve a problem without using partition at all, that is preferred. Those are typically called embarrassingly parallel problems."

If you can shape your problem into an embarrassingly parallel problem, he says, Flow can really shine.

Know your code and your system
He also advises that developers should know their code and understand their systems. He then proceeds to give an example of how SQS is used in Flow. Amazon SQS (Simple Queue Service) is a message-queuing system (also used at Change.org) that allows you to write distributed applications by exposing a message pipeline that can be processed in the background by workers. Its two main features are the visibility window and acknowledgments. With acknowledgments, when you pull a message off a queue you have a set amount of time to acknowledge that you've received and processed that message; that amount of time is called the visibility window, and it is configurable. If you don't acknowledge the message within the visibility window, it goes back into the queue. If a message is pulled and not acknowledged a configured number of times, then it is either discarded or sent to a dead letter queue. He then proceeds to use an example of a Flow they use in production.

Maintain a consistent data structure
You should also use a consistent data structure, or token, throughout the data pipeline. The data structure most essential to their flow at Change.org is the message struct - %Message{}. When a message comes in from SQS, they create a message struct based on it. The consistency of having the same data structure at every step in the flow is how they keep their system simple. He then explains example code on how they can handle different types of data while keeping the flow simple.

Isolate the side effects
The next organizational tactic that Change.org employs to keep their Flow system manageable in practice is to isolate the side effects. Side effects are mutations; if something goes wrong in a mutation, you need to be able to roll it back. In the spirit of keeping the flow simple, at Change.org they batch all the side effects together and put them at the end, so that nothing gets lost if they need to roll back.

However, there are certain cases where you can't put all side effects together and need a different strategy. These cases can be handled using Flow sagas. The saga pattern is a way to handle long-lived transactions, providing rollback instructions for each step along the way so that, if a step goes bad, the pipeline can just run that rollback function. There is also an Elixir implementation of sagas called Sage.

Lesson 3: Tune the Flow
How you optimize your Flow depends upon the shape of your problem. This means tailoring the Flow to your own use case to squeeze out all the throughput. There are three aspects to this: measuring Flow performance, the things we can actually do to tune it, and how we can help from outside the Flow.
Apart from the three main lessons on data processing through Flow, John also mentions a few others, namely:

Graceful producer shutdowns
Flow-level integration tests
Complex batching

Finally, John gave a glimpse of Broadway in the context of change.org's codebase. Broadway allows developers to build concurrent and multi-stage data ingestion and data processing pipelines with Elixir. It takes on the burden of defining concurrent GenStage topologies and provides a simple configuration API that automatically defines concurrent producers, concurrent processing, batch handling, and more, leading to both time- and cost-efficient ingestion and processing of data. Some of its features include back-pressure, automatic acknowledgments at the end of the pipeline, batching, automatic restarts in case of failures, graceful shutdown, built-in testing, and partitioning. José Valim's keynote at ElixirConf EU 2019 also talked about streamlining data processing pipelines using Broadway.

You can watch the full video of John Mertens' talk here. John is the principal scientist at Change.org using Elixir to empower social action in his organization.

Why Ruby developers like Elixir
Introducing Mint, a new HTTP client for Elixir
Developer community mourns the loss of Joe Armstrong, co-creator of Erlang

Basics of Python for Absolute Beginners

Packt
19 Jun 2017
5 min read
In this article by Bhaskar Das and Mohit Raj, authors of the book Learn Python in 7 Days, we will learn the basics of Python.

The Python language had a humble beginning in the late 1980s, when a Dutchman, Guido van Rossum, started working on a fun project that would be a successor to the ABC language, with better exception handling and the capability to interface with the Amoeba OS, at Centrum Wiskunde & Informatica. It first appeared in 1991. Python 2.0 was released in the year 2000 and Python 3.0 was released in the year 2008. The language was named Python after the famous British television comedy show Monty Python's Flying Circus, which was one of the favorite television programmes of Guido.

Here, we will see why Python has suddenly influenced our lives, various applications that use Python, and Python's implementations. In this article, you will be learning the basic installation steps required on different platforms (that is, Windows, Linux, and Mac), about environment variables, setting up environment variables, file formats, the Python interactive shell, basic syntax, and, finally, printing out formatted output.

(For more resources related to this topic, see here.)

Why Python?
Now you might suddenly be bogged down with the question, why Python? According to the Institute of Electrical and Electronics Engineers (IEEE) 2016 ranking, Python ranked third after C and Java. As per Indeed.com's data of 2016, Python ranked fifth in job market searches. Clearly, all the data points to the ever-rising demand for Python in the job market. It's a cool language if you want to learn it just for fun. Also, you will adore the language if you want to build your career around Python. At the school level, many schools have started including Python programming for kids. With new technologies taking the market by surprise, Python has been playing a dominant role. Whether it's a cloud platform, mobile app development, BigData, IoT with Raspberry Pi, or the new Blockchain technology, Python is being seen as a niche language platform to develop and deliver scalable and robust applications.

Some key features of the language are:

Python programs can run on any platform; you can carry code created on a Windows machine and run it on Mac or Linux
Python has a large inbuilt library with prebuilt and portable functionality, known as the standard library
Python is an expressive language
Python is free and open source
Python code is about one third of the size of equivalent C++ and Java code
Python is both dynamically and strongly typed; in dynamic typing, the type of a variable is interpreted at runtime, which means that there is no need to define the type (int, float) of a variable in Python

Python applications
One of the most famous platforms where Python is extensively used is YouTube. Other places where you will find Python being extensively used are special effects in Hollywood movies, drug evolution and discovery, traffic control systems, ERP systems, cloud hosting, e-commerce platforms, CRM systems, and whichever field you can think of.

Versions
At the time of writing this book, the two main versions of the Python programming language available in the market were Python 2.x and Python 3.x. The stable releases at the time of writing this book were Python 2.7.13 and Python 3.6.0.

Implementations of Python
Major implementations include CPython, Jython, IronPython, MicroPython, and PyPy.
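Before moving on to installation, here is a tiny illustration of the dynamic-but-strong typing mentioned in the key features above (this snippet is ours, not from the book):

# Dynamic typing: the same name can be bound to values of different types at runtime.
x = 42            # x refers to an int
x = "forty-two"   # now x refers to a str; no type declaration needed

# Strong typing: incompatible types are not silently coerced.
try:
    result = "7" + 1   # mixing str and int raises TypeError
except TypeError as err:
    print("TypeError:", err)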
Installation
Here, we will look at the installation of Python on three different OS platforms, namely Windows, Linux, and Mac OS. Let's begin with the Windows platform.

Installation on the Windows platform
Python 2.x can be downloaded from https://www.python.org/downloads. The installer is simple and easy to install. Follow these steps to install the setup:

Once you click on the setup installer, you will get a small window on your desktop screen as shown. Click on Next:
Provide a suitable installation folder to install Python. If you don't provide the installation folder, then the installer will automatically create an installation folder for you, as shown in the following screenshot. Click on Next:
After the completion of Step 2, you will get a window to customize Python, as shown in the following screenshot. Note that the Add python.exe to Path option is marked with an x (that is, not selected by default). Select this option to add it to the system path variable. Click on Next:
Finally, click Finish to complete the installation:

Summary
So far, we walked through the beginning and brief history of Python. We looked at the various implementations and flavors of Python. You also learned about installing Python on Windows OS. Hope this article has incited enough interest in Python and serves as your first step into the kingdom of Python, with its enormous possibilities!

Resources for Article:
Further resources on this subject:
Layout Management for Python GUI [article]
Putting the Fun in Functional Python [article]
Basics of Jupyter Notebook and Python [article]
Designing a User Interface

Packt
23 Nov 2016
7 min read
In this article by Marcin Jamro, the author of the book Windows Application Development Cookbook, we will see how to add a button to your application.

(For more resources related to this topic, see here.)

Introduction
You know how to start your adventure by developing universal applications for smartphones, tablets, and desktops running on the Windows 10 operating system. In the next step, it is crucial to get to know how to design particular pages within the application to provide the user with a convenient user interface that works smoothly on screens with various resolutions.

Fortunately, designing the user interface is really simple using the XAML language, as well as Microsoft Visual Studio Community 2015. A designer can use a set of predefined controls, such as textboxes, checkboxes, images, or buttons. What's more, one can easily arrange controls in various variants, either vertically, horizontally, or in a grid. This is not all; developers can prepare their own controls as well. Such controls can be configured and placed on many pages within the application. It is also possible to prepare dedicated versions of particular pages for various types of devices, such as smartphones and desktops.

You have already learned how to place a new control on a page by dragging it from the Toolbox window. In this article, you will see how to add a control as well as how to programmatically handle controls. Thus, some controls could either change their appearance, or new controls could be added to the page when some specific conditions are met.

Another important question is how to provide the user with a consistent user interface within the whole application. While developing solutions for the Windows 10 operating system, such a task can be easily accomplished by applying styles. In this article, you will learn how to specify both page-limited and application-limited styles that can be applied either to particular controls or to all the controls of a given type.

At the end, you could ask yourself a simple question, "Why should I restrict access to my new awesome application only to people who know a particular language in which the user interface is prepared?" You should not! And in this article, you will also learn how to localize content and present it in various languages. Of course, the localization will use additional resource files, so translations could be prepared not by a developer, but by a specialist who knows the given language well.

Adding a button
When developing applications, you can use a set of predefined controls, among which a button exists. It allows you to handle the event of pressing the button by a user. Of course, the appearance of the button can be easily adjusted, for instance, by choosing a proper background or border, as you will see in this recipe.

The button can present textual content; however, it can also be adjusted to the user's needs, for instance, by choosing a proper color or font size. This is not all, because the content shown on the button need not only be textual. For instance, you can prepare a button that presents an image instead of a text, a text over an image, or a text located next to a small icon that visually informs about the operation. Such modifications are presented in the following part of this recipe as well.

Getting ready
To step through this recipe, you only need the automatically generated project.
How to do it…
Add a button to the page by modifying the content of the MainPage.xaml file, as follows:

<Page (...)>
    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Button Content="Click me!"
                Foreground="#0a0a0a"
                FontWeight="SemiBold"
                FontSize="20"
                FontStyle="Italic"
                Background="LightBlue"
                BorderBrush="RoyalBlue"
                BorderThickness="5"
                Padding="20 10"
                VerticalAlignment="Center"
                HorizontalAlignment="Center" />
    </Grid>
</Page>

Generate a method for handling the event of clicking the button by pressing the button (either in a graphical designer or in the XAML code) and double-clicking on the Click field in the Properties window with the Event handlers for the selected element option (the lightning icon) selected. The automatically generated method is as follows:

private void Button_Click(object sender, RoutedEventArgs e)
{
}

How it works…
In the preceding example, the Button control is placed within a grid. It is centered both vertically and horizontally, as specified by the VerticalAlignment and HorizontalAlignment properties that are set to Center. The background color (Background) is set to LightBlue. The border is specified by two properties, namely BorderBrush and BorderThickness. The first property chooses its color (RoyalBlue), while the other represents its thickness (5 pixels). What's more, the padding (Padding) is set to 20 pixels on the left- and right-hand side and 10 pixels at the top and bottom.

The button presents the Click me! text defined as a value of the Content property. The text is shown in the color #0a0a0a with semi-bold italic font with size 20, as specified by the Foreground, FontWeight, FontStyle, and FontSize properties, respectively.

If you run the application on a local machine, you should see the following result:

It is worth mentioning that the IDE supports a live preview of the designed page. So, you can modify the values of particular properties and have real-time feedback regarding the target appearance directly in the graphical designer. It is a really great feature that does not require you to run the application to see an impact of each introduced change.

There's more…
As already mentioned, even the Button control has many advanced features. For example, you could place an image instead of a text, present a text over an image, or show an icon next to the text. Such scenarios are presented and explained now.

First, let's focus on replacing the textual content with an image by modifying the XAML code that represents the Button control, as follows:

<Button MaxWidth="300"
        VerticalAlignment="Center"
        HorizontalAlignment="Center">
    <Image Source="/Assets/Image.jpg" />
</Button>

Of course, you should also add the Image.jpg file to the Assets directory. To do so, navigate to Add | Existing Item… from the context menu of the Assets node in the Solution Explorer window, shown as follows:

In the Add Existing Item window, choose the Image.jpg file and click on the Add button. As you could see, the previous example uses the Image control. In this recipe, no more information about such a control is presented because it is the topic of one of the next recipes, namely Adding an image. If you run the application now, you should see a result similar to the following:

The second additional example presents a button with a text over an image. To do so, let's modify the XAML code, as follows:

<Button MaxWidth="300"
        VerticalAlignment="Center"
        HorizontalAlignment="Center">
    <Grid>
        <Image Source="/Assets/Image.jpg" />
        <TextBlock Text="Click me!"
Foreground="White" FontWeight="Bold" FontSize="28" VerticalAlignment="Bottom" HorizontalAlignment="Center" Margin="10" /> </Grid> </Button> You'll find more information about the Grid, Image, and TextBlock controls in the next recipes, namely Arranging controls in a grid, Adding an image, and Adding a label. For this reason, the usage of such controls is not explained in the current recipe. If you run the application now, you should see a result similar to the following: As the last example, you will see a button that contains both a textual label and an icon. Such a solution could be accomplished using the StackPanel, TextBlock, and Image controls, as you could see in the following code snippet: <Button Background="#353535" VerticalAlignment="Center" HorizontalAlignment="Center" Padding="20"> <StackPanel Orientation="Horizontal"> <Image Source="/Assets/Icon.png" MaxHeight="32" /> <TextBlock Text="Accept" Foreground="White" FontSize="28" Margin="20 0 0 0" /> </StackPanel> </Button> Of course, you should not forget to add the Icon.png file to the Assets directory, as already explained in this recipe. The result should be similar to the following: Resources for Article: Further resources on this subject: Deployment and DevOps [article] Introduction to C# and .NET [article] Customizing Kernel and Boot Sequence [article]
Introduction to C# and .NET

Packt
11 Nov 2016
17 min read
In this article by Marino Posadas, the author of the book, Mastering C# and .NET Programming, we will cover the core concepts of C# and .NET, starting from the initial version and principal motivations behind its creation, and covering also the new aspects of the language, that appeared in version 2.0 and 3.0. (For more resources related to this topic, see here.) We'll illustrate all the main concepts with small code snippets, short enough to facilitate its understanding and easy reproduction. We will cover the following topics: C# and its role in the Microsoft Development ecosystem Difference between strongly typed and weakly typed languages The evolution in versions 2.0 and 3.0 Generics Extension methods C#: what's different in the language I had the chance to chat with Hejlsberg a couple of times about the C # language and what the initial purposes and requirements imposed in its creation were and which other languages inspired him or contributed to his ideas. The first time we talked, in Tech-Ed 2001 (at Barcelona, Spain), I asked him about the principles of his language and what makes it different from others. He first said that it was not only him who created the language, but also a group of people, especially Scott Wiltamuth, Peter Golde, Peter Sollich, and Eric Gunnerson. One of the first books ever published on the subject was, A Programmer's Introduction to C#, Gunnerson's.E., APress, 2000). About the principles, he mentioned this: One of the key differences between C# and these other languages, particularly Java, is that we tried to stay much closer to C++ in our design. C# borrows most of its operators, keywords, and statements directly from C++. But beyond these more traditional language issues, one of our key design goals was to make the C# language component-oriented, to add to the language itself all of the concepts that you need when you write components. Concepts such as properties, methods, events, attributes, and documentation are all first-class language constructs. He stated also this: When you write code in C#, you write everything in one place. There is no need for header files, IDL files (Interface Definition Language), GUIDs and complicated interfaces. This means that you can write code that is self-descriptive in this way given that you're dealing with a self-contained unit (let's remember the role of the manifest, optionally embedded in assemblies). In this mode, you can also extend existing technologies in a variety of ways, as we'll see in the examples. Languages: strongly typed, weakly typed, dynamic, and static The C# language is a strongly typed language: this means that any attempt to pass a wrong kind of parameter as an argument, or to assign a value to a variable that is not implicitly convertible, will generate a compilation error. This avoids many errors that only happen at runtime in other languages. In addition, by dynamic, we mean those languages whose rules are applied at runtime, while static languages apply their rules at compile time. JavaScript or PHP are good examples of the former case, and C/C++ of the latter. If we make a graphic representation of this situation, we might come up with something like what is shown in the following figure: In the figure, we can see that C# is clearly strongly typed, but it's much more dynamic than C++ or Scala, to mention a few. Of course, there are several criteria to catalog languages for their typing (weak versus strong) and for their dynamism (dynamic versus static). 
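To make the strongly typed behavior concrete, here is a minimal console sketch (the variable names are invented for illustration). The commented-out lines would be rejected by the compiler instead of failing at runtime:

using System;

class StrongTypingDemo
{
    static void Main()
    {
        int count = 42;

        // string title = count;         // compile-time error: cannot implicitly convert 'int' to 'string'
        string title = count.ToString();  // an explicit conversion is required

        // 'var' is still static typing: the type is inferred once, at compile time.
        var total = 10 * 2.5;             // total is inferred as double
        // total = "ten";                 // compile-time error: 'total' already has type double

        Console.WriteLine("{0} {1} {2}", count, title, total);
    }
}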
Note that this has implications in the IDE as well. Editors can tell us which type is expected in every case, and if you use a dynamic declaration such as var, the right side of the equality (if any) will be evaluated, and we will be shown the calculated value for every declaration: Even outside of the .NET world, Visual Studio's IDE is now able to provide strongly typed and Intellisense experiences when using languages such as TypeScript, a superset of JavaScript that transpiles (converts into) pure JavaScript but can be written using the same coding experience as what we would have in C# or any other .NET language. It's available as a separate type of project, if you're curious about it, and the latest up-to-date version is TypeScript 1.8, and it was recently published (you can take a look at a detailed description of its new capabilities at https://blogs.msdn.microsoft.com/typescript/2016/02/22/announcing-typescript-1-8-2/). The main differences So, going back to the title, what made C# different? I'll point out five core points: Everything is an object. Other languages, such as Smalltalk, Lisp, among others, have done this earlier, but due to different reasons, the performance penalty was pretty hard. As you know, it's enough to take a look at the Object Explorer to be able to check where an object comes from. It's a good practice to check the very basic values, such as int or String, which are nothing but aliases of System.Int32 and System.String, and both come from object, as shown in the following screenshot: Using the Boxing and Unboxing techniques, any value type can be converted into an object, and the value of an object can be converted into a simple value type. These conversions are made by simply casting the type to an object (and vice versa) in this manner: // Boxing and Unboxing int y = 3; // this is declared in the stack // Boxing y in a Heap reference z // If we change z, y remains the same. object z = y; // Unboxing y into h (the value of // z is copied to the stack) int h = (int)z; Using Reflection (the technique that allows you to read a component's metadata), an application can call itself or other applications, creating new instances of their containing classes. As a short demo, this simple code launches another instance of a WPF application (a very simple one with just one button, but that doesn't matter): static short counter = 1; private void btnLaunch_Click(object sender, RoutedEventArgs e) { // Establish a reference to this window Type windowType = this.GetType(); // Creates an instance of the Window object objWindow = Activator.CreateInstance(windowType); // cast to a MainWindow type MainWindow aWindow = (MainWindow)objWindow; aWindow.Title = "Reflected Window No: " + (++counter).ToString(); aWindow.Show(); } Now, every time we click on the button, a new instance of the window is created and launched, indicating its creation order in the title's window: You can have access to other components through a technology called Platform Invoke, which means you can call operating systems' functions by importing the existing DLLs using the DllImport attribute: For instance, you can make an external program's window the child of your own window using the SetParent API, which is part of User32.dll, or you can control operating system events, such as trying to shut down the system while our application is still active. Actually, once the permissions are given, your application can call any function located in any of the system's DLL if you need access to native resources. 
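A minimal sketch of a Platform Invoke declaration is shown below. The SetParent import mirrors the User32.dll function mentioned above, and MessageBeep is added only because it is a harmless call to try out; both declarations follow the documented Windows API signatures, but treat the snippet as illustrative rather than as code from the book:

using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    // HWND SetParent(HWND hWndChild, HWND hWndNewParent) from User32.dll
    [DllImport("user32.dll")]
    static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

    // BOOL MessageBeep(UINT uType) from User32.dll
    [DllImport("user32.dll")]
    static extern bool MessageBeep(uint uType);

    static void Main()
    {
        // Plays the default system sound (0 corresponds to MB_OK).
        MessageBeep(0);
    }
}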
The schema that gives us access to these resources looks like what is shown in the following figure: If you want to try out some of these possibilities, the mandatory resource to keep in mind is http://www.PInvoke.net, where you have most of the useful system APIs, with examples of how to use them in C#. These interoperation capabilities are extended to interactions with applications that admit Automation, such as those in the Microsoft Office Suite, AutoCAD, and so on. Finally, unsafe code allows you to write inline C code with pointers, perform unsafe casts, and even pin down memory in order to avoid accidental garbage collection. However, unsafe does not mean that it is unmanaged. Unsafe code is deeply tied into the security system. There are many situations in which this is very useful. It might be an algorithm that's difficult to implement or a method whose execution is so CPU-intensive that performance penalties become unacceptable. While all this is important, I was surprised by the fact that every event handler in C# (as also in other .NET languages) would have two and only two arguments. So, I asked Anders about it, and his answer was one of the most clear and logical ones that I've ever heard. The evolution in versions 2.0 and 3.0 As we see, even from the very beginning, the Hejlsberg's team started with a complete, flexible, and modern platform, capable to be extended in many ways as technology evolves. This intention became clear since version 2.0. The first actual fundamental change that took place in the language was the incorporation of Generics. Don Syme, who would later on lead the team that created the F# language, was very active and led this team as well, so it was ready for version 2.0 of the .NET Framework (not just in C# but in C++ and VB.NET as well). Generics The purpose of generics was mainly to facilitate the creation of more reusable code (one of the principles of OOP, by the way). The name refers to a set of language features that allow classes, structures, interfaces, methods, and delegates to be declared and defined with unspecified or generic type parameters instead of specific types (see https://msdn.microsoft.com/en-us/library/ms379564(v=vs.80).aspx, for more details). So, you can define members in a sort of abstract definition, and later on, at the time of using it, a real, concrete type will be applied. The basic .NET classes (BCL) were enhanced in the System namespace and a new System.Collections.Generic namespace was created to support this new feature in depth. In addition, new support methods were added to ease the use of this new type, such as Type.IsGenericType (obviously, to check types), Type.GetGenericArguments (self-descriptive), and the very useful Type.MakeGenericType, which can create a generic type of any kind from a previous nonspecified declaration. The following code uses the generic type definition for a Dictionary (Dictionary<,>) and creates an actual (build) type using this technique. The relevant code is the following (the rest, including the output to the console is included in Demo_02_03): // Define a generic Dictionary (the // comma is enough for the compiler to infer number of // parameters, but we didn't decide the types yet. 
Type generic = typeof(Dictionary<,>); ShowTypeData(generic); // We define an array of types for the Dictionary (Key, Value) // Key is of type string, and Value is of -this- type (Program) // Notice that types could be -in this case- of any kind Type[] typeArgs = { typeof(string), typeof(Program) }; // Now we use MakeGenericType to create a Type representing // the actualType generic type. Type actualType = generic.MakeGenericType(typeArgs); ShowTypeData(actualType); As you see, MakeGenericType expects an array of (concrete) types. Later on (not in the preceding code), we use GetGenericTypeDefinition, IsGenericType, and GetGenericArguments in order to introspect the resulting types and present the following output in the console: So, we have different ways to declare generics with identical results as far as the operations in the code are concerned. Obviously, manipulating already constructed generic types is not the only possibility, since one of the main goals of generics is to avoid casting operations by simplifying the work with collections. Up until version 2.0, collections could only hold basic types: integers, longs, strings, and so on, along with emulating different types of data structures, such as stacks, queues, linked lists, and so on. Besides this, Generics have another big advantage: you can write methods that support working with different types of arguments (and return values) as long as you provide a correct way to handle all possible cases. Once again, the notion of contract will be crucial here. Creating custom generic types and methods Other useful feature is the possibility to use custom generic types. Generic types and the support for optional values through the System.Nullable<T> type were, for many developers, two of the most important features included in version 2.0 of the language. Imagine you have a Customer class, which your application manages. So, in different use cases, you will read collections of customers and perform operations with them. Now, what if you need an operation such as Compare_Customers? What would be the criteria to use in this case? Even worse, what if we would like to use the same criteria with different types of entities, such as Customer and Provider? In these cases, some characteristics of generics come in handy. To start with, we can build a class that has an implementation of the IComparer interface, so we establish out of any uncertainty what the criteria to be used is in order to consider customer C1 bigger or smaller than customer C2. For instance, if the criteria is only Balance, we can start with a basic Customer class, to which we add a static method in order to generate a list of random customers: public class Customer { public string Name { get; set; } public string Country { get; set; } public int Balance { get; set; } public static string[] Countries = { "US", "UK", "India", "Canada", "China" }; public static List<Customer> customersList(int number) { List<Customer> list = new List<Customer>(); Random rnd = new Random(System.DateTime.Now.Millisecond); for (int i = 1; i <= number; i++) { Customer c = new Customer(); c.Name = Path.GetRandomFileName().Replace(".", ""); c.Country = Countries[rnd.Next(0, 4)]; c.Balance = rnd.Next(0, 100000); list.Add(c); } return list; } } Then, we build another CustomerComparer class, which implements the IComparer interface. 
The difference is that this comparison method is a generic instantiation customized for the Customer objects, so we have the freedom of implementing this scenario just in the way that seems convenient for our logic. In this case, we're using Balance as an ordering criteria, so that we would have the following: public class CustomerComparer : IComparer<Customer> { public int Compare(Customer x, Customer y) { // Implementation of IComparer returns an int // indicating if object x is less than, equal to or // greater than y. if (x.Balance < y.Balance) { return -1; } else if (x.Balance > y.Balance) return 1; else { return 0; } // they're equal } } We can see that the criteria used to compare is just the one we decided for our business logic. Finally, another class, GenericCustomer, which implements an entry point of the application, uses both classes in this manner: public class GenericCustomers { public static void Main() { List<Customer> theList = Customer.customersList(25); CustomerComparer cc = new CustomerComparer(); // Sort now uses our own definition of comparison theList.Sort(cc); Console.WriteLine(" List of customers ordered by Balance"); Console.WriteLine(" " + string.Concat(Enumerable.Repeat("-", 36))); foreach (var item in theList) { Console.WriteLine(" Name: {0}, Country: {1}, t Balance: {2}", item.Name, item.Country, item.Balance); } Console.ReadKey(); } } This produces an output of random customers order by their balance: This is even better: we can change the method so that it supports both customers and providers indistinctly. To do this, we need to abstract a common property of both entities that we can use for comparison. If our implementation of Provider has different or similar fields (but they're not the same), it doesn't matter as long as we have the common factor: a Balance field. So we begin with a simple definition of this common factor, an interface called IPersonBalance: public interface IPersonBalance { int Balance { get; set; } } As long as our Provider class implements this interface, we can later create a common method that's able to compare both objects, so, let's assume our Provider class looks like this: public class Provider : IPersonBalance { public string ProviderName { get; set; } public string ShipCountry { get; set; } public int Balance { get; set; } public static string[] Countries = { "US", "Spain", "India", "France", "Italy" }; public static List<Provider> providersList(int number) { List<Provider> list = new List<Provider>(); Random rnd = new Random(System.DateTime.Now.Millisecond); for (int i = 1; i <= number; i++) { Provider p = new Provider(); p.ProviderName = Path.GetRandomFileName().Replace(".", ""); p.ShipCountry = Countries[rnd.Next(0, 4)]; p.Balance = rnd.Next(0, 100000); list.Add(p); } return list; } } Now, we rewrite the Comparer method to be a GenericComparer class, capable of dealing with both types of entities: public class GenericComparer : IComparer<IPersonBalance> { public int Compare(IPersonBalance x, IPersonBalance y) { if (x.Balance < y.Balance) { return -1; } else if (x.Balance > y.Balance) return 1; else { return 0; } } } Note that in this implementation, IComparer depends on an interface, not on an actual class, and that this interface simply defines the common factor of these entities. 
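The same common factor can also be exploited through a generic method with a constraint, which is another typical way to write this kind of reusable code. The BalanceUtils helper below is not part of the original example; it is only a sketch of the constraint syntax, reusing the IPersonBalance interface defined above:

using System.Collections.Generic;

public static class BalanceUtils
{
    // Works for Customer, Provider, or any future type implementing IPersonBalance.
    public static void SortByBalance<T>(List<T> items) where T : IPersonBalance
    {
        items.Sort((a, b) => a.Balance.CompareTo(b.Balance));
    }
}

With such a helper, a call like BalanceUtils.SortByBalance(providerList) orders the list without creating a separate comparer instance.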
Now, our new entry point will put everything together in order to obtain an ordered list of random Provider classes that uses the common comparison method just created: public static void Main() { List<Provider> providerList = Provider.providersList(25); GenericComparer gc = new GenericComparer(); // Sort now uses our own definition of comparison providerList.Sort(gc); Console.WriteLine(" List of providers ordered by Balance"); Console.WriteLine(" " + ("").PadRight(36, '-')); foreach (var item in providerList) { Console.WriteLine(" ProviderName: {0}, S.Country: {1}, t Balance: {2}", item.ProviderName, item.ShipCountry, item.Balance); } Console.ReadKey(); } In this way, we obtain an output like what is shown in the following figure (note that we didn't take much care of formatting in order to focus on the process): The example shows how generics (and interfaces: also generic) come to our rescue in these type of situations, and—as we'll have the opportunity to prove when talking about implementations of design patterns—this is key to facilitating good practices. So far, some of the most critical concepts behind generics have been discussed. However, the real power comes from joining these capabilities with two new features of the language: lambda expressions and the LINQ syntax. Extension methods Finally, we can extend existing classes' functionality. This means extending even the .NET Framework base types, such as int or String. This is a very useful feature, and it's performed in the way it is recommended by the documentation; no violation of basic principles of OOP occur. The procedure is fairly simple. We need to create a new public static top level (not nested) class containing a public static method with an initial argument declaration especially suited for the compiler to assume that the compiled code will be appended to the actual functionality of the type. The procedure can be used with any class, either belonging to the .NET framework or a customized user or class. Once we have the declaration, its usage is fairly simple, as shown in this code: public static class StringExtension { public static string ExtendedString(this string s) { return "{{ " + s + " }}"; } } Note that the first argument, referred with the this keyword, references the string to be used; so, in this example, we will call the method without any extra arguments (although we can pass as many arguments as we need for other extensions). To put it to work, we just have to add something like this: Console.WriteLine("The word " + "evaluate".ExtendedString() + " is extended"); We will get the extended output with the word enclosed in double brackets: Summary So in this article we saw some of the most relevant enhancements made to the C# language in versions 2 and 3. We started by reviewing the main differences between C# and other languages and understanding the meaning of strongly typed, in this case, together with the concepts of static and dynamic. We followed this up with an examination of the generics feature that appeared in version 2.0 of the framework and analyzed some samples to illustrate some typical use cases, including the creation of custom generic methods. Finally, we covered the extension methods. Resources for Article: Further resources on this subject: Debugging Your .NET Application [article] Why we need Design Patterns? [article] Creating a NHibernate session to access database within ASP.NET [article]
Getting Started with ZeroMQ

Packt
04 Apr 2013
5 min read
(For more resources related to this topic, see here.)

The message queue

A message queue, or technically a FIFO (First In First Out) queue, is a fundamental and well-studied data structure. There are different queue implementations, such as priority queues or double-ended queues, that have different features, but the general idea is that data is added to a queue and fetched when the data or the caller is ready.

Imagine we are using a basic in-memory queue. In case of an issue, such as a power outage or a hardware failure, the entire queue could be lost. Hence, another program that expects to receive a message will not receive any messages. However, adopting a message queue guarantees that messages will be delivered to the destination no matter what happens.

Message queuing enables asynchronous communication between loosely coupled components and also provides solid queuing consistency. In case of insufficient resources, which prevent you from immediately processing the data that is sent, you can queue it up in the message queue server, which stores the data until the destination is ready to accept the messages. Message queuing plays an important role in large-scale distributed systems and enables asynchronous communication.

Let's have a quick overview of the difference between synchronous and asynchronous systems. In ordinary synchronous systems, tasks are processed one at a time. A task is not processed until the task in progress is finished. This is the simplest way to get the job done.

Synchronous system

We could also implement this system with threads. In this case, threads process each task in parallel.

Threaded synchronous system

In the threading model, threads are managed by the operating system itself, on a single processor or on multiple processors/cores. Asynchronous Input/Output (AIO) allows a program to continue its execution while processing input/output requests. AIO is mandatory in real-time applications. By using AIO, we could map several tasks to a single thread.

Asynchronous system

The traditional way of programming is to start a process and wait for it to complete. The downside of this approach is that it blocks the execution of the program while there is a task in progress. AIO, however, takes a different approach: a task that does not depend on the process can still continue.

You may wonder why you would use a message queue instead of handling all processes with a single-threaded or multi-threaded queue approach. Let's consider a scenario where you have a web application, similar to Google Images, in which you let users type some URLs. Once they submit the form, your application fetches all the images from the given URLs. However:

If you use a single-threaded queue, your application would not be able to process all the given URLs if there are too many users.
If you use a multi-threaded queue approach, your application would be vulnerable to a distributed denial of service (DDoS) attack.
You would lose all the given URLs in case of a hardware failure.

In this scenario, you know that you need to add the given URLs to a queue and process them. So, you would need a message queuing system.

Introduction to ZeroMQ

Until now, we have covered what a message queue is, which brings us to the purpose of this article, that is, ZeroMQ. The community describes ZeroMQ as "sockets on steroids". More formally, ZeroMQ is a messaging library that helps developers design distributed and concurrent applications.
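Before looking at what makes ZeroMQ different, it may help to see the plain queue idea from the previous section expressed in code. The following C# sketch only illustrates FIFO ordering and the decoupling of a producer from a consumer through an in-memory queue; it is not ZeroMQ code, and it offers none of the delivery guarantees discussed above:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class QueueDemo
{
    static void Main()
    {
        // Bounded FIFO queue: the producer and consumer only share this queue.
        var queue = new BlockingCollection<string>(boundedCapacity: 100);

        var producer = Task.Run(() =>
        {
            for (int i = 1; i <= 5; i++)
            {
                queue.Add("url-" + i);    // enqueue work items (for example, URLs to fetch)
            }
            queue.CompleteAdding();        // signal that no more items will arrive
        });

        var consumer = Task.Run(() =>
        {
            // Items come out in the order they were added (First In First Out).
            foreach (var item in queue.GetConsumingEnumerable())
            {
                Console.WriteLine("processing " + item);
            }
        });

        Task.WaitAll(producer, consumer);
    }
}

If this process dies, the queue and its contents are gone; a message queuing system exists precisely to keep that queue outside the producing and consuming processes.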
The first thing we need to know about ZeroMQ is that it is not a traditional message queuing system, such as ActiveMQ, WebSphereMQ, or RabbitMQ. ZeroMQ is different: it gives us the tools to build our own message queuing system. It is a library. It runs on different architectures, from ARM to Itanium, and has support for more than 20 programming languages.

Simplicity

ZeroMQ is simple. We can perform asynchronous I/O operations, and ZeroMQ queues the messages in an I/O thread. ZeroMQ's I/O threads are asynchronous when handling network traffic, so it can do the rest of the job for us. If you have worked with sockets before, you will know that they are quite painful to work with. ZeroMQ, however, makes working with sockets easy.

Performance

ZeroMQ is fast. Second Life, for example, managed to get end-to-end latencies of 13.4 microseconds and up to 4,100,000 messages per second. ZeroMQ can use a multicast transport protocol, which is an efficient method of transmitting data to multiple destinations.

The brokerless design

Unlike traditional message queuing systems, ZeroMQ is brokerless. In traditional message queuing systems, there is a central message server (broker) in the middle of the network; every node is connected to this central node, and each node communicates with other nodes via the central broker rather than directly. In a brokerless design, by contrast, applications can communicate with each other directly, without any broker in the middle.

ZeroMQ does not store messages on disk. Please do not even think about it. However, it is possible to use a local swap file to store messages if you set zmq.SWAP.

Summary

This article explained what a message queuing system is, discussed the importance of message queuing, and introduced ZeroMQ to the reader.

Resources for Article:

Further resources on this subject:

RESTful Web Service Implementation with RESTEasy [Article]
BizTalk Server: Standard Message Exchange Patterns and Types of Service [Article]
AJAX Chat Implementation: Part 1 [Article]
Exploring Functions

Packt
16 Jun 2017
12 min read
In this article by Marius Bancila, author of the book Modern C++ Programming Cookbook covers the following recipes: Defaulted and deleted functions Using lambdas with standard algorithms (For more resources related to this topic, see here.) Defaulted and deleted functions In C++, classes have special members (constructors, destructor and operators) that may be either implemented by default by the compiler or supplied by the developer. However, the rules for what can be default implemented are a bit complicated and can lead to problems. On the other hand, developers sometimes want to prevent objects to be copied, moved or constructed in a particular way. That is possible by implementing different tricks using these special members. The C++11 standard has simplified many of these by allowing functions to be deleted or defaulted in the manner we will see below. Getting started For this recipe, you need to know what special member functions are, and what copyable and moveable means. How to do it... Use the following syntax to specify how functions should be handled: To default a function use =default instead of the function body. Only special class member functions that have defaults can be defaulted. struct foo { foo() = default; }; To delete a function use =delete instead of the function body. Any function, including non-member functions, can be deleted. struct foo { foo(foo const &) = delete; }; void func(int) = delete; Use defaulted and deleted functions to achieve various design goals such as the following examples: To implement a class that is not copyable, and implicitly not movable, declare the copy operations as deleted. class foo_not_copiable { public: foo_not_copiable() = default; foo_not_copiable(foo_not_copiable const &) = delete; foo_not_copiable& operator=(foo_not_copiable const&) = delete; }; To implement a class that is not copyable, but it is movable, declare the copy operations as deleted and explicitly implement the move operations (and provide any additional constructors that are needed). class data_wrapper { Data* data; public: data_wrapper(Data* d = nullptr) : data(d) {} ~data_wrapper() { delete data; } data_wrapper(data_wrapper const&) = delete; data_wrapper& operator=(data_wrapper const &) = delete; data_wrapper(data_wrapper&& o) :data(std::move(o.data)) { o.data = nullptr; } data_wrapper& operator=(data_wrapper&& o) { if (this != &o) { delete data; data = std::move(o.data); o.data = nullptr; } return *this; } }; To ensure a function is called only with objects of a specific type, and perhaps prevent type promotion, provide deleted overloads for the function (the example below with free functions can also be applied to any class member functions). template <typename T> void run(T val) = delete; void run(long val) {} // can only be called with long integers How it works... A class has several special members that can be implemented by default by the compiler. These are the default constructor, copy constructor, move constructor, copy assignment, move assignment and destructor. If you don't implement them, then the compiler does it, so that instances of a class can be created, moved, copied and destructed. However, if you explicitly provide one or more, then the compiler will not generate the others according to the following rules: If a user defined constructor exists, the default constructor is not generated by default. If a user defined virtual destructor exists, the default constructor is not generated by default. 
If a user-defined move constructor or move assignment operator exist, then the copy constructor and copy assignment operator are not generated by default. If a user defined copy constructor, move constructor, copy assignment operator, move assignment operator or destructor exist, then the move constructor and move assignment operator are not generated by default. If a user defined copy constructor or destructor exists, then the copy assignment operator is generated by default. If a user-defined copy assignment operator or destructor exists, then the copy constructor is generated by default. Note that the last two are deprecated rules and may no longer be supported by your compiler. Sometimes developers need to provide empty implementations of these special members or hide them in order to prevent the instances of the class to be constructed in a specific manner. A typical example is a class that is not supposed to be copyable. The classical pattern for this is to provide a default constructor and hide the copy constructor and copy assignment operators. While this works, the explicitly defined default constructor makes the class to no longer be considered trivial and therefore a POD type (that can be constructed with reinterpret_cast). The modern alternative to this is using deleted function as shown in the previous section. When the compiler encounters the =default in the definition of a function it will provide the default implementation. The rules for special member functions mentioned earlier still apply. Functions can be declared =default outside the body of a class if and only if they are inlined. class foo     {      public:      foo() = default;      inline foo& operator=(foo const &);     };     inline foo& foo::operator=(foo const &) = default;     When the compiler encounters the =delete in the definition of a function it will prevent the calling of the function. However, the function is still considered during overload resolution and only if the deleted function is the best match the compiler generates an error. For example, giving the previously defined overloads for function run() only calls with long integers are possible. Calls with arguments of any other type, including int, for which an automatic type promotion to long exists, would determine a deleted overload to be considered the best match and therefore the compiler will generate an error: run(42); // error, matches a deleted overload     run(42L); // OK, long integer arguments are allowed     Note that previously declared functions cannot be deleted, as the =delete definition must be the first declaration in a translation unit: void forward_declared_function();     // ...     void forward_declared_function() = delete; // error     The rule of thumb (also known as The Rule of Five) for class special member functions is: if you explicitly define any of copy constructor, move constructor, copy assignment, move assignment or destructor then you must either explicitly define or default all of them. Using lambdas with standard algorithms One of the most important modern features of C++ is lambda expressions, also referred as lambda functions or simply lambdas. Lambda expressions enable us to define anonymous function objects that can capture variables in the scope and be invoked or passed as arguments to functions. Lambdas are useful for many purposes and in this recipe, we will see how to use them with standard algorithms. 
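Before moving into this recipe, here is a compact sketch of the Rule of Five mentioned at the end of the previous one. The buffer_wrapper class and its members are invented for illustration; the point is simply that once the destructor and copy/move operations are user-defined, all five special members are defined (or defaulted) explicitly:

#include <algorithm>
#include <cstddef>
#include <utility>

class buffer_wrapper
{
    int* data;
    std::size_t size;
public:
    explicit buffer_wrapper(std::size_t const n) : data(new int[n]{}), size(n) {}
    ~buffer_wrapper() { delete[] data; }                        // 1. destructor

    buffer_wrapper(buffer_wrapper const & other)                // 2. copy constructor
        : data(new int[other.size]), size(other.size)
    {
        std::copy(other.data, other.data + size, data);
    }

    buffer_wrapper& operator=(buffer_wrapper const & other)     // 3. copy assignment
    {
        if (this != &other)
        {
            buffer_wrapper tmp(other);   // copy-and-swap keeps the assignment exception safe
            std::swap(data, tmp.data);
            std::swap(size, tmp.size);
        }
        return *this;
    }

    buffer_wrapper(buffer_wrapper&& other) noexcept             // 4. move constructor
        : data(other.data), size(other.size)
    {
        other.data = nullptr;
        other.size = 0;
    }

    buffer_wrapper& operator=(buffer_wrapper&& other) noexcept  // 5. move assignment
    {
        if (this != &other)
        {
            delete[] data;
            data = other.data;
            size = other.size;
            other.data = nullptr;
            other.size = 0;
        }
        return *this;
    }
};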
Getting ready In this recipe, we discuss standard algorithms that take an argument that is a function or predicate that is applied to the elements it iterates through. You need to know what unary and binary functions are, and what are predicates and comparison functions. You also need to be familiar with function objects because lambda expressions are syntactic sugar for function objects. How to do it... Prefer to use lambda expressions to pass callbacks to standard algorithms instead of functions or function objects: Define anonymous lambda expressions in the place of the call if you only need to use the lambda in a single place. auto numbers = std::vector<int>{ 0, 2, -3, 5, -1, 6, 8, -4, 9 }; auto positives = std::count_if( std::begin(numbers), std::end(numbers), [](int const n) {return n > 0; }); Define a named lambda, that is, assigned to a variable (usually with the auto specifier for the type), if you need to call the lambda in multiple places. auto ispositive = [](int const n) {return n > 0; }; auto positives = std::count_if( std::begin(numbers), std::end(numbers), ispositive); Use generic lambda expressions if you need lambdas that only differ in their argument types (available since C++14). auto positives = std::count_if( std::begin(numbers), std::end(numbers), [](auto const n) {return n > 0; }); How it works... The non-generic lambda expression shown above takes a constant integer and returns true if it is greater than 0, or false otherwise. The compiler defines an unnamed function object with the call operator having the signature of the lambda expression. struct __lambda_name__     {     bool operator()(int const n) const { return n > 0; }     };     The way the unnamed function object is defined by the compiler depends on the way we define the lambda expression, that can capture variables, use the mutable specifier or exception specifications or may have a trailing return type. The __lambda_name__ function object shown earlier is actually a simplification of what the compiler generates because it also defines a default copy and move constructor, a default destructor, and a deleted assignment operator. It must be well understood that the lambda expression is actually a class. In order to call it, the compiler needs to instantiate an object of the class. The object instantiated from a lambda expression is called a lambda closure. In the next example, we want to count the number of elements in a range that are greater or equal to 5 and less or equal than 10. The lambda expression, in this case, will look like this: auto numbers = std::vector<int>{ 0, 2, -3, 5, -1, 6, 8, -4, 9 };     auto start{ 5 };     auto end{ 10 };     auto inrange = std::count_if(      std::begin(numbers), std::end(numbers),      [start,end](int const n) {return start <= n && n <= end;});     This lambda captures two variables, start and end, by copy (that is, value). The result unnamed function object created by the compiler looks very much like the one we defined above. 
With the default and deleted special members mentioned earlier, the class looks like this:

class __lambda_name_2__
{
  int start_;
  int end_;
public:
  explicit __lambda_name_2__(int const start, int const end) :
    start_(start), end_(end) {}

  __lambda_name_2__(const __lambda_name_2__&) = default;
  __lambda_name_2__(__lambda_name_2__&&) = default;
  __lambda_name_2__& operator=(const __lambda_name_2__&) = delete;
  ~__lambda_name_2__() = default;

  bool operator() (int const n) const
  { return start_ <= n && n <= end_; }
};

The lambda expression can capture variables by copy (or value) or by reference, and different combinations of the two are possible. However, it is not possible to capture a variable multiple times, and it is only possible to have & or = at the beginning of the capture list. A lambda can only capture variables from an enclosing function scope. It cannot capture variables with static storage duration (that is, variables declared in namespace scope or with the static or extern specifier).

The following table shows various combinations of lambda capture semantics:

Lambda            Description
[](){}            Does not capture anything
[&](){}           Captures everything by reference
[=](){}           Captures everything by copy
[&x](){}          Captures only x by reference
[x](){}           Captures only x by copy
[&x...](){}       Captures pack expansion x by reference
[x...](){}        Captures pack expansion x by copy
[&, x](){}        Captures everything by reference, except for x, which is captured by copy
[=, &x](){}       Captures everything by copy, except for x, which is captured by reference
[&, this](){}     Captures everything by reference, except for the pointer this, which is captured by copy (this is always captured by copy)
[x, x](){}        Error, x is captured twice
[&, &x](){}       Error, everything is already captured by reference, so x cannot be captured by reference again
[=, =x](){}       Error, everything is already captured by copy, so x cannot be captured by copy again
[&this](){}       Error, the pointer this is always captured by copy
[&, =](){}        Error, cannot capture everything both by copy and by reference

The general form of a lambda expression, as of C++17, looks like this:

[capture-list](params) mutable constexpr exception attr -> ret { body }

All the parts shown in this syntax are actually optional, except for the capture list (which can, however, be empty) and the body (which can also be empty). The parameter list can be omitted if no parameters are needed. The return type does not need to be specified, as the compiler can infer it from the type of the returned expression. The mutable specifier (which tells the compiler that the lambda can modify variables captured by copy), the constexpr specifier (which tells the compiler to generate a constexpr call operator), and the exception specifiers and attributes are all optional. The simplest possible lambda expression is []{}, though it is often written as [](){}.

There's more...

There are cases when lambda expressions only differ in the type of their arguments. In such cases, the lambdas can be written in a generic way, just like templates, but using the auto specifier for the type parameters (no template syntax is involved).

Summary

Functions are a fundamental concept in programming; regardless of the topic we discuss, we end up writing functions. This article contains recipes related to functions, focusing on the modern language features related to functions and callable objects.
Resources for Article: Further resources on this subject: Understanding the Dependencies of a C++ Application [article] Boost.Asio C++ Network Programming [article] Application Development in Visual C++ - The Tetris Application [article]
Interactive Documents

Packt
02 Nov 2015
5 min read
This article by Julian Hillebrand and Maximilian H. Nierhoff, authors of the book Mastering RStudio for R Development, covers the following topics:

The two main ways to create interactive R Markdown documents
Creating R Markdown and Shiny documents and presentations
Using the ggvis package with R Markdown
Embedding different types of interactive charts in documents
Deploying interactive R Markdown documents

(For more resources related to this topic, see here.)

Creating interactive documents with R Markdown

In this article, we want to focus on the opportunities to create interactive documents with R Markdown and RStudio. This is, of course, particularly interesting for the readers of a document, since it enables them to interact with the document by changing chart types, parameters, values, or other similar things. In principle, there are two ways to make an R Markdown document interactive: firstly, you can use the Shiny web application framework of RStudio; secondly, you can incorporate various interactive chart types by using the corresponding packages.

Using R Markdown and Shiny

Besides building complete web applications, it is also possible to integrate entire Shiny applications into R Markdown documents and presentations. Since we have already learned all the basic functions of R Markdown, and the use and logic of Shiny, we will focus in the following lines on integrating a simple Shiny app into an R Markdown file.

In order for Shiny and R Markdown to work together, the argument runtime: shiny must be added to the YAML header of the file. Of course, the RStudio IDE offers a quick way to create a new Shiny document or presentation. Click on the new file button, choose R Markdown, and in the popup window, select Shiny from the left-hand side menu. In the Shiny menu, you can decide whether you want to start with a Shiny Document option or a Shiny Presentation option:

Shiny Document

After choosing the Shiny Document option, a prefilled .Rmd file opens. It is different from the known R Markdown interface in that there is a Run Document button instead of the knit button and icon. The prefilled .Rmd file produces an R Markdown document with a working and interactive Shiny application. You can change the number of bins in the plot and also adjust the bandwidth. All these changes get rendered in real time, directly in your document.

Shiny Presentation

When you click on Shiny Presentation in the selection menu, a prefilled .Rmd file also opens. Because it is a presentation, the output format is changed to ioslides_presentation in the YAML header. The button in the code pane is now called Run Presentation:

Otherwise, a Shiny Presentation looks just like the normal R Markdown presentations. The Shiny app gets embedded in a slide, and you can again interact with the underlying data of the application:

Disassembling a Shiny R Markdown document

Of course, the question arises: how is it possible to embed a whole Shiny application into an R Markdown document without the two usual basic files, ui.R and server.R? In fact, the rmarkdown package creates an invisible server.R file by extracting the R code from the code chunks. Reactive elements get placed into the index.html file of the HTML output, while the whole R Markdown document acts as the ui.R file.

Embedding interactive charts into R Markdown

The next way is to embed interactive chart types into R Markdown documents by using various R packages that enable us to create interactive charts.
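For instance, a single code chunk like the following embeds a zoomable time series chart directly in the HTML output. It is only a minimal sketch using the dygraphs package (one of the packages listed next) and the ldeaths dataset that ships with R:

```{r, echo=FALSE}
# dygraphs wraps the dygraphs JavaScript library as an htmlwidget;
# printing the object inside a chunk embeds the interactive chart.
library(dygraphs)
dygraph(ldeaths, main = "Monthly deaths from lung diseases in the UK")
```

In general, no runtime: shiny entry is needed for this kind of chart, since the interactivity runs entirely in the browser.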
Some of these packages are as follows:

ggvis
rCharts
googleVis
dygraphs

Therefore, we will not introduce them again, but will instead introduce some more packages that enable us to build interactive charts. They are:

threejs
networkD3
metricsgraphics
plotly

Please keep in mind that the interactivity logically only works with the HTML output of R Markdown.

Using ggvis for interactive R Markdown documents

Broadly speaking, ggvis is the successor of the well-known graphics package ggplot2. The interactivity options of ggvis, which are based on the reactive programming model of the Shiny framework, are also useful for creating interactive R Markdown documents. To create an interactive R Markdown document with ggvis, you need to click on the new file button, then on R Markdown..., choose Shiny in the left menu of the new window, and finally, click on OK to create the document. As mentioned before, since ggvis uses the reactive model of Shiny, we need to create the R Markdown document this way. If you want to include an interactive ggvis plot within a normal R Markdown file, make sure to include the runtime: shiny argument in the YAML header.

As shown, readers of this R Markdown document can easily adjust the bandwidth, and also the kernel model. The interactive controls are created with input_. In our example, we used the controls input_slider() and input_select(). Some of the other controls are input_checkbox(), input_numeric(), and so on. These controls have different arguments, depending on the type of input. For both controls in our example, we used the label argument, which is just a text label shown next to the controls. Other arguments are ID (a unique identifier for the assigned control) and map (a function that remaps the output).

Summary

In this article, we have learned the two main ways to create interactive R Markdown documents. On the one hand, there is the versatile, usable Shiny framework. This includes the built-in Shiny document and presentation options in RStudio, and also the ggvis package, which takes advantage of the Shiny framework to build its interactivity. On the other hand, we introduced several already known, and also some new, R packages that make it possible to create several different types of interactive charts. Most of them achieve this by binding R to existing JavaScript libraries.

Resources for Article:

Further resources on this subject:

Jenkins Continuous Integration [article]
Aspects of Data Manipulation in R [article]
Find Friends on Facebook [article]