
How-To Tutorials - Programming


Creating a JEE Application with EJB

Packt
24 Sep 2015
11 min read
In this article by Ram Kulkarni, author of Java EE Development with Eclipse (Second Edition), we will be using EJBs (Enterprise Java Beans) to implement business logic. This is ideal in scenarios where you want components that process business logic to be distributed across different servers. But that is just one of the advantages of EJB. Even if you use EJBs on the same server as the web application, you may gain from a number of services that the EJB container provides to applications through EJBs. You can specify security constraints for calling EJB methods declaratively (using annotations), and you can also easily specify transaction boundaries (specify which method calls form a part of one transaction) using annotations. In addition to this, the container handles the life cycle of EJBs, including pooling of certain types of EJB objects so that more objects can be created when the load on the application increases.

In this article, we will create the same application using EJBs and deploy it in a GlassFish 4 server. But before that, you need to understand some basic concepts of EJBs.

Types of EJB

EJBs can be of the following types as per the EJB 3 specifications:

- Session bean:
  - Stateful session bean
  - Stateless session bean
  - Singleton session bean
- Message-driven bean

In this article, we will focus on session beans.

Session beans

In general, session beans are meant for containing methods used to execute the main business logic of enterprise applications. Any Plain Old Java Object (POJO) can be annotated with the appropriate EJB-3-specific annotations to make it a session bean. Session beans come in three types, as follows.

Stateful session bean

One stateful session bean serves requests for one client only. There is a one-to-one mapping between the stateful session bean and the client. Therefore, stateful beans can hold state data for the client between multiple method calls. In our CourseManagement application, we can use a stateful bean to hold the Student data (student profile and the courses taken by him/her) after a student logs in. The state maintained by the stateful bean is lost when the server restarts or when the session times out. Since there is one stateful bean per client, using a stateful bean might impact the scalability of the application. We use the @Stateful annotation to create a stateful session bean.

Stateless session bean

A stateless session bean does not hold any state information for any client. Therefore, one session bean can be shared across multiple clients. The EJB container maintains pools of stateless beans, and when a client request comes, it takes out a bean from the pool, executes methods, and returns the bean to the pool. Stateless session beans provide excellent scalability because they can be shared and need not be created for each client. We use the @Stateless annotation to create a stateless session bean.

Singleton session bean

As the name suggests, there is only one instance of a singleton bean class in the EJB container (this is true in the clustered environment too; each EJB container will have an instance of a singleton bean). This means that they are shared by multiple clients, and they are not pooled by EJB containers (because there can be only one instance). Since a singleton session bean is a shared resource, we need to manage concurrency in it. Java EE provides two concurrency management options for singleton session beans: container-managed concurrency and bean-managed concurrency.
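To make the container-managed option concrete before moving on, here is a brief sketch (not taken from the article; the CourseCatalog bean and its methods are invented for illustration) of how @Lock annotations declare the concurrency rules for a singleton:

    import javax.ejb.ConcurrencyManagement;
    import javax.ejb.ConcurrencyManagementType;
    import javax.ejb.Lock;
    import javax.ejb.LockType;
    import javax.ejb.Singleton;

    @Singleton
    @ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
    public class CourseCatalog {
        private int visitCount;

        // Multiple clients may call read-only methods concurrently.
        @Lock(LockType.READ)
        public int getVisitCount() {
            return visitCount;
        }

        // The container gives writers exclusive access to the bean instance.
        @Lock(LockType.WRITE)
        public void recordVisit() {
            visitCount++;
        }
    }

With container-managed concurrency, the container enforces these locks itself; with bean-managed concurrency, the bean code is expected to perform its own synchronization.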
As the sketch above suggests, container-managed concurrency can easily be specified by annotations. See https://docs.oracle.com/javaee/7/tutorial/ejb-basicexamples002.htm#GIPSZ for more information on managing concurrency in a singleton session bean. Using a singleton bean could have an impact on the scalability of the application if there are resource contentions in the code. We use the @Singleton annotation to create a singleton session bean.

Accessing a session bean from the client

Session beans can be designed to be accessed locally (within the same application as the session bean), remotely (from a client running in a different application or JVM), or both. In the case of remote access, session beans are required to implement a remote interface. For local access, session beans can implement a local interface or no interface (the no-interface view of a session bean). Remote and local interfaces that session beans implement are sometimes also called business interfaces, because they typically expose the primary business functionality.

Creating a no-interface session bean

To create a session bean with a no-interface view, create a POJO and annotate it with the appropriate EJB annotation type and @LocalBean. For example, we can create a local Student bean as follows:

    import javax.ejb.LocalBean;
    import javax.ejb.Singleton;

    @Singleton
    @LocalBean
    public class Student {
        ...
    }

Accessing a session bean using dependency injection

You can access session beans by either using the @EJB annotation (for dependency injection) or performing a Java Naming and Directory Interface (JNDI) lookup. EJB containers are required to make the JNDI URLs of EJBs available to clients. Dependency injection of session beans using @EJB works only for managed components, that is, components of the application whose life cycle is managed by the EJB container. When a component is managed by the container, it is created (instantiated) by the container and also destroyed by the container. You do not create managed components using the new operator. The JEE-managed components that support direct injection of EJBs are servlets, managed beans of JSF pages, and EJBs themselves (one EJB can have other EJBs injected into it). Unfortunately, you cannot have a web container injecting EJBs into JSPs or JSP beans. Also, you cannot have EJBs injected into any custom classes that you create and instantiate using the new operator. We can use the Student bean (created previously) from a managed bean of JSF, as follows:

    import javax.ejb.EJB;
    import javax.faces.bean.ManagedBean;

    @ManagedBean
    public class StudentJSFBean {
        @EJB
        private Student studentEJB;
    }

Note that if you create an EJB with a no-interface view, then all the public methods in that EJB will be exposed to the clients. If you want to control which methods can be called by clients, then you should implement the business interface.

Creating a session bean using a local business interface

A business interface for EJB is a simple Java interface with either the @Remote or @Local annotation.
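This article uses only the local view, but for completeness, a remote business interface differs only in its annotation. The following is a small sketch (not from the article; the interface name is invented to mirror the local one used below):

    import java.util.List;
    import javax.ejb.Remote;

    @Remote
    public interface StudentRemote {
        List<CourseDTO> getCourses();
    }

A bean class can implement both a local and a remote interface if the same business methods need to be exposed to both kinds of clients.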
For local access, we can create a local interface for the Student bean as follows:

    import java.util.List;
    import javax.ejb.Local;

    @Local
    public interface StudentLocal {
        public List<CourseDTO> getCourses();
    }

We implement the session bean like this:

    import java.util.List;
    import javax.ejb.Local;
    import javax.ejb.Stateful;

    @Stateful
    @Local
    public class Student implements StudentLocal {
        @Override
        public List<CourseDTO> getCourses() {
            //get courses and return them
            ...
        }
    }

Clients can access the Student EJB only through the local interface:

    import javax.ejb.EJB;
    import javax.faces.bean.ManagedBean;

    @ManagedBean
    public class StudentJSFBean {
        @EJB
        private StudentLocal student;
    }

The session bean can implement multiple business interfaces.

Accessing a session bean using a JNDI lookup

Though accessing an EJB using dependency injection is the easiest way, it works only if the container manages the class that accesses the EJB. If you want to access an EJB from a POJO that is not a managed bean, then dependency injection will not work. Another scenario where dependency injection does not work is when the EJB is deployed in a separate JVM (this could be on a remote server). In such cases, you will have to access the EJB using a JNDI lookup (visit https://docs.oracle.com/javase/tutorial/jndi/ for more information on JNDI).

JEE applications can be packaged in an Enterprise Application Archive (EAR), which contains a .jar file for EJBs and a WAR file for web applications (and the lib folder contains the libraries required for both). If, for example, the name of an EAR file is CourseManagement.ear and the name of an EJB JAR file in it is CourseManagementEJBs.jar, then the name of the application is CourseManagement (the name of the EAR file) and the module name is CourseManagementEJBs. The EJB container uses these names to create a JNDI URL for looking up EJBs. A global JNDI URL for an EJB is created as follows:

    "java:global/<application_name>/<module_name>/<bean_name>![<bean_interface>]"

- java:global: Indicates that it is a global JNDI URL.
- <application_name>: The application name is typically the name of the EAR file.
- <module_name>: This is the name of the EJB JAR.
- <bean_name>: This is the name of the EJB bean class.
- <bean_interface>: This is optional if the EJB has a no-interface view, or if it implements only one business interface. Otherwise, it is the fully qualified name of a business interface.

EJB containers are also required to publish two more variations of JNDI URLs for each EJB. These are not global URLs, which means that they can't be used to access EJBs from clients that are not in the same JEE application (in the same EAR):

    "java:app/[<module_name>]/<bean_name>![<bean_interface>]"
    "java:module/<bean_name>![<bean_interface>]"

The first URL can be used if the EJB client is in the same application, and the second URL can be used if the client is in the same module (the same JAR file as the EJB).

Before you look up any URL in a JNDI server, you need to create an InitialContext that includes, among other things, information such as the hostname of the JNDI server and the port on which it is running.
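For a remote client, those attributes are supplied to the InitialContext as properties. The snippet below is only an assumed example for GlassFish (the server used in this article): the property names and the default port 3700 are GlassFish-specific, and the hostname is a placeholder.

    import java.util.Properties;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class RemoteLookupHelper {
        public static Object lookup(String jndiUrl) throws NamingException {
            Properties props = new Properties();
            // GlassFish-specific ORB properties for a remote JNDI lookup (assumed values)
            props.setProperty("org.omg.CORBA.ORBInitialHost", "remote-host.example.com");
            props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
            InitialContext ctx = new InitialContext(props);
            return ctx.lookup(jndiUrl);
        }
    }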
If you are creating the InitialContext on the same server, then there is no need to specify these attributes:

    InitialContext initCtx = new InitialContext();
    Object obj = initCtx.lookup("jndi_url");

We can use the following JNDI URLs to access a no-interface (LocalBean) Student EJB (assuming that the name of the EAR file is CourseManagement and the name of the JAR file for EJBs is CourseManagementEJBs):

- java:global/CourseManagement/CourseManagementEJBs/Student: The client can be anywhere in the EAR file, because we are using a global URL. Note that we haven't specified the interface name because we are assuming that the Student bean provides a no-interface view in this example.
- java:app/CourseManagementEJBs/Student: The client can be anywhere in the EAR. We skipped the application name because the client is expected to be in the same application. This is because the namespace of the URL is java:app.
- java:module/Student: The client must be in the same JAR file as the EJB.

We can use the following JNDI URLs to access the Student EJB that implements a local interface, StudentLocal:

- java:global/CourseManagement/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal: The client can be anywhere in the EAR file, because we are using a global URL.
- java:global/CourseManagement/CourseManagementEJBs/Student: The client can be anywhere in the EAR. We skipped the interface name because the bean implements only one business interface. Note that the object returned from this call will be of the StudentLocal type, and not Student.
- java:app/CourseManagementEJBs/Student or java:app/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal: The client can be anywhere in the EAR. We skipped the application name because the JNDI namespace is java:app.
- java:module/Student or java:module/Student!packt.jee.book.ch6.StudentLocal: The client must be in the same module (the same JAR file) as the EJB.

Here is an example of how we can call the Student bean with the local business interface from one of the objects (that is not managed by the web container) in our web application:

    InitialContext ctx = new InitialContext();
    StudentLocal student = (StudentLocal) ctx.lookup("java:app/CourseManagementEJBs/Student");
    return student.getCourses(id); //get courses from Student EJB

Creating EAR for Deployment outside Eclipse.

Summary

EJBs are ideal for writing business logic in web applications. They can act as the perfect bridge between web interface components, such as a JSF, servlet, or JSP, and data access objects, such as JDO. EJBs can be distributed across multiple JEE application servers (this could improve application scalability) and their life cycle is managed by the container. EJBs can easily be injected into managed objects or can be looked up using JNDI. Eclipse JEE makes creating and consuming EJBs very easy. The JEE application server GlassFish can also be managed, and applications can be deployed, from within Eclipse.

Resources for Article:

Further resources on this subject:
- Contexts and Dependency Injection in NetBeans [article]
- WebSockets in Wildfly [article]
- Creating Java EE Applications [article]


Getting Inside a C++ Multithreaded Application

Maya Posch
13 Feb 2018
8 min read
This C++ programming tutorial is taken from Maya Posch's Mastering C++ Multithreading. In its most basic form, a multithreaded application consists of a singular process with two or more threads. These threads can be used in a variety of ways, for example, to allow the process to respond to events in an asynchronous manner by using one thread per incoming event or type of event, or to speed up the processing of data by splitting the work across multiple threads. Examples of asynchronous response to events include the processing of user interface (GUI) and network events on separate threads so that neither type of event has to wait on the other, or can block events from being responded to in time. Generally, a single thread performs a single task, such as the processing of GUI or network events, or the processing of data.

For this basic example, the application will start with a singular thread, which will then launch a number of threads, and wait for them to finish. Each of these new threads will perform its own task before finishing. Let's start with the includes and global variables for our application:

    #include <iostream>
    #include <thread>
    #include <mutex>
    #include <vector>
    #include <random>

    using namespace std;

    // --- Globals
    mutex values_mtx;
    mutex cout_mtx;
    vector<int> values;

Both the I/O stream and vector headers should be familiar to anyone who has ever used C++: the former is used here for the standard output (cout), and vector for storing a sequence of values. The random header is new in C++11, and as the name suggests, it offers classes and methods for generating random sequences. We use it here to make our threads do something interesting. Finally, the thread and mutex includes are the core of our multithreaded application; they provide the basic means for creating threads, and allow for thread-safe interactions between them. Moving on, we create two mutexes: one for the global vector and one for cout, since the latter is not thread-safe. Next we create the main function as follows:

    int main() {
        values.push_back(42);

We then push a fixed value onto the vector instance; this one will be used by the threads we create in a moment:

        thread tr1(threadFnc, 1);
        thread tr2(threadFnc, 2);
        thread tr3(threadFnc, 3);
        thread tr4(threadFnc, 4);

We create new threads, and provide them with the name of the method to use, passing along any parameters--in this case, just a single integer:

        tr1.join();
        tr2.join();
        tr3.join();
        tr4.join();

Next, we wait for each thread to finish before we continue by calling join() on each thread instance:

        cout << "Input: " << values[0] << ", Result 1: " << values[1] << ", Result 2: " << values[2] << ", Result 3: " << values[3] << ", Result 4: " << values[4] << "\n";

        return 1;
    }

At this point, we expect that each thread has done whatever it's supposed to do, and added the result to the vector, which we then read out and show the user. Of course, this shows almost nothing of what really happens in the application, mostly just the essential simplicity of using threads.
Next, let's see what happens inside this method that we pass to each thread instance:

    void threadFnc(int tid) {
        cout_mtx.lock();
        cout << "Starting thread " << tid << ".\n";
        cout_mtx.unlock();

        values_mtx.lock();
        int val = values[0];
        values_mtx.unlock();

When we obtain the initial value set in the vector, we copy it to a local variable so that we can immediately release the mutex for the vector to enable other threads to use it:

        int rval = randGen(0, 10);
        val += rval;

These last two lines contain the essence of what the created threads do: they take the initial value, and add a randomly generated value to it. The randGen() method takes two parameters, defining the range of the returned value:

        cout_mtx.lock();
        cout << "Thread " << tid << " adding " << rval << ". New value: " << val << ".\n";
        cout_mtx.unlock();

        values_mtx.lock();
        values.push_back(val);
        values_mtx.unlock();
    }

Finally, we (safely) log a message informing the user of the result of this action before adding the new value to the vector. In both cases, we use the respective mutex to ensure that there can be no overlap with any of the other threads. Once the method reaches this point, the thread containing it will terminate, and the main thread will have one fewer thread to wait for to rejoin.

Lastly, we'll take a look at the randGen() method. Here we can see some multithreading-specific additions as well:

    int randGen(const int& min, const int& max) {
        static thread_local mt19937 generator(hash<thread::id>()(this_thread::get_id()));
        uniform_int_distribution<int> distribution(min, max);
        return distribution(generator);
    }

This preceding method takes a minimum and maximum value as explained earlier, which limit the range of the random numbers this method can return. At its core, it uses an mt19937-based generator, which employs a 32-bit Mersenne Twister algorithm with a state size of 19937 bits. This is a common and appropriate choice for most applications. Of note here is the use of the thread_local keyword. What this means is that even though it is defined as a static variable, its scope will be limited to the thread using it. Every thread will thus create its own generator instance, which is important when using the random number API in the STL. A hash of the internal thread identifier (not our own) is used as a seed for the generator. This ensures that each thread gets a fairly unique seed for its generator instance, allowing for better random number sequences. Finally, we create a new uniform_int_distribution instance using the provided minimum and maximum limits, and use it together with the generator instance to generate the random number, which we return.

Makefile

In order to compile the code described earlier, one could use an IDE, or type the command on the command line. As mentioned in the beginning of this chapter, we'll be using makefiles for the examples in this book. The big advantages of this are that one does not have to repeatedly type in the same extensive command, and it is portable to any system which supports make. The makefile for this example is rather basic (the recipe under the $(OUTPUT) target, lost in this copy, simply runs the compiler with the supplied variables):

    GCC := g++

    OUTPUT := ch01_mt_example
    SOURCES := $(wildcard *.cpp)
    CCFLAGS := -std=c++11

    all: $(OUTPUT)

    $(OUTPUT):
    	$(GCC) -o $(OUTPUT) $(CCFLAGS) $(SOURCES)

    clean:
    	rm $(OUTPUT)

    .PHONY: all

From the top down, we first define the compiler that we'll use (g++), set the name of the output binary (the .exe extension on Windows will be post-fixed automatically), followed by the gathering of the sources and any important compiler flags.
The wildcard feature allows one to collect the names of all files matching the string following it in one go without having to define the name of each source file in the folder individually. For the compiler flags, we're only really interested in enabling the c++11 features, for which GCC still requires one to supply this compiler flag. For the all method, we just tell make to run g++ with the supplied information. Next we define a simple clean method which just removes the produced binary, and finally, we tell make to not interpret any folder or file named all in the folder, but to use the internal method with the .PHONY section. When we run this makefile, we see the following command-line output: $ make Afterwards, we find an executable file called ch01_mt_example (with the .exe extension attached on Windows) in the same folder. Executing this binary will result in a command-line output akin to the following: $ ./ch01_mt_example.exe Starting thread 1. Thread 1 adding 8. New value: 50. Starting thread 2. Thread 2 adding 2. New value: 44. Starting thread 3. Starting thread 4. Thread 3 adding 0. New value: 42. Thread 4 adding 8. New value: 50. Input: 42, Result 1: 50, Result 2: 44, Result 3: 42, Result 4: 50 What one can see here already is the somewhat asynchronous nature of threads and their output. While threads 1 and 2 appear to run synchronously, threads 3 and 4 clearly run asynchronously. For this reason, and especially in longer-running threads, it's virtually impossible to say in which order the log output and results will be returned. While we use a simple vector to collect the results of the threads, there is no saying whether Result 1 truly originates from the thread which we assigned ID 1 in the beginning. If we need this information, we need to extend the data we return by using an information structure with details on the processing thread or similar. One could, for example, use struct like this: struct result { int tid; int result; }; The vector would then be changed to contain result instances rather than integer instances. One could pass the initial integer value directly to the thread as part of its parameters, or pass it via some other way. Want to learn C++ multithreading in detail? You can find Mastering C++ Multithreading here, or explore all our latest C++ eBooks and videos here.


Introducing Kafka

Packt
10 Oct 2013
6 min read
In today's world, real-time information is continuously getting generated by applications (business, social, or any other type), and this information needs easy ways to be reliably and quickly routed to multiple types of receivers. Most of the time, applications that are producing information and applications that are consuming this information are well apart and inaccessible to each other. This, at times, leads to redevelopment of information producers or consumers to provide an integration point between them. Therefore, a mechanism is required for seamless integration of information producers and consumers to avoid any kind of rewriting of an application at either end.

In the present era of big data, the first challenge is to collect the data and the second challenge is to analyze it. As it is a huge amount of data, the analysis typically includes the following and much more:

- User behavior data
- Application performance tracing
- Activity data in the form of logs
- Event messages

Message publishing is a mechanism for connecting various applications with the help of messages that are routed between them, for example, by a message broker such as Kafka. Kafka is a solution to the real-time problems of any software solution, that is, to deal with real-time volumes of information and route it to multiple consumers quickly. Kafka provides seamless integration between information producers and consumers without blocking the producers of the information, and without letting the producers know who the final consumers are.

Apache Kafka is an open source, distributed publish-subscribe messaging system, mainly designed with the following characteristics:

- Persistent messaging: To derive the real value from big data, any kind of information loss cannot be afforded. Apache Kafka is designed with O(1) disk structures that provide constant-time performance even with very large volumes of stored messages, which is on the order of TB.
- High throughput: Keeping big data in mind, Kafka is designed to work on commodity hardware and to support millions of messages per second.
- Distributed: Apache Kafka explicitly supports message partitioning over Kafka servers and distributing consumption over a cluster of consumer machines while maintaining per-partition ordering semantics.
- Multiple client support: The Apache Kafka system supports easy integration of clients from different platforms such as Java, .NET, PHP, Ruby, and Python.
- Real time: Messages produced by the producer threads should be immediately visible to consumer threads; this feature is critical to event-based systems such as Complex Event Processing (CEP) systems.

Kafka provides a real-time publish-subscribe solution, which overcomes the challenges of real-time data usage for consumption, for data volumes that may grow by orders of magnitude, larger than the real data. Kafka also supports parallel data loading in the Hadoop systems.
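To make the publish-subscribe flow concrete, the following is a minimal producer sketch written against the modern Java client (org.apache.kafka.clients), which post-dates this article; the broker address and topic name are assumed placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SimpleProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            // try-with-resources closes the producer and flushes pending messages
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("activity-events", "user-42", "page-visit"));
            }
        }
    }

A consumer subscribes to the same topic and polls for new records, which is the other half of the publish-subscribe flow described here.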
The following diagram shows a typical big data aggregation-and-analysis scenario supported by the Apache Kafka messaging system. At the production side, there are different kinds of producers, such as the following:

- Frontend web applications generating application logs
- Producer proxies generating web analytics logs
- Producer adapters generating transformation logs
- Producer services generating invocation trace logs

At the consumption side, there are different kinds of consumers, such as the following:

- Offline consumers that consume messages and store them in Hadoop or a traditional data warehouse for offline analysis
- Near real-time consumers that consume messages and store them in a NoSQL datastore such as HBase or Cassandra for near real-time analytics
- Real-time consumers that filter messages in an in-memory database and trigger alert events for related groups

Need for Kafka

A large amount of data is generated by companies having any form of web-based presence and activity. Data is one of the newer ingredients in these Internet-based systems. This data typically includes user-activity events corresponding to logins, page visits, clicks, social networking activities such as likes, sharing, and comments, and operational and system metrics. This data is typically handled by logging and traditional log aggregation solutions due to high throughput (millions of messages per second). These traditional solutions are viable for providing logging data to an offline analysis system such as Hadoop. However, they are very limiting for building real-time processing systems.

According to the new trends in Internet applications, activity data has become a part of production data and is used to run analytics in real time. These analytics can be:

- Search based on relevance
- Recommendations based on popularity, co-occurrence, or sentiment analysis
- Delivering advertisements to the masses
- Internet application security from spam or unauthorized data scraping

Real-time usage of these multiple sets of data collected from production systems has become a challenge because of the volume of data collected and processed. Apache Kafka aims to unify offline and online processing by providing a mechanism for parallel load into Hadoop systems as well as the ability to partition real-time consumption over a cluster of machines. Kafka can be compared with Scribe or Flume as it is useful for processing activity stream data; but from the architecture perspective, it is closer to traditional messaging systems such as ActiveMQ or RabbitMQ.

A few Kafka usages

Some of the companies that are using Apache Kafka in their respective use cases are as follows:

- LinkedIn (www.linkedin.com): Apache Kafka is used at LinkedIn for the streaming of activity data and operational metrics. This data powers various products such as LinkedIn news feed and LinkedIn Today in addition to offline analytics systems such as Hadoop.
- DataSift (www.datasift.com): At DataSift, Kafka is used as a collector for monitoring events and as a tracker of users' consumption of data streams in real time.
- Twitter (www.twitter.com): Twitter uses Kafka as part of its Storm stream-processing infrastructure.
- Foursquare (www.foursquare.com): Kafka powers online-to-online and online-to-offline messaging at Foursquare. It is used to integrate Foursquare monitoring and production systems with Foursquare's Hadoop-based offline infrastructure.
- Square (www.squareup.com): Square uses Kafka as a bus to move all system events through Square's various datacenters. This includes metrics, logs, custom events, and so on. On the consumer side, it outputs into Splunk, Graphite, or Esper-like real-time alerting.

The source of the above information is https://cwiki.apache.org/confluence/display/KAFKA/Powered+By.

Summary

In this article, we have seen how companies are evolving the mechanism of collecting and processing application-generated data, and that of utilizing the real power of this data by running analytics over it.

Resources for Article:

Further resources on this subject:
- Apache Felix Gogo [article]
- Hadoop and HDInsight in a Heartbeat [article]
- Advanced Hadoop MapReduce Administration [article]


Design patterns

Packt
21 Jul 2014
5 min read
(For more resources related to this topic, see here.) Design patterns are ways to solve a problem and the way to get your intended result in the best possible manner. So, design patterns are not only ways to create a large and robust system, but they also provide great architectures in a friendly manner. In software engineering, a design pattern is a general repeatable and optimized solution to a commonly occurring problem within a given context in software design. It is a description or template for how to solve a problem, and the solution can be used in different instances. The following are some of the benefits of using design patterns: Maintenance Documentation Readability Ease in finding appropriate objects Ease in determining object granularity Ease in specifying object interfaces Ease in implementing even for large software projects Implements the code reusability concept If you are not familiar with design patterns, the best way to begin understanding is observing the solutions we use for commonly occurring, everyday life problems. Let's take a look at the following image: Many different types of power plugs exist in the world. So, we need a solution that is reusable, optimized, and cheaper than buying a new device for different power plug types. In simple words, we need an adapter. Have a look at the following image of an adapter: In this case, an adapter is the best solution that's reusable, optimized, and cheap. But an adapter does not provide us with a solution when our car's wheel blows out. In object-oriented languages, we the programmers use the objects to do whatever we want to have the outcome we desire. Hence, we have many types of objects, situations, and problems. That means we need more than just one approach to solving different kinds of problems. Elements of design patterns The following are the elements of design patterns: Name: This is a handle we can use to describe the problem Problem: This describes when to apply the pattern Solution: This describes the elements, relationships, responsibilities, and collaborations, in a way that we follow to solve a problem Consequences: This details the results and trade-offs of applying the pattern Classification of design patterns Design patterns are generally divided into three fundamental groups: Creational patterns Structural patterns Behavioral patterns Let's examine these in the following subsections. Creational patterns Creational patterns are a subset of design patterns in the field of software development; they serve to create objects. They decouple the design of an object from its representation. Object creation is encapsulated and outsourced (for example, in a factory) to keep the context of object creation independent from concrete implementation. This is in accordance with the rule: "Program on the interface, not the implementation." 
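As a quick illustration of that rule (a hedged Java sketch, not taken from the article; the Notifier names are invented for the example), a factory returns an interface type so that calling code never names a concrete class:

    // Callers program against this interface...
    interface Notifier {
        void send(String message);
    }

    class EmailNotifier implements Notifier {
        public void send(String message) {
            System.out.println("Emailing: " + message);
        }
    }

    class SmsNotifier implements Notifier {
        public void send(String message) {
            System.out.println("Texting: " + message);
        }
    }

    // ...while the factory encapsulates which concrete class is created.
    class NotifierFactory {
        static Notifier create(String channel) {
            return "sms".equals(channel) ? new SmsNotifier() : new EmailNotifier();
        }
    }

Swapping in a new Notifier implementation only touches the factory; the calling code keeps depending on the interface.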
Some of the features of creational patterns are as follows:

- Generic instantiation: This allows objects to be created in a system without having to identify a specific class type in code (the Abstract Factory and Factory patterns)
- Simplicity: Some of the patterns make object creation easier, so callers will not have to write large, complex code to instantiate an object (the Builder (Manager) and Prototype patterns)
- Creation constraints: Creational patterns can put bounds on who can create objects, how they are created, and when they are created

The following patterns are called creational patterns:

- The Abstract Factory pattern
- The Factory pattern
- The Builder (Manager) pattern
- The Prototype pattern
- The Singleton pattern

Structural patterns

In software engineering, structural patterns facilitate easy ways of communication between various entities. Some examples of structural patterns are as follows:

- Composite: This composes objects into a tree structure (whole-part hierarchies). Composite allows clients to treat individual objects and compositions of objects uniformly.
- Decorator: This dynamically adds responsibilities to an object. A Decorator is a flexible alternative to subclassing for extending functionality.
- Flyweight: This shares small, fine-grained objects to prevent an excessive number of them from being created.
- Adapter: This converts the interface of a class into another interface that the clients expect. Adapter lets classes work together that would normally not be able to because of their different interfaces.
- Facade: This provides a unified interface to the various interfaces of a subsystem. Facade defines a higher-level interface to the subsystem, which is easier to use.
- Proxy: This implements a replacement (surrogate) for another object that controls access to the original object.
- Bridge: This separates an abstraction from its implementation, so that the two can be altered independently.

Behavioral patterns

Behavioral patterns are all about communication between a class' objects. Behavioral patterns are those patterns that are most specifically concerned with communication between objects. The following is a list of the behavioral patterns:

- Chain of Responsibility pattern
- Command pattern
- Interpreter pattern
- Iterator pattern
- Mediator pattern
- Memento pattern
- Observer pattern
- State pattern
- Strategy pattern
- Template pattern
- Visitor pattern

If you want to check out the usage of some patterns in the Laravel core, have a look at the following list:

- The Builder (Manager) pattern: Illuminate\Auth\AuthManager and Illuminate\Session\SessionManager
- The Factory pattern: Illuminate\Database\DatabaseManager and Illuminate\Validation\Factory
- The Repository pattern: Illuminate\Config\Repository and Illuminate\Cache\Repository
- The Strategy pattern: Illuminate\Cache\StoreInterface and Illuminate\Config\LoaderInterface
- The Provider pattern: Illuminate\Auth\AuthServiceProvider and Illuminate\Hash\HashServiceProvider

Summary

In this article, we have explained the fundamentals of design patterns. We've also introduced some design patterns that are used in the Laravel Framework.

Resources for Article:

Further resources on this subject:
- Laravel 4 - Creating a Simple CRUD Application in Hours [article]
- Your First Application [article]
- Creating and Using Composer Packages [article]


Debugging Your .NET Application

Packt
21 Jul 2016
13 min read
In this article by Jeff Martin, author of the book Visual Studio 2015 Cookbook - Second Edition, we will discuss how modern software development still requires developers to identify and correct bugs in their code. The edit-compile-test cycle is as familiar as a text editor, and now the rise of portable devices has added the need to measure battery consumption and optimize for multiple architectures. Fortunately, our development tools continue to evolve to combat this rise in complexity, and Visual Studio continues to improve its arsenal.

Multi-threaded code and asynchronous code are probably the two most difficult areas for most developers to work with, and also the hardest to debug when you have a problem like a race condition. A race condition occurs when multiple threads perform an operation at the same time, and the order in which they execute makes a difference to how the software runs or the output is generated. Race conditions often result in deadlocks, incorrect data being used in other calculations, and random, unrepeatable crashes. The other painful area to debug involves code running on other machines, whether it is running locally on your development machine or running in production. Hooking up a remote debugger in previous versions of Visual Studio has been less than simple, and the experience of debugging code in production was similarly frustrating.

In this article, we will cover the following sections:

- Putting Diagnostic Tools to work
- Maximizing everyday debugging

Putting Diagnostic Tools to work

In Visual Studio 2013, Microsoft debuted a new set of tools called the Performance and Diagnostics hub. With VS2015, these tools have been revised further, and in the case of Diagnostic Tools, promoted to a central presence on the main IDE window, displayed by default during debugging sessions. This is great for us as developers, because now it is easier than ever to troubleshoot and improve our code. In this section, we will explore how Diagnostic Tools can be used to explore our code, identify bottlenecks, and analyze memory usage.

Getting ready

The changes didn't stop when VS2015 was released, and succeeding updates to VS2015 have further refined the capabilities of these tools. So for this section, ensure that Update 2 has been installed on your copy of VS2015. We will be using Visual Studio Community 2015, but of course, you may use one of the premium editions too.

How to do it…

For this section, we will put together a short program that will generate some activity for us to analyze. Create a new C# Console Application, and give it a name of your choice. In your project's new Program.cs file, add the following method that will generate a large quantity of strings:

    static List<string> makeStrings()
    {
        List<string> stringList = new List<string>();
        Random random = new Random();
        for (int i = 0; i < 1000000; i++)
        {
            string x = "String details: " + (random.Next(1000, 100000));
            stringList.Add(x);
        }
        return stringList;
    }

Next we will add a second static method that produces an SHA256-calculated hash of each string that we generated. This method reads in each string that was previously generated, creates an SHA256 hash for it, and returns the list of computed hashes in the hex format.
static List<string> hashStrings(List<string> srcStrings) { List<string> hashedStrings = new List<string>(); SHA256 mySHA256 = SHA256Managed.Create(); StringBuilder hash = new StringBuilder(); foreach (string str in srcStrings) { byte[] srcBytes = mySHA256.ComputeHash(Encoding.UTF8.GetBytes(str), 0, Encoding.UTF8.GetByteCount(str)); foreach (byte theByte in srcBytes) { hash.Append(theByte.ToString("x2")); } hashedStrings.Add(hash.ToString()); hash.Clear(); } mySHA256.Clear(); return hashedStrings; } After adding these methods, you may be prompted to add using statements for System.Text and System.Security.Cryptography. These are definitely needed, so go ahead and take Visual Studio's recommendation to have them added. Now we need to update our Main method to bring this all together. Update your Main method to have the following: static void Main(string[] args) { Console.WriteLine("Ready to create strings"); Console.ReadKey(true); List<string> results = makeStrings(); Console.WriteLine("Ready to Hash " + results.Count() + " strings "); //Console.ReadKey(true); List<string> strings = hashStrings(results); Console.ReadKey(true); } Before proceeding, build your solution to ensure everything is in working order. Now run the application in the Debug mode (F5), and watch how our program operates. By default, the Diagnostic Tools window will only appear while debugging. Feel free to reposition your IDE windows to make their presence more visible or use Ctrl + Alt + F2 to recall it as needed. When you first launch the program, you will see the Diagnostic Tools window appear. Its initial display resembles the following screenshot. Thanks to the first ReadKey method, the program will wait for us to proceed, so we can easily see the initial state. Note that CPU usage is minimal, and memory usage holds constant. Before going any further, click on the Memory Usage tab, and then the Take Snapshot command as indicated in the preceding screenshot. This will record the current state of memory usage by our program, and will be a useful comparison point later on. Once a snapshot is taken, your Memory Usage tab should resemble the following screenshot: Having a forced pause through our ReadKey() method is nice, but when working with real-world programs, we will not always have this luxury. Breakpoints are typically used for situations where it is not always possible to wait for user input, so let's take advantage of the program's current state, and set two of them. We will put one to the second WriteLine method, and one to the last ReadKey method, as shown in the following screenshot: Now return to the open application window, and press a key so that execution continues. The program will stop at the first break point, which is right after it has generated a bunch of strings and added them to our List object. Let's take another snapshot of the memory usage using the same manner given in Step 9. You may also notice that the memory usage displayed in the Process Memory gauge has increased significantly, as shown in this screenshot: Now that we have completed our second snapshot, click on Continue in Visual Studio, and proceed to the next breakpoint. The program will then calculate hashes for all of the generated strings, and when this has finished, it will stop at our last breakpoint. Take another snapshot of the memory usage. Also take notice of how the CPU usage spiked as the hashes were being calculated: Now that we have these three memory snapshots, we will examine how they can help us. 
You may notice how memory usage increases during execution, especially from the initial snapshot to the second. Click on the second snapshot's object delta, as shown in the following screenshot: On clicking, this will open the snapshot details in a new editor window. Click on the Size (Bytes) column to sort by size, and as you may suspect, our List<String> object is indeed the largest object in our program. Of course, given the nature of our sample program, this is fairly obvious, but when dealing with more complex code bases, being able to utilize this type of investigation is very helpful. The following screenshot shows the results of our filter: If you would like to know more about the object itself (perhaps there are multiple objects of the same type), you can use the Referenced Types option as indicated in the preceding screenshot. If you would like to try this out on the sample program, be sure to set a smaller number in the makeStrings() loop, otherwise you will run the risk of overloading your system. Returning to the main Diagnostic Tools window, we will now examine CPU utilization. While the program is executing the hashes (feel free to restart the debugging session if necessary), you can observe where the program spends most of its time: Again, it is probably no surprise that most of the hard work was done in the hashStrings() method. But when dealing with real-world code, it will not always be so obvious where the slowdowns are, and having this type of insight into your program's execution will make it easier to find areas requiring further improvement. When using the CPU profiler in our example, you may find it easier to remove the first breakpoint and simply trigger a profiling by clicking on Break All as shown in this screenshot: How it works... Microsoft wanted more developers to be able to take advantage of their improved technology, so they have increased its availability beyond the Professional and Enterprise editions to also include Community. Running your program within VS2015 with the Diagnostic Tools window open lets you examine your program's performance in great detail. By using memory snapshots and breakpoints, VS2015 provides you with the tools needed to analyze your program's operation, and determine where you should spend your time making optimizations. There's more… Our sample program does not perform a wide variety of tasks, but of course, more complex programs usually perform well. To further assist with analyzing those programs, there is a third option available to you beyond CPU Usage and Memory Usage: the Events tab. As shown in the following screenshot, the Events tab also provides the ability to search events for interesting (or long-running) activities. Different event types include file activity, gestures (for touch-based apps), and program modules being loaded or unloaded. Maximizing everyday debugging Given the frequency of debugging, any refinement to these tools can pay immediate dividends. VS 2015 brings the popular Edit and Continue feature into the 21st century by supporting a 64-bit code. Added to that is the new ability to see the return value of functions in your debugger. The addition of these features combine to make debugging code easier, allowing to solve problems faster. Getting ready For this section, you can use VS 2015 Community or one of the premium editions. Be sure to run your choice on a machine using a 64-bit edition of Windows, as that is what we will be demonstrating in the section. 
Don't worry, you can still use Edit and Continue with 32-bit C# and Visual Basic code. How to do it… Both features are now supported by C#/VB, but we will be using C# for our examples. The features being demonstrated are compiler features, so feel free to use code from one of your own projects if you prefer. To see how Edit and Continue can benefit 64-bit development, perform the following steps: Create a new C# Console Application using the default name. To ensure the demonstration is running with 64-bit code, we need to change the default solution platform. Click on the drop-down arrow next to Any CPU, and select Configuration Manager... When the Configuration Manager dialog opens, we can create a new project platform targeting a 64-bit code. To do this, click on the drop-down menu for Platform, and select <New...>: When <New...> is selected, it will present the New Project Platform dialog box. Select x64 as the new platform type: Once x64 has been selected, you will return to Configuration Manager. Verify that x64 remains active under Platform, and then click on Close to close this dialog. The main IDE window will now indicate that x64 is active: With the project settings out of the face, let's add some code to demonstrate the new behavior. Replace the existing code in your blank class file so that it looks like the following listing: class Program { static void Main(string[] args) { int w = 16; int h = 8; int area = calcArea(w, h); Console.WriteLine("Area: " + area); } private static int calcArea(int width, int height) { return width / height; } } Let's set some breakpoints so that we are able to inspect during execution. First, add a breakpoint to the Main method's Console line. Add a second breakpoint to the calcArea method's return line. You can do this by either clicking on the left side of the editor window's border, or by right-clicking on the line, and selecting Breakpoint | Insert Breakpoint: If you are not sure where to click, use the right-click method, and then practice toggling the breakpoint by left-clicking on the breakpoint marker. Feel free to use whatever method you find most convenient. Once the two breakpoints are added, Visual Studio will mark their location as shown in the following screenshot (the arrow indicates where you may click to toggle the breakpoint): With the breakpoint marker now set, let's debug the program. Begin debugging by either pressing F5, or by clicking on the Start button on the toolbar: Once debugging starts, the program will quickly execute until stopped by the first breakpoint. Let's first take a look at Edit and Continue. Visual Studio will stop at the calcArea method's return line. Astute readers will notice an error (marked by 1 in the following screenshot) present in the calculation, as the area value returned should be width * height. Make the correction. Before continuing, note the variables listed in the Autos window (marked by 2 in the following screenshot). (If you don't see Autos, it can be made visible by pressing Ctrl + D, A, or through Debug | Windows | Autos while debugging.) After correcting the area calculation, advance the debugging step by pressing F10 twice. (Alternatively make the advancement by selecting the menu item Debug | Step Over twice). Visual Studio will advance to the declaration for the area. Note that you were able to edit your code and continue debugging without restarting. 
The Autos window will update to display the function's return value, which is 128 (the value for area has not been assigned yet in the following screenshot—Step Over once more if you would like to see that assigned): There's more… Programmers who write C++ have already had the ability to see the return values of functions—this just brings .NET developers into the fold. The result is that your development experience won't have to suffer based on the language you have chosen to use for your project. The Edit and Continue functionality is also available for ASP.NET projects. New projects created on VS2015 will have Edit and Continue enabled by default. Existing projects imported to VS2015 will usually need this to be enabled if it hasn't been done already. To do so, open the Options dialog via Tools | Options, and look for the Debugging | General section. The following screenshot shows where this option is located on the properties page: Whether you are working with an ASP.NET project or a regular C#/VB .NET application, you can verify Edit and Continue is set via this location. Summary In this article, we examine the improvements to the debugging experience in Visual Studio 2015, and how it can help you diagnose the root cause of a problem faster so that you can fix it properly, and not just patch over the symptoms. Resources for Article:   Further resources on this subject: Creating efficient reports with Visual Studio [article] Creating efficient reports with Visual Studio [article] Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [article]


Entity Framework DB First – Inheritance Relationships between Entities

Packt
02 Mar 2015
19 min read
This article is written by Rahul Rajat Singh, the author of Mastering Entity Framework. So far, we have seen how we can use various approaches of Entity Framework, how we can manage database table relationships, and how to perform model validations using Entity Framework. In this article, we will see how we can implement the inheritance relationship between the entities. We will see how we can change the generated conceptual model to implement the inheritance relationship, and how it will benefit us in using the entities in an object-oriented manner and the database tables in a relational manner. (For more resources related to this topic, see here.) Domain modeling using inheritance in Entity Framework One of the major challenges while using a relational database is to manage the domain logic in an object-oriented manner when the database itself is implemented in a relational manner. ORMs like Entity Framework provide the strongly typed objects, that is, entities for the relational tables. However, it might be possible that the entities generated for the database tables are logically related to each other, and they can be better modeled using inheritance relationships rather than having independent entities. Entity Framework lets us create inheritance relationships between the entities, so that we can work with the entities in an object-oriented manner, and internally, the data will get persisted in the respective tables. Entity Framework provides us three ways of object relational domain modeling using the inheritance relationship: The Table per Type (TPT) inheritance The Table per Class Hierarchy (TPH) inheritance The Table per Concrete Class (TPC) inheritance Let's now take a look at the scenarios where the generated entities are not logically related, and how we can use these inheritance relationships to create a better domain model by implementing inheritance relationships between entities using the Entity Framework Database First approach. The Table per Type inheritance The Table per Type (TPT) inheritance is useful when our database has tables that are related to each other using a one-to-one relationship. This relation is being maintained in the database by a shared primary key. To illustrate this, let's take a look at an example scenario. Let's assume a scenario where an organization maintains a database of all the people who work in a department. Some of them are employees getting a fixed salary, and some of them are vendors who are hired at an hourly rate. This is modeled in the database by having all the common data in a table called Person, and there are separate tables for the data that is specific to the employees and vendors. Let's visualize this scenario by looking at the database schema: The database schema showing the TPT inheritance database schema The ID column for the People table can be an auto-increment identity column, but it should not be an auto-increment identity column for the Employee and Vendors tables. In the preceding figure, the People table contains all the data common to both type of worker. The Employee table contains the data specific to the employees and the Vendors table contains the data specific to the vendors. These tables have a shared primary key and thus, there is a one-to-one relationship between the tables. To implement the TPT inheritance, we need to perform the following steps in our application: Generate the default Entity Data Model. Delete the default relationships. Add the inheritance relationship between the entities. 
Use the entities via the DBContext object. Generating the default Entity Data Model Let's add a new ADO.NET Entity Data Model to our application, and generate the conceptual Entity Model for these tables. The default generated Entity Model will look like this: The generated Entity Data Model where the TPT inheritance could be used Looking at the preceding conceptual model, we can see that Entity Framework is able to figure out the one-to-one relationship between the tables and creates the entities with the same relationship. However, if we take a look at the generated entities from our application domain perspective, it is fairly evident that these entities can be better managed if they have an inheritance relationship between them. So, let's see how we can modify the generated conceptual model to implement the inheritance relationship, and Entity Framework will take care of updating the data in the respective tables. Deleting default relationships The first thing we need to do to create the inheritance relationship is to delete the existing relationship from the Entity Model. This can be done by right-clicking on the relationship and selecting Delete from Model as follows: Deleting an existing relationship from the Entity Model Adding inheritance relationships between entities Once the relationships are deleted, we can add the new inheritance relationships in our Entity Model as follows: Adding inheritance relationships in the Entity Model When we add an inheritance relationship, the Visual Entity Designer will ask for the base class and derived class as follows: Selecting the base class and derived class participating in the inheritance relationship Once the inheritance relationship is created, the Entity Model will look like this: Inheritance relationship in the Entity Model After creating the inheritance relationship, we will get a compile error that the ID property is defined in all the entities. To resolve this problem, we need to delete the ID column from the derived classes. This will still keep the ID column that maps the derived classes as it is. So, from the application perspective, the ID column is defined in the base class but from the mapping perspective, it is mapped in both the base class and derived class, so that the data will get inserted into tables mapped in both the base and derived entities. With this inheritance relationship in place, the entities can be used in an object-oriented manner, and Entity Framework will take care of updating the respective tables for each entity. Using the entities via the DBContext object As we know, DbContext is the primary class that should be used to perform various operations on entities. Let's try to use our SampleDbContext class to create an Employee and a Vendor using this Entity Model and see how the data gets updated in the database: using (SampleDbEntities db = new SampleDbEntities()) { Employee employee = new Employee(); employee.FirstName = "Employee 1"; employee.LastName = "Employee 1"; employee.PhoneNumber = "1234567"; employee.Salary = 50000; employee.EmailID = "employee1@test.com"; Vendor vendor = new Vendor(); vendor.FirstName = "vendor 1"; vendor.LastName = "vendor 1"; vendor.PhoneNumber = "1234567"; vendor.HourlyRate = 100; vendor.EmailID = "vendor1@test.com"; db.Workers.Add(employee); db.Workers.Add(vendor); db.SaveChanges(); } In the preceding code, what we are doing is creating an object of the Employee and Vendor type, and then adding them to People using the DbContext object. 
What Entity Framework will do internally is that it will look at the mappings of the base entity and the derived entities, and then push the respective data into the respective tables. So, if we take a look at the data inserted in the database, it will look like the following: A database snapshot of the inserted data It is clearly visible from the preceding database snapshot that Entity Framework looks at our inheritance relationship and pushes the data into the Person, Employee, and Vendor tables. The Table per Class Hierarchy inheritance The Table per Class Hierarchy (TPH) inheritance is modeled by having a single database table for all the entity classes in the inheritance hierarchy. The TPH inheritance is useful in cases where all the information about the related entities is stored in a single table. For example, using the earlier scenario, let's try to model the database in such a way that it will only contain a single table called Workers to store the Employee and Vendor details. Let's try to visualize this table: A database schema showing the TPH inheritance database schema Now what will happen in this case is that the common fields will be populated whenever we create a type of worker. Salary will only contain a value if the worker is of type Employee. The HourlyRate field will be null in this case. If the worker is of type Vendor, then the HourlyRate field will have a value, and Salary will be null. This pattern is not very elegant from a database perspective. Since we are trying to keep unrelated data in a single table, our table is not normalized. There will always be some redundant columns that contain null values if we use this approach. We should try not to use this pattern unless it is absolutely needed. To implement the TPH inheritance relationship using the preceding table structure, we need to perform the following activities: Generate the default Entity Data Model. Add concrete classes to the Entity Data Model. Map the concrete class properties to their respective tables and columns. Make the base class entity abstract. Use the entities via the DBContext object. Let's discuss this in detail. Generating the default Entity Data Model Let's now generate the Entity Data Model for this table. The Entity Framework will create a single entity, Worker, for this table: The generated model for the table created for implementing the TPH inheritance Adding concrete classes to the Entity Data Model From the application perspective, it would be a much better solution if we have classes such as Employee and Vendor, which are derived from the Worker entity. The Worker class will contain all the common properties, and Employee and Vendor will contain their respective properties. So, let's add new entities for Employee and Vendor. While creating the entity, we can specify the base class entity as Worker, which is as follows: Adding a new entity in the Entity Data Model using a base class type Similarly, we will add the Vendor entity to our Entity Data Model, and specify the Worker entity as its base class entity. Once the entities are generated, our conceptual model will look like this: The Entity Data Model after adding the derived entities Next, we have to remove the Salary and HourlyRate properties from the Worker entity, and put them in the Employee and the Vendor entities respectively. 
So, once the properties are put into the respective entities, our final Entity Data model will look like this: The Entity Data Model after moving the respective properties into the derived entities Mapping the concrete class properties to the respective tables and columns After this, we have to define the column mappings in the derived classes to let the derived classes know which table and column should be used to put the data. We also need to specify the mapping condition. The Employee entity should save the Salary property's value in the Salary column of the Workers table when the Salary property is Not Null and HourlyRate is Null: Table mapping and conditions to map the Employee entity to the respective tables Once this mapping is done, we have to mark the Salary property as Nullable=false in the entity property window. This will let Entity Framework know that if someone is creating an object of the Employee type, then the Salary field is mandatory: Setting the Employee entity properties as Nullable Similarly, the Vendor entity should save the HourlyRate property's value in the HourlyRate column of the Workers table when Salary is Null and HourlyRate is Not Null: Table mapping and conditions to map the Vendor entity to the respective tables And similar to the Employee class, we also have to mark the HourlyRate property as Nullable=false in the Entity Property window. This will help Entity Framework know that if someone is creating an object of the Vendor type, then the HourlyRate field is mandatory: Setting the Vendor entity properties to Nullable Making the base class entity abstract There is one last change needed to be able to use these models: we need to mark the base class as abstract, so that Entity Framework is able to resolve the objects of Employee and Vendor to the Workers table. Making the base class Workers abstract This will also be a better model from the application perspective because the Worker entity itself has no meaning from the application domain perspective. Using the entities via the DBContext object Now we have our Entity Data Model configured to use the TPH inheritance. Let's try to create an Employee object and a Vendor object, and add them to the database using the TPH inheritance hierarchy: using (SampleDbEntities db = new SampleDbEntities()) { Employee employee = new Employee(); employee.FirstName = "Employee 1"; employee.LastName = "Employee 1"; employee.PhoneNumber = "1234567"; employee.Salary = 50000; employee.EmailID = "employee1@test.com"; Vendor vendor = new Vendor(); vendor.FirstName = "vendor 1"; vendor.LastName = "vendor 1"; vendor.PhoneNumber = "1234567"; vendor.HourlyRate = 100; vendor.EmailID = "vendor1@test.com"; db.Workers.Add(employee); db.Workers.Add(vendor); db.SaveChanges(); } In the preceding code, we created objects of the Employee and Vendor types, and then added them to the Workers collection using the DbContext object. Entity Framework will look at the mappings of the base entity and the derived entities, will check the mapping conditions and the actual values of the properties, and then push the data to the respective tables. So, let's take a look at the data inserted in the Workers table: A database snapshot after inserting the data using the Employee and Vendor entities So, we can see that for our Employee and Vendor models, the actual data is being kept in the same table using Entity Framework's TPH inheritance.
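Reading the data back is not covered in the excerpt above, but the inheritance model also gives us polymorphic queries almost for free. The following is a small illustrative sketch (not from the original text) that assumes the same SampleDbEntities context and Workers entity set shown earlier, with System and System.Linq in scope; the standard OfType<T>() LINQ operator lets Entity Framework apply the mapping conditions for the requested concrete type:

using (SampleDbEntities db = new SampleDbEntities())
{
    // All workers, regardless of whether they are employees or vendors
    var allWorkers = db.Workers.ToList();

    // Only the employees; EF adds the discriminating conditions for us
    var employees = db.Workers.OfType<Employee>().ToList();

    // Only the vendors
    var vendors = db.Workers.OfType<Vendor>().ToList();

    foreach (var employee in employees)
    {
        Console.WriteLine("{0} {1} earns {2}", employee.FirstName, employee.LastName, employee.Salary);
    }
}

Because Workers is the common entity set, the same query shape also works for the TPT mapping shown earlier and for the TPC mapping discussed next.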
The Table per Concrete Class inheritance The Table per Concrete Class (TPC) inheritance can be used when the database contains separate tables for all the logical entities, and these tables have some common fields. In our existing example, if there are two separate tables of Employee and Vendor, then the database schema would look like the following: The database schema showing the TPC inheritance database schema One of the major problems in such a database design is the duplication of columns in the tables, which is not recommended from the database normalization perspective. To implement the TPC inheritance, we need to perform the following tasks: Generate the default Entity Data Model. Create the abstract class. Modify the CDSL to cater to the change. Specify the mapping to implement the TPT inheritance. Use the entities via the DBContext object. Generating the default Entity Data Model Let's now take a look at the generated entities for this database schema: The default generated entities for the TPC inheritance database schema Entity Framework has given us separate entities for these two tables. From our application domain perspective, we can use these entities in a better way if all the common properties are moved to a common abstract class. The Employee and Vendor entities will contain the properties specific to them and inherit from this abstract class to use all the common properties. Creating the abstract class Let's add a new entity called Worker to our conceptual model and move the common properties into this entity: Adding a base class for all the common properties Next, we have to mark this class as abstract from the properties window: Marking the base class as abstract class Modifying the CDSL to cater to the change Next, we have to specify the mapping for these tables. Unfortunately, the Visual Entity Designer has no support for this type of mapping, so we need to perform this mapping ourselves in the EDMX XML file. The conceptual schema definition language (CSDL) part of the EDMX file is all set since we have already moved the common properties into the abstract class. So, now we should be able to use these properties with an abstract class handle. The problem will come in the storage schema definition language (SSDL) and mapping specification language (MSL). The first thing that we need to do is to change the SSDL to let Entity Framework know that the abstract class Worker is capable of saving the data in two tables. This can be done by setting the EntitySet name in the EntityContainer tags as follows: <EntityContainer Name="todoDbModelStoreContainer">   <EntitySet Name="Employee" EntityType="Self.Employee" Schema="dbo" store_Type="Tables" />   <EntitySet Name="Vendor" EntityType="Self.Vendor" Schema="dbo" store_Type="Tables" /></EntityContainer> Specifying the mapping to implement the TPT inheritance Next, we need to change the MSL to properly map the properties to the respective tables based on the actual type of object. For this, we have to specify EntitySetMapping. 
The EntitySetMapping should look like the following: <EntityContainerMapping StorageEntityContainer="todoDbModelStoreContainer" CdmEntityContainer="SampleDbEntities">    <EntitySetMapping Name="Workers">   <EntityTypeMapping TypeName="IsTypeOf(SampleDbModel.Vendor)">       <MappingFragment StoreEntitySet="Vendor">       <ScalarProperty Name="HourlyRate" ColumnName="HourlyRate" />       <ScalarProperty Name="EMailId" ColumnName="EMailId" />       <ScalarProperty Name="PhoneNumber" ColumnName="PhoneNumber" />       <ScalarProperty Name="LastName" ColumnName="LastName" />       <ScalarProperty Name="FirstName" ColumnName="FirstName" />       <ScalarProperty Name="ID" ColumnName="ID" />       </MappingFragment>   </EntityTypeMapping>      <EntityTypeMapping TypeName="IsTypeOf(SampleDbModel.Employee)">       <MappingFragment StoreEntitySet="Employee">       <ScalarProperty Name="ID" ColumnName="ID" />       <ScalarProperty Name="Salary" ColumnName="Salary" />       <ScalarProperty Name="EMailId" ColumnName="EMailId" />       <ScalarProperty Name="PhoneNumber" ColumnName="PhoneNumber" />       <ScalarProperty Name="LastName" ColumnName="LastName" />       <ScalarProperty Name="FirstName" ColumnName="FirstName" />       </MappingFragment>   </EntityTypeMapping>   </EntitySetMapping></EntityContainerMapping> In the preceding code, we specified that if the actual type of object is Vendor, then the properties should map to the columns in the Vendor table, and if the actual type of entity is Employee, the properties should map to the Employee table, as shown in the following screenshot: After EDMX modifications, the mapping are visible in Visual Entity Designer If we now open the EDMX file again, we can see the properties being mapped to the respective tables in the respective entities. Doing this mapping from Visual Entity Designer is not possible, unfortunately. Using the entities via the DBContext object Let's use these "entities from our code: using (SampleDbEntities db = new SampleDbEntities()) { Employee employee = new Employee(); employee.FirstName = "Employee 1"; employee.LastName = "Employee 1"; employee.PhoneNumber = "1234567"; employee.Salary = 50000; employee.EMailId = "employee1@test.com"; Vendor vendor = new Vendor(); vendor.FirstName = "vendor 1"; vendor.LastName = "vendor 1"; vendor.PhoneNumber = "1234567"; vendor.HourlyRate = 100; vendor.EMailId = "vendor1@test.com"; db.Workers.Add(employee); db.Workers.Add(vendor); db.SaveChanges(); } In the preceding code, we created objects of the Employee and Vendor types and saved them using the Workers entity set, which is actually an abstract class. If we take a look at the inserted database, we will see the following: Database snapshot of the inserted data using TPC inheritance From the preceding screenshot, it is clear that the data is being pushed to the respective tables. The insert operation we saw in the previous code is successful but there will be an exception in the application. This exception is because when Entity Framework tries to access the values that are in the abstract class, it finds two records with same ID, and since the ID column is specified as a primary key, two records with the same value is a problem in this scenario. This exception clearly shows that the store/database generated identity columns will not work with the TPC inheritance. 
If we want to use the TPC inheritance, then we either need to use GUID based IDs, or pass the ID from the application, or perhaps use some database mechanism that can maintain the uniqueness of auto-generated columns across multiple tables. Choosing the inheritance strategy Now that we know about all the inheritance strategies supported by Entity Framework, let's try to analyze these approaches. The most important thing is that there is no single strategy that will work for all the scenarios. Especially if we have a legacy database. The best option would be to analyze the application requirements and then look at the existing table structure to see which approach is best suited. The Table per Class Hierarchy inheritance tends to give us denormalized tables and have redundant columns. We should only use it when the number of properties in the derived classes is very less, so that the number of redundant columns is also less, and this denormalized structure will not create problems over a period of time. Contrary to TPH, if we have a lot of properties specific to derived classes and only a few common properties, we can use the Table per Concrete Class inheritance. However, in this approach, we will end up with some properties being repeated in all the tables. Also, this approach imposes some limitations such as we cannot use auto-increment identity columns in the database. If we have a lot of common properties that could go into a base class and a lot of properties specific to derived classes, then perhaps Table per Type is the best option to go with. In any case, complex inheritance relationships that become unmanageable in the long run should be avoided. One alternative could be to have separate domain models to implement the application logic in an object-oriented manner, and then use mappers to map these domain models to Entity Framework's generated entity models. Summary In this article, we looked at the various types of inheritance relationship using Entity Framework. We saw how these inheritance relationships can be implemented, and some guidelines on which should be used in which scenario. Resources for Article: Further resources on this subject: Working with Zend Framework 2.0 [article] Hosting the service in IIS using the TCP protocol [article] Applying LINQ to Entities to a WCF Service [article]

How to boost R codes using C++ and Fortran

Pravin Dhandre
18 Apr 2018
16 min read
Sometimes, R code just isn't fast enough. Maybe you've used profiling to figure out where your bottlenecks are, and you've done everything you can think of within R, but your code still isn't fast enough. In those cases, a useful alternative can be to delegate some parts of the implementation to Fortran or C++. This is an advanced technique that can often prove to be quite useful if know how to program in such languages. In today’s tutorial, we will try to explore techniques to boost R codes and calculations using efficient languages like Fortran and C++. Delegating code to other languages can address bottlenecks such as the following: Loops that can't be easily vectorized due to iteration dependencies Processes that involve calling functions millions of times Inefficient but necessary data structures that are slow in R Delegating code to other languages can provide great performance benefits, but it also incurs the cost of being more explicit and careful with the types of objects that are being moved around. In R, you can get away with simple things such as being imprecise about a number being an integer or a real. In these other languages, you can't; every object must have a precise type, and it remains fixed for the entire execution. Boost R codes using an old-school approach with Fortran We will start with an old-school approach using Fortran first. If you are not familiar with it, Fortran is the oldest programming language still under use today. It was designed to perform lots of calculations very efficiently and with very few resources. There are a lot of numerical libraries developed with it, and many high-performance systems nowadays still use it, either directly or indirectly. Here's our implementation, named sma_fortran(). The syntax may throw you off if you're not used to working with Fortran code, but it's simple enough to understand. First, note that to define a function technically known as a subroutine in Fortran, we use the subroutine keyword before the name of the function. As our previous implementations do, it receives the period and data (we use the dataa name with an extra a at the end because Fortran has a reserved keyword data, which we shouldn't use in this case), and we will assume that the data is already filtered for the correct symbol at this point. Next, note that we are sending new arguments that we did not send before, namely smas and n. Fortran is a peculiar language in the sense that it does not return values, it uses side effects instead. This means that instead of expecting something back from a call to a Fortran subroutine, we should expect that subroutine to change one of the objects that was passed to it, and we should treat that as our return value. In this case, smas fulfills that role; initially, it will be sent as an array of undefined real values, and the objective is to modify its contents with the appropriate SMA values. Finally, the n represents the number of elements in the data we send. Classic Fortran doesn't have a way to determine the size of an array being passed to it, and it needs us to specify the size manually; that's why we need to send n. In reality, there are ways to work around this, but since this is not a tutorial about Fortran, we will keep the code as simple as possible. Next, note that we need to declare the type of objects we're dealing with as well as their size in case they are arrays. 
We proceed to declare pos (which takes the place of position in our previous implementation, because Fortran imposes a limit on the length of each line, which we don't want to violate), n, endd (again, end is a keyword in Fortran, so we use the name endd instead), and period as integers. We also declare dataa(n), smas(n), and sma as reals because they will contain decimal parts. Note that we specify the size of the array with the (n) part in the first two objects. Once we have declared everything we will use, we proceed with our logic. We first create a for loop, which is done with the do keyword in Fortran, followed by a unique identifier (which are normally named with multiples of tens or hundreds), the variable name that will be used to iterate, and the values that it will take, endd and 1 to n in this case, respectively. Within the for loop, we assign pos to be equal to endd and sma to be equal to 0, just as we did in some of our previous implementations. Next, we create a while loop with the do…while keyword combination, and we provide the condition that should be checked to decide when to break out of it. Note that Fortran uses a very different syntax for the comparison operators. Specifically, the .lt. operator stand for less-than, while the .ge. operator stands for greater-than-or-equal-to. If any of the two conditions specified is not met, then we will exit the while loop. Having said that, the rest of the code should be self-explanatory. The only other uncommon syntax property is that the code is indented to the sixth position. This indentation has meaning within Fortran, and it should be kept as it is. Also, the number IDs provided in the first columns in the code should match the corresponding looping mechanisms, and they should be kept toward the left of the logic-code. For a good introduction to Fortran, you may take a look at Stanford's Fortran 77 Tutorial (https:/​/​web.​stanford.​edu/​class/​me200c/​tutorial_​77/​). You should know that there are various Fortran versions, and the 77 version is one of the oldest ones. However, it's also one of the better supported ones: subroutine sma_fortran(period, dataa, smas, n) integer pos, n, endd, period real dataa(n), smas(n), sma do 10 endd = 1, n pos = endd sma = 0.0 do 20 while ((endd - pos .lt. period) .and. (pos .ge. 1)) sma = sma + dataa(pos) pos = pos - 1 20 end do if (endd - pos .eq. period) then sma = sma / period else sma = 0 end if smas(endd) = sma 10 continue end Once your code is finished, you need to compile it before it can be executed within R. Compilation is the process of translating code into machine-level instructions. You have two options when compiling Fortran code: you can either do it manually outside of R or you can do it within R. The second one is recommended since you can take advantage of R's tools for doing so. However, we show both of them. The first one can be achieved with the following code: $ gfortran -c sma-delegated-fortran.f -o sma-delegated-fortran.so This code should be executed in a Bash terminal (which can be found in Linux or Mac operating systems). We must ensure that we have the gfortran compiler installed, which was probably installed when R was. Then, we call it, telling it to compile (using the -c option) the sma-delegated-fortran.f file (which contains the Fortran code we showed before) and provide an output file (with the -o option) named sma-delegatedfortran.so. Our objective is to get this .so file, which is what we need within R to execute the Fortran code. 
The way to compile within R, which is the recommended way, is to use the following line: system("R CMD SHLIB sma-delegated-fortran.f") It basically tells R to execute the command that produces a shared library derived from the sma-delegated-fortran.f file. Note that the system() function simply sends the string it receives to a terminal in the operating system, which means that you could have used that same command in the Bash terminal used to compile the code manually. To load the shared library into R's memory, we use the dyn.load() function, providing the location of the .so file we want to use, and to actually call the shared library that contains the Fortran implementation, we use the .Fortran() function. This function requires type checking and coercion to be explicitly performed by the user before calling it. To provide a similar signature as the one provided by the previous functions, we will create a function named sma_delegated_fortran(), which receives the period, symbol, and data parameters as we did before, also filters the data as we did earlier, calculates the length of the data and puts it in n, and uses the .Fortran() function to call the sma_fortran() subroutine, providing the appropriate parameters. Note that we're wrapping the parameters around functions that coerce the types of these objects as required by our Fortran code. The results list created by the .Fortran() function contains the period, dataa, smas, and n objects, corresponding to the parameters sent to the subroutine, with the contents left in them after the subroutine was executed. As we mentioned earlier, we are interested in the contents of the sma object since they contain the values we're looking for. That's why we send only that part back after converting it to a numeric type within R. The transformations you see before sending objects to Fortran and after getting them back is something that you need to be very careful with. For example, if instead of using single(n) and as.single(data), we use double(n) and as.double(data), our Fortran implementation will not work. This is something that can be ignored within R, but it can't be ignored in the case of Fortran: system("R CMD SHLIB sma-delegated-fortran.f") dyn.load("sma-delegated-fortran.so") sma_delegated_fortran <- function(period, symbol, data) { data <- data[which(data$symbol == symbol), "price_usd"] n <- length(data) results <- .Fortran( "sma_fortran", period = as.integer(period), dataa = as.single(data), smas = single(n), n = as.integer(n) ) return(as.numeric(results$smas)) } Just as we did earlier, we benchmark and test for correctness: performance <- microbenchmark( sma_12 <- sma_delegated_fortran(period, symboo, data), unit = "us" ) all(sma_1$sma - sma_12 <= 0.001, na.rm = TRUE) #> TRUE summary(performance)$me In this case, our median time is of 148.0335 microseconds, making this the fastest implementation up to this point. Note that it's barely over half of the time from the most efficient implementation we were able to come up with using only R. Take a look at the following table: Boost R codes using a modern approach with C++ Now, we will show you how to use a more modern approach using C++. The aim of this section is to provide just enough information for you to start experimenting using C++ within R on your own. We will only look at a tiny piece of what can be done by interfacing R with C++ through the Rcpp package (which is installed by default in R), but it should be enough to get you started. 
If you have never heard of C++, it's a language used mostly when resource restrictions play an important role and performance optimization is of paramount importance. Some good resources to learn more about C++ are Meyers' books on the topic, a popular one being Effective C++ (Addison-Wesley, 2005), and specifically for the Rcpp package, Eddelbuettel's Seamless R and C++ Integration with Rcpp by Springer, 2013, is great. Before we continue, you need to ensure that you have a C++ compiler in your system. On Linux, you should be able to use gcc. On Mac, you should install Xcode from the App Store. On Windows, you should install Rtools. Once you test your compiler and know that it's working, you should be able to follow this section. We'll cover more on how to do this in Appendix, Required Packages. C++ is more readable than Fortran code because it follows more syntax conventions we're used to nowadays. However, just because the example we will use is readable, don't think that C++ in general is an easy language to use; it's not. It's a very low-level language and using it correctly requires a good amount of knowledge. Having said that, let's begin. The #include <Rcpp.h> line is used to bring variable and function definitions from R into this file when it's compiled. Literally, the contents of the Rcpp.h file are pasted right where the include statement is. Files ending with the .h extension are called header files, and they are used to provide some common definitions between a code's user and its developers. The using namespace Rcpp line allows you to use shorter names for your function. Instead of having to specify Rcpp::NumericVector, we can simply use NumericVector to define the type of the data object. Doing so in this example may not be too beneficial, but when you start developing more complex C++ code, it will really come in handy. Next, you will notice the // [[Rcpp::export(sma_delegated_cpp)]] code. This is a tag that marks the function right below it so that R knows that it should import it and make it available within R code. The argument sent to export() is the name of the function that will be accessible within R, and it does not necessarily have to match the name of the function in C++. In this case, sma_delegated_cpp() will be the function we call within R, and it will call the smaDelegated() function within C++:

#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export(sma_delegated_cpp)]]
NumericVector smaDelegated(int period, NumericVector data) {
    int position, n = data.size();
    NumericVector result(n);
    double sma;
    for (int end = 0; end < n; end++) {
        position = end;
        sma = 0;
        while(end - position < period && position >= 0) {
            sma = sma + data[position];
            position = position - 1;
        }
        if (end - position == period) {
            sma = sma / period;
        } else {
            sma = NA_REAL;
        }
        result[end] = sma;
    }
    return result;
}

Next, we will explain the actual smaDelegated() function. Since you have a good idea of what it's doing at this point, we won't explain its logic, only the syntax that is not so obvious. The first thing to note is that the function name has a keyword before it, which is the type of the return value for the function. In this case, it's NumericVector, which is provided in the Rcpp.h file. This is an object designed to interface vectors between R and C++. Other types of vector provided by Rcpp are IntegerVector, LogicalVector, and CharacterVector. You also have IntegerMatrix, NumericMatrix, LogicalMatrix, and CharacterMatrix available.
Next, you should note that the parameters received by the function also have types associated with them. Specifically, period is an integer (int), and data is NumericVector, just like the output of the function. In this case, we did not have to pass the output or length objects as we did with Fortran. Since functions in C++ do have output values, it also has an easy enough way of computing the length of objects. The first line in the function declare a variables position and n, and assigns the length of the data to the latter one. You may use commas, as we do, to declare various objects of the same type one after another instead of splitting the declarations and assignments into its own lines. We also declare the vector result with length n; note that this notation is similar to Fortran's. Finally, instead of using the real keyword as we do in Fortran, we use the float or double keyword here to denote such numbers. Technically, there's a difference regarding the precision allowed by such keywords, and they are not interchangeable, but we won't worry about that here. The rest of the function should be clear, except for maybe the sma = NA_REAL assignment. This NA_REAL object is also provided by Rcpp as a way to denote what should be sent to R as an NA. Everything else should result familiar. Now that our function is ready, we save it in a file called sma-delegated-cpp.cpp and use R's sourceCpp() function to bring compile it for us and bring it into R. The .cpp extension denotes contents written in the C++ language. Keep in mind that functions brought into R from C++ files cannot be saved in a .Rdata file for a later session. The nature of C++ is to be very dependent on the hardware under which it's compiled, and doing so will probably produce various errors for you. Every time you want to use a C++ function, you should compile it and load it with the sourceCpp() function at the moment of usage. library(Rcpp) sourceCpp("./sma-delegated-cpp.cpp") sma_delegated_cpp <- function(period, symbol, data) { data <- as.numeric(data[which(data$symbol == symbol), "price_usd"]) return(sma_cpp(period, data)) } If everything worked fine, our function should be usable within R, so we benchmark and test for correctness. I promise this is the last one: performance <- microbenchmark( sma_13 <- sma_delegated_cpp(period, symboo, data), unit = "us" ) all(sma_1$sma - sma_13 <= 0.001, na.rm = TRUE) #> TRUE summary(performance)$median #> [1] 80.6415 This time, our median time was 80.6415 microseconds, which is three orders of magnitude faster than our first implementation. Think about it this way: if you provide an input for sma_delegated_cpp() so that it took around one hour for it to execute, sma_slow_1() would take around 1,000 hours, which is roughly 41 days. Isn't that a surprising difference? When you are in situations that take that much execution time, it's definitely worth it to try and make your implementations as optimized as possible. You may use the cppFunction() function to write your C++ code directly inside an .R file, but you should not do so. Keep that just for testing small pieces of code. Separating your C++ implementation into its own files allows you to use the power of your editor of choice (or IDE) to guide you through the development as well as perform deeper syntax checks for you. You read an excerpt from R Programming By Example authored by Omar Trejo Navarro. This book provides step-by-step guide to build simple-to-advanced applications through examples in R using modern tools. 
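To illustrate the cppFunction() shortcut mentioned above, here is a small hypothetical example (not taken from the book excerpt). It compiles a tiny C++ function passed as a string and exposes it to R immediately, which is handy for quick experiments, even though separate .cpp files compiled with sourceCpp() remain the recommended approach for real code:

library(Rcpp)

# Compile a small C++ function inline and make it callable from R
cppFunction('
double sumC(NumericVector x) {
    double total = 0;
    int n = x.size();
    for (int i = 0; i < n; i++) {
        total += x[i];
    }
    return total;
}
')

sumC(c(1, 2, 3.5))
#> [1] 6.5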

How to Add Frameworks to iOS Applications with Carthage

Fabrizio Brancati
27 Sep 2016
5 min read
With the advent of iOS 8, Apple allowed the option of creating dynamic frameworks. In this post, you will learn how to create a dynamic framework from the ground up, and you will use Carthage to add frameworks to your Apps. Let's get started! Creating Xcode project Open Xcode and create a new project. Select Frameworks & Library under the iOS menu from the templates and then Cocoa Touch Framework. Type a name for your framework and select Swift for the language. Now we will create a framework that helps to store data using NSUserDefaults. We can name it DataStore, which is a generic name, in case we want to expand it in the future to allow for the use of other data stores such as CoreData. The project is now empty and you have to add your first class, so add a new Swift file and name it DataStore, like the framework name. You need to create the class:

public enum DataStoreType {
    case UserDefaults
}

public class DataStore {
    private init() {}

    public static func save(data: AnyObject, forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().setObject(data, forKey: key)
        }
    }

    public static func read(forKey key: String, in store: DataStoreType) -> AnyObject? {
        switch store {
        case .UserDefaults:
            return NSUserDefaults.standardUserDefaults().objectForKey(key)
        }
    }

    public static func delete(forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().removeObjectForKey(key)
        }
    }
}

Here we have created a DataStoreType enum to allow the framework to be expanded in the future, and the DataStore class with the functions to save, read and delete. That's it! You have just created the framework! How to use the framework To use the created framework, build it with CMD + B, right-click on the framework in the Products folder in the Xcode project, and click on Show in Finder. To use it you must drag and drop this file into your project. In this case, we will create an example project to show you how to do it. Add the framework to your App project by adding it in the Embedded Binaries section in the General page of the Xcode project. Note that if you see it duplicated in the Linked Frameworks and Libraries section, you can remove the first one. You have just included your framework in the App. Now we have to use it, so import it (I will import it in the ViewController class for test purposes, but you can include it wherever you want). And let's use the DataStore framework by saving and reading a String from the NSUserDefaults. This is the code:

import UIKit
import DataStore

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        DataStore.save("Test", forKey: "Test", in: .UserDefaults)
        print(DataStore.read(forKey: "Test", in: .UserDefaults)!)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

Build the App and see the framework do its work! You should see this in the Xcode console: Test Now you have created a framework in Swift and you have used it with an App! Note that the framework created for the iOS Simulator is different from the one created for a device, because it is built for a different architecture. To build a universal framework, you can use Carthage, which is shown in the next section.
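Before moving on to Carthage, it is worth sketching why save, read, and delete take a DataStoreType parameter at all. The following is a hypothetical extension (not part of the framework built above) that adds a second, purely illustrative in-memory backend; only the UserDefaults case reflects the real implementation:

import Foundation

public enum DataStoreType {
    case UserDefaults
    case InMemory // hypothetical extra backend, shown for illustration only
}

public class DataStore {
    private init() {}

    // Illustrative in-memory backing store
    private static var inMemoryStore = [String: AnyObject]()

    public static func save(data: AnyObject, forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().setObject(data, forKey: key)
        case .InMemory:
            inMemoryStore[key] = data
        }
    }

    public static func read(forKey key: String, in store: DataStoreType) -> AnyObject? {
        switch store {
        case .UserDefaults:
            return NSUserDefaults.standardUserDefaults().objectForKey(key)
        case .InMemory:
            return inMemoryStore[key]
        }
    }
}

Callers stay unchanged apart from the store argument they pass, which is the main benefit of routing every operation through the enum.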
Using Carthage Carthage is a decentralized dependency manager that builds your dependencies and provides you with binary frameworks. To install it you can download the Carthage.pkg file from GitHub or with Homebrew: brew update brew install carthage Because Carthage is only able to build a framework from Git, we will use Alamofire, a popular HTTP networking library available on GitHub. Open the project folder and create a file named Cartfile. Here is where we will tell Carthage what it has to build and in what version: github "Alamofire/Alamofire" We don't specify a version because this is only a test, but specifying one is good practice. Now open the Terminal App, go into the project folder, and type: carthage update You should see Carthage do its work; when it has finished, use Finder to go to the project folder, then Carthage, Build, iOS, and that is where the framework is. To add it to the App, we have to do more work than what we have done before. Drag and drop the framework from the Carthage/Build/iOS folder into the Linked Frameworks and Libraries section on the General settings tab of the Xcode project. On the Build Phases tab, click on the + icon and choose New Run Script Phase with the following script: /usr/local/bin/carthage copy-frameworks Now you can add the paths of the frameworks under Input Files, which in this case is: $(SRCROOT)/FrameworkTest/Carthage/Build/iOS/Alamofire.framework This script works around an App Store submission bug triggered by universal binaries and ensures that the necessary bitcode-related files and dSYMs are copied when archiving. Now you only have to import the framework in your Swift file and use it like we did earlier in this post! Summary In this post, you learned how to create a custom framework for creating shared code between your apps, along with the creation of a GitHub repository to share your open source framework with the community of developers. You also learned how to use Carthage for your GitHub repository, or with a popular framework like Alamofire, and how to import it in your Apps. About The Author Fabrizio Brancati is a mobile app developer and web developer currently working and living in Milan, Italy, with a passion for innovation and discovering new things. He develops with Objective-C for iOS 3 and iPod touch. When Swift came out, he learned it and was so excited that he remade an Objective-C framework available on GitHub in Swift (BFKit / BFKit-Swift). Software development is his driving passion, and he loves when others make use of his software.

Layout Management for Python GUI

Packt
05 Apr 2017
13 min read
In this article written by Burkhard A. Meier, the author of the book Python GUI Programming Cookbook - Second Edition, we will lay out our GUI using Python 3.6 and above: Arranging several labels within a label frame widget Using padding to add space around widgets How widgets dynamically expand the GUI Aligning the GUI widgets by embedding frames within frames (For more resources related to this topic, see here.) In this article, we will explore how to arrange widgets within widgets to create our Python GUI. Learning the fundamentals of GUI layout design will enable us to create great-looking GUIs. There are certain techniques that will help us to achieve this layout design. The grid layout manager is one of the most important layout tools built into tkinter that we will be using. We can very easily create menu bars, tabbed controls (aka Notebooks) and many more widgets using tkinter. Arranging several labels within a label frame widget The LabelFrame widget allows us to design our GUI in an organized fashion. We are still using the grid layout manager as our main layout design tool, yet by using LabelFrame widgets we get much more control over our GUI design. Getting ready We are starting to add more and more widgets to our GUI, and we will make the GUI fully functional in the coming recipes. Here we are starting to use the LabelFrame widget. Add the following code just above the main event loop towards the bottom of the Python module. Running the code will result in the GUI looking like this: Uncomment line 111 and notice the different alignment of the LabelFrame. We can easily align the labels vertically by changing our code, as shown next. Note that the only change we had to make was in the column and row numbering. Now the GUI LabelFrame looks as such: How it works... In line 109 we create our first ttk LabelFrame widget and assign the resulting instance to the variable buttons_frame. The parent container is win, our main window. In lines 114-116, we create labels and place them in the LabelFrame. buttons_frame is the parent of the labels. We are using the important grid layout tool to arrange the labels within the LabelFrame. The column and row properties of this layout manager give us the power to control our GUI layout. The parent of our labels is the buttons_frame instance variable of the LabelFrame, not the win instance variable of the main window. We can see the beginning of a layout hierarchy here. We can see how easy it is to change our layout via the column and row properties. Note how we change the column to 0, and how we layer our labels vertically by numbering the row values sequentially. The name ttk stands for themed tk. The tk-themed widget set was introduced in Tk 8.5. There's more... In a recipe later in this article we will embed LabelFrame widgets within LabelFrame widgets, nesting them to control our GUI layout. Using padding to add space around widgets Our GUI is being created nicely. Next, we will improve the visual aspects of our widgets by adding a little space around them, so they can breathe. Getting ready While tkinter might have had a reputation for creating ugly GUIs, this has dramatically changed since version 8.5. You just have to know how to use the tools and techniques that are available. That's what we will do next. tkinter version 8.6 ships with Python 3.6. How to do it... The procedural way of adding spacing around widgets is shown first, and then we will use a loop to achieve the same thing in a much better way.
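The code listings for these recipes appear as screenshots in the original book and are not part of this excerpt, so here is a small reconstructed sketch of the two approaches just mentioned. The widget name buttons_frame and the 20/40 pixel values follow the description below, while the surrounding window setup and the child padding values are assumptions made only so the snippet runs on its own:

import tkinter as tk
from tkinter import ttk

win = tk.Tk()
win.title("Python GUI")

# A LabelFrame holding three labels, arranged with the grid layout manager
buttons_frame = ttk.LabelFrame(win, text=' Labels in a Frame ')
# Procedural approach: hard-code padding when placing the frame itself
buttons_frame.grid(column=0, row=7, padx=20, pady=40)

for row, text in enumerate(['Label1', 'Label2', 'Label3']):
    ttk.Label(buttons_frame, text=text).grid(column=0, row=row)

# Loop approach: add spacing around every child of the frame at once
for child in buttons_frame.winfo_children():
    child.grid_configure(padx=8, pady=4)

win.mainloop()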
Our LabelFrame looks a bit tight as it blends into the main window towards the bottom. Let's fix this now. Modify line 110 by adding padx and pady: And now our LabelFrame got some breathing space: How it works... In tkinter, adding space horizontally and vertically is done by using the built-in properties named padx and pady. These can be used to add space around many widgets, improving horizontal and vertical alignments, respectively. We hard-coded 20 pixels of space to the left and right of the LabelFrame, and we added 40 pixels to the top and bottom of the frame. Now our LabelFrame stands out more than it did before. The screenshot above only shows the relevant change. We can use a loop to add space around the labels contained within the LabelFrame: Now the labels within the LabelFrame widget have some space around them too: The grid_configure() function enables us to modify the UI elements before the main loop displays them. So, instead of hard-coding values when we first create a widget, we can work on our layout and then arrange spacing towards the end of our file, just before the GUI is being created. This is a neat technique to know. The winfo_children() function returns a list of all the children belonging to the buttons_frame variable. This enables us to loop through them and assign the padding to each label. One thing to notice is that the spacing to the right of the labels is not really visible. This is because the title of the LabelFrame is longer than the names of the labels. We can experiment with this by making the names of the labels longer. Now our GUI looks like this. Note how there is now some space added to the right of the long label next to the dots. The last dot does not touch the LabelFrame which it otherwise would without the added space. We can also remove the name of the LabelFrame to see the effect padx has on positioning our labels. By setting the text property to an empty string, we remove the name that was previously displayed for the LabelFrame. How widgets dynamically expand the GUI You probably noticed in previous screenshots and by running the code that widgets have a capability to extend themselves to the space they need to visually display their text. Java introduced the concept of dynamic GUI layout management. In comparison, visual development IDEs like VS.NET lay out the GUI in a visual manner, and are basically hard-coding the x and y coordinates of UI elements. Using tkinter, this dynamic capability creates both an advantage and a little bit of a challenge, because sometimes our GUI dynamically expands when we would prefer it rather not to be so dynamic! Well, we are dynamic Python programmers so we can figure out how to make the best use of this fantastic behavior! Getting ready At the beginning of the previous recipe we added a LabelFrame widget. This moved some of our controls to the center of column 0. We might not wish this modification to our GUI layout. Next we will explore some ways to solve this. How to do it... Let us first become aware of the subtle details that are going on in our GUI layout in order to understand it better. We are using the grid layout manager widget and it lays out our widgets in a zero-based grid. This is very similar to an Excel spreadsheet or a database table. 
Grid Layout Manager Example with 2 Rows and 3 Columns: Row 0; Col 0 Row 0; Col 1 Row 0; Col 2 Row 1; Col 0 Row 1; Col 1 Row 1; Col 2 Using the grid layout manager, what is happening is that the width of any given column is determined by the longest name or widget in that column. This affects all rows. By adding our LabelFrame widget and giving it a title that is longer than some hard-coded size widget like the top-left label and the text entry below it, we dynamically move those widgets to the center of column 0, adding space to the left and right sides of those widgets. Incidentally, because we used the sticky property for the Checkbutton and ScrolledText widgets, those remain attached to the left side of the frame. Let's look in more detail at the screenshot from the first recipe of this article. We added the following code to create the LabelFrame and then placed labels into this frame. Since the text property of the LabelFrame, which is displayed as the title of the LabelFrame, is longer than both our Enter a name: label and the text box entry below it, those two widgets are dynamically centered within the new width of column 0. The Checkbutton and Radiobutton widgets in column 0 did not get centered because we used the sticky=tk.W property when we created those widgets. For the ScrolledText widget we used sticky=tk.WE, which binds the widget to both the west (aka left) and east (aka right) side of the frame. Let's remove the sticky property from the ScrolledText widget and observe the effect this change has. Now our GUI has new space around the ScrolledText widget both on the left and right sides. Because we used the columnspan=3 property, our ScrolledText widget still spans all three columns. If we remove columnspan=3 we get the following GUI which is not what we want. Now our ScrolledText only occupies column 0 and, because of its size, it stretches the layout. One way to get our layout back to where we were before adding the LabelFrame is to adjust the grid column position. Change the column value from 0 to 1. Now our GUI looks like this: How it works... Because we are still using individual widgets, our layout can get messed up. By moving the column value of the LabelFrame from 0 to 1, we were able to get the controls back to where they used to be and where we prefer them to be. At least the left-most label, text, Checkbutton, ScrolledText, and Radiobutton widgets are now located where we intended them to be. The second label and text Entry located in column 1 aligned themselves to the center of the length of the Labels in a Frame widget so we basically moved our alignment challenge one column to the right. It is not so visible because the size of the Choose a number: label is almost the same as the size of the Labels in a Frame title, and so the column width was already close to the new width generated by the LabelFrame. There's more... In the next recipe we will embed frames within frames to avoid the accidental misalignment of widgets we just experienced in this recipe. Aligning the GUI widgets by embedding frames within frames We have much better control of our GUI layout if we embed frames within frames. This is what we will do in this recipe. Getting ready The dynamic behavior of Python and its GUI modules can create a little bit of a challenge to really get our GUI looking the way we want. Here we will embed frames within frames to get more control of our layout. 
This will establish a stronger hierarchy among the different UI elements, making the visual appearance easier to achieve. We will continue to use the GUI we created in the previous recipe. How to do it... Here, we will create a top-level frame that will contain other frames and widgets. This will help us to get our GUI layout just the way we want. In order to do so, we will have to embed our current controls within a central ttk.LabelFrame. This ttk.LabelFrame is a child of the main parent window and all controls will be children of this ttk.LabelFrame. Up to this point in our recipes we have assigned all widgets to our main GUI frame directly. Now we will only assign our LabelFrame to our main window and after that we will make this LabelFrame the parent container for all the widgets. This creates the following hierarchy in our GUI layout: In this diagram, win is the variable that holds a reference to our main GUI tkinter window frame; mighty is the variable that holds a reference to our LabelFrame and is a child of the main window frame (win); and Label and all other widgets are now placed into the LabelFrame container (mighty). Add the following code towards the top of our Python module: Next we will modify all the following controls to use mighty as the parent, replacing win. Here is an example of how to do this: Note how all the widgets are now contained in the Mighty Python LabelFrame which surrounds all of them with a barely visible thin line. Next, we can reset the Labels in a Frame widget to the left without messing up our GUI layout: Oops - maybe not. While our frame within another frame aligned nicely to the left, it again pushed our top widgets into the center (a default). In order to align them to the left, we have to force our GUI layout by using the sticky property. By assigning it 'W' (West) we can control the widget to be left-aligned. How it works... Note how we aligned the label, but not the text box below it. We have to use the sticky property for all the controls we want to left-align. We can do that in a loop, using the winfo_children() and grid_configure(sticky='W') properties, as we did before in the second recipe of this article. The winfo_children() function returns a list of all the children belonging to the parent. This enables us to loop through all of the widgets and change their properties. Using tkinter to force left, right, top, bottom the naming is very similar to Java: west, east, north and south, abbreviated to: 'W' and so on. We can also use the following syntax: tk.W instead of 'W'. This requires having imported the tkinter module aliased as tk. In a previous recipe we combined both 'W' and 'E' to make our ScrolledText widget attach itself both to the left and right sides of its container using 'WE'. We can add more combinations: 'NSE' will stretch our widget to the top, bottom and right side. If we have only one widget in our form, for example a button, we can make it fill in the entire frame by using all options: 'NSWE'. We can also use tuple syntax: sticky=(tk.N, tk.S, tk.W, tk.E). Let's align the entry in column 0 to the left. Now both the label and the  Entry are aligned towards the West (left). In order to separate the influence that the length of our Labels in a Frame LabelFrame has on the rest of our GUI layout we must not place this LabelFrame into the same LabelFrame as the other widgets but assign it directly to the main GUI form (win). 
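As with the earlier recipes, the code for this one lives in screenshots that are not included here, so the following is a brief reconstructed sketch of the layering and alignment being described. The names win and mighty and the use of sticky come from the text; the particular label and entry widgets are assumed examples:

import tkinter as tk
from tkinter import ttk

win = tk.Tk()
win.title("Python GUI")

# Top-level LabelFrame that becomes the parent of all other widgets
mighty = ttk.LabelFrame(win, text=' Mighty Python ')
mighty.grid(column=0, row=0, padx=8, pady=4)

# Children are created with mighty (not win) as their parent
a_label = ttk.Label(mighty, text="Enter a name:")
a_label.grid(column=0, row=0, sticky='W')        # force left (west) alignment

name = tk.StringVar()
name_entered = ttk.Entry(mighty, width=12, textvariable=name)
name_entered.grid(column=0, row=1, sticky=tk.W)  # tk.W is equivalent to 'W'

# Left-align every child of the frame in one pass
for child in mighty.winfo_children():
    child.grid_configure(sticky='W')

win.mainloop()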
Summary We have learned layout management for Python GUI using the following recipes: Arranging several labels within a label frame widget Using padding to add space around widgets How widgets dynamically expand the GUI Aligning the GUI widgets by embedding frames within frames Resources for Article: Further resources on this subject: Python Scripting Essentials [article] An Introduction to Python Lists and Dictionaries [article] Test all the things with Python [article]

Mocking static methods (Simple)

Packt
30 Oct 2013
7 min read
(For more resources related to this topic, see here.) Getting ready The use of static methods is usually considered a bad Object Oriented Programming practice, but if we end up in a project that uses a pattern such as active record (see http://en.wikipedia.org/wiki/Active_record_pattern), we will end up having a lot of static methods. In such situations, we will need to write some unit tests and PowerMock could be quite handy. Start your favorite IDE (which we set up in the Getting and installing PowerMock (Simple) recipe), and let's fire away. How to do it... We will start where we left off. In the EmployeeService.java file, we need to implement the getEmployeeCount method; currently it throws an instance of UnsupportedOperationException. Let's implement the method in the EmployeeService class; the updated classes are as follows: /** * This class is responsible to handle the CRUD * operations on the Employee objects. * @author Deep Shah */ public class EmployeeService { /** * This method is responsible to return * the count of employees in the system. * It does it by calling the * static count method on the Employee class. * @return Total number of employees in the system. */ public int getEmployeeCount() { return Employee.count(); } } /** * This is a model class that will hold * properties specific to an employee in the system. * @author Deep Shah */ public class Employee { /** * The method that is responsible to return the * count of employees in the system. * @return The total number of employees in the system. * Currently this * method throws UnsupportedOperationException. */ public static int count() { throw new UnsupportedOperationException(); } } The getEmployeeCount method of EmployeeService calls the static method count of the Employee class. This method in turn throws an instance of UnsupportedOperationException. To write a unit test of the getEmployeeCount method of EmployeeService, we will need to mock the static method count of the Employee class. Let's create a file called EmployeeServiceTest.java in the test directory. This class is as follows: /** * The class that holds all unit tests for * the EmployeeService class. * @author Deep Shah */ @RunWith(PowerMockRunner.class) @PrepareForTest(Employee.class) public class EmployeeServiceTest { @Test public void shouldReturnTheCountOfEmployeesUsingTheDomainClass() { PowerMockito.mockStatic(Employee.class); PowerMockito.when(Employee.count()).thenReturn(900); EmployeeService employeeService = new EmployeeService(); Assert.assertEquals(900, employeeService.getEmployeeCount()); } } If we run the preceding test, it passes. The important things to notice are the two annotations (@RunWith and @PrepareForTest) at the top of the class, and the call to the PowerMockito.mockStatic method. The @RunWith(PowerMockRunner.class) statement tells JUnit to execute the test using PowerMockRunner. The @PrepareForTest(Employee.class) statement tells PowerMock to prepare the Employee class for tests. This annotation is required when we want to mock final classes or classes with final, private, static, or native methods. The PowerMockito.mockStatic(Employee.class) statement tells PowerMock that we want to mock all the static methods of the Employee class. The next statements in the code are pretty standard, and we have looked at them earlier in the Saying Hello World! (Simple) recipe. We are basically setting up the static count method of the Employee class to return 900.
Let's look at one more example of mocking a static method; but this time, let's mock a static method that returns void. We want to add another method to the EmployeeService class that will increment the salary of all employees (wouldn't we love to have such a method in reality?). The updated code is as follows:

    /**
     * This method is responsible to increment the salary
     * of all employees in the system by the given percentage.
     * It does this by calling the static giveIncrementOf method
     * on the Employee class.
     * @param percentage the percentage value by which
     * salaries would be increased
     * @return true if the increment was successful,
     * false if the increment failed because of some exception.
     */
    public boolean giveIncrementToAllEmployeesOf(int percentage) {
        try {
            Employee.giveIncrementOf(percentage);
            return true;
        } catch(Exception e) {
            return false;
        }
    }

The static method Employee.giveIncrementOf is as follows:

    /**
     * The method that is responsible to increment
     * salaries of all employees by the given percentage.
     * @param percentage the percentage value by which
     * salaries would be increased
     * Currently this method throws UnsupportedOperationException.
     */
    public static void giveIncrementOf(int percentage) {
        throw new UnsupportedOperationException();
    }

The earlier syntax would not work for mocking a void static method. The test case that mocks this method would look like the following:

    @RunWith(PowerMockRunner.class)
    @PrepareForTest(Employee.class)
    public class EmployeeServiceTest {

        @Test
        public void shouldReturnTrueWhenIncrementOf10PercentageIsGivenSuccessfully() {
            PowerMockito.mockStatic(Employee.class);
            PowerMockito.doNothing().when(Employee.class);
            Employee.giveIncrementOf(10);

            EmployeeService employeeService = new EmployeeService();

            Assert.assertTrue(employeeService.giveIncrementToAllEmployeesOf(10));
        }

        @Test
        public void shouldReturnFalseWhenIncrementOf10PercentageIsNotGivenSuccessfully() {
            PowerMockito.mockStatic(Employee.class);
            PowerMockito.doThrow(new IllegalStateException()).when(Employee.class);
            Employee.giveIncrementOf(10);

            EmployeeService employeeService = new EmployeeService();

            Assert.assertFalse(employeeService.giveIncrementToAllEmployeesOf(10));
        }
    }

Notice that we still need the two annotations @RunWith and @PrepareForTest, and we still need to inform PowerMock that we want to mock the static methods of the Employee class. Notice the syntax for PowerMockito.doNothing and PowerMockito.doThrow:

The PowerMockito.doNothing method tells PowerMock to literally do nothing when a certain method is called. The statement that follows the doNothing call sets up the mocked method; in this case it's the Employee.giveIncrementOf method. This essentially means that PowerMock will do nothing when the Employee.giveIncrementOf method is called.

The PowerMockito.doThrow method tells PowerMock to throw an exception when a certain method is called. The statement that follows the doThrow call tells PowerMock about the method that should throw an exception; in this case, it is again Employee.giveIncrementOf. Hence, when the Employee.giveIncrementOf method is called, PowerMock will throw an instance of IllegalStateException.

How it works...

PowerMock uses a custom class loader and bytecode manipulation to enable mocking of static methods. It does this by using the @RunWith and @PrepareForTest annotations.
The rule of thumb is: whenever we want to mock any method that returns a non-void value, we should use the PowerMockito.when().thenReturn() syntax. It's the same syntax for instance methods as well as static methods. But for methods that return void, the preceding syntax cannot work. Hence, we have to use PowerMockito.doNothing and PowerMockito.doThrow. For static methods, this syntax looks a bit like the record-playback style.

On a mocked instance created using PowerMock, we can choose to return canned values only for a few methods; PowerMock will provide default values for all the other methods. This means that if we did not provide any canned value for a method that returns an int value, PowerMock will mock such a method and return 0 (since 0 is the default value for the int datatype) when it is invoked.

There's more...

The syntax of PowerMockito.doNothing and PowerMockito.doThrow can be used on instance methods as well.

.doNothing and .doThrow on instance methods

The syntax on instance methods is simpler compared to the one used for static methods. Let's say we want to mock the instance method save on the Employee class. The save method returns void, hence we have to use the doNothing and doThrow syntax. The test code to achieve this is as follows:

    /**
     * The class that holds all unit tests for
     * the Employee class.
     * @author Deep Shah
     */
    public class EmployeeTest {

        @Test
        public void shouldNotDoAnythingIfEmployeeWasSaved() {
            Employee employee = PowerMockito.mock(Employee.class);
            PowerMockito.doNothing().when(employee).save();

            try {
                employee.save();
            } catch(Exception e) {
                Assert.fail("Should not have thrown an exception");
            }
        }

        @Test(expected = IllegalStateException.class)
        public void shouldThrowAnExceptionIfEmployeeWasNotSaved() {
            Employee employee = PowerMockito.mock(Employee.class);
            PowerMockito.doThrow(new IllegalStateException()).when(employee).save();

            employee.save();
        }
    }

To inform PowerMock about the method to mock, we just have to invoke it on the return value of the when method. The line PowerMockito.doNothing().when(employee).save() essentially means: do nothing when the save method is invoked on the mocked Employee instance. Similarly, PowerMockito.doThrow(new IllegalStateException()).when(employee).save() means: throw IllegalStateException when the save method is invoked on the mocked Employee instance. Notice that the syntax is more fluent when we want to mock void instance methods.

Summary

In this article, we saw how easily we can mock static methods.

Resources for Article:

Further resources on this subject:
Important features of Mockito [Article]
Python Testing: Mock Objects [Article]
Easily Writing SQL Queries with Spring Python [Article]

ASP.NET Core High Performance

Packt
07 Mar 2018
20 min read
In this article by James Singleton, the author of the book ASP.NET Core High Performance, we will see that many things have changed for version 2 of the ASP.NET Core framework and there have also been a lot of improvements to the various supporting technologies. Now is a great time to give it a try, as the code has stabilized and the pace of change has settled down a bit. There were significant differences between the original release candidate and version 1 of ASP.NET Core and yet more alterations between version 1 and version 2. Some of these changes have been controversial, particularly around tooling but the scope of .NET Core has grown massively and ultimately this is a good thing. One of the highest profile differences between 1 and 2 is the change (and some would say regression) away from the new JavaScript Object Notation (JSON) based project format and back towards the Extensible Markup Language (XML) based .csproj format. However, it is a simplified and stripped down version compared to the format used in the original .NET Framework. There has been a move towards standardization between the different .NET frameworks and .NET Core 2 has a much larger API surface as a result. The interface specification known as .NET Standard 2 covers the intersection between .NET Core, the .NET Framework, and Xamarin. There is also an effort to standardize Extensible Application Markup Language (XAML) into the XAML Standard that will work across Universal Windows Platform (UWP) and Xamarin.Forms apps. C# and .NET can be used on a huge amount of platforms and in a large number of use cases, from server side web applications to mobile apps and even games using engines like Unity 3D. In this article we will go over the changes between version 1 and version 2 of the new Core releases. We will also look at some new features of the C# language. There are many useful additions and a plethora of performance improvement too. In this article we will cover: .NET Core 2 scope increases ASP.NET Core 2 additions Performance improvements .NET Standard 2 New C# 6 features New C# 7 features JavaScript considerations New in Core 2 There are two different products in the Core family. The first is .NET Core, which is the low level framework providing basic libraries. This can be used to write console applications and it is also the foundation for higher level application frameworks. The second is ASP.NET Core, which is a framework for building web applications that run on a server and service clients (usually web browsers). This was originally the only workload for .NET Core until it grew in scope to handle a more diverse range of scenarios. We'll cover the differences in the new versions separately for each of these frameworks. The changes in .NET Core will also apply to ASP.NET Core, unless you are running it on top of the .NET Framework version 4. New in .NET Core 2 The main focus of .NET Core 2 is the huge increase in scope. There are more than double the number of APIs included and it supports .NET Standard 2 (covered later in this article). You can also refer .NET Framework assemblies with no recompile required. This should just work as long as the assemblies only use APIs that have been implemented in .NET Core. This means that more NuGet packages will work with .NET Core. Finding if your favorite library was supported or not, was always a challenge with the previous version. The author set up a repository listing package compatibility to help with this. 
You can find the ASP.NET Core Library and Framework Support (ANCLAFS) list at github.com/jpsingleton/ANCLAFS and also via anclafs.com. If you want to make a change then please send a pull request. Hopefully in the future all packages will support Core and this list will no longer be required. There is now support in Core for Visual Basic and for more Linux distributions. You can also perform live unit testing with Visual Studio 2017, much like the old NCrunch extension. Performance improvements Some of the more interesting changes for 2 are the performance improvements over the original .NET Framework. There have been tweaks to the implementations of many of the framework data structures. Some of the classes and methods that have seen speed improvements or memory reduction include: List<T> Queue<T> SortedSet<T> ConcurrentQueue<T> Lazy<T> Enumerable.Concat() Enumerable.OrderBy() Enumerable.ToList() Enumerable.ToArray() DeflateStream SHA256 BigInteger BinaryFormatter Regex WebUtility.UrlDecode() Encoding.UTF8.GetBytes() Enum.Parse() DateTime.ToString() String.IndexOf() String.StartsWith() FileStream Socket NetworkStream SslStream ThreadPool SpinLock We won't go into specific benchmarks here because benchmarking is hard and the improvements you see will clearly depend on your usage. The thing to take away is that lots of work has been done to increase the performance of .NET Core, both over the previous version 1 and .NET Framework 4.7. Many of these changes have come from the community, which shows one of the benefits of open source development. Some of these advances will probably work their way back into a future version of the regular .NET Framework too. There have also been improvements to the RyuJIT compiler for .NET Core 2. As just one example, finally blocks are now almost as efficient as not using exception handing at all, in the normal situation where no exceptions are thrown. You now have no excuses not to liberally use try and using blocks, for example by having checked arithmetic to avoid integer overflows. New in ASP.NET Core 2 ASP.NET Core 2 takes advantage of all the improvements to .NET Core 2, if that is what you choose to run it on. It will also run on .NET Framework 4.7 but it's best to run it on .NET Core, if you can. With the increase in scope and support of .NET Core 2 this should be less of a problem than it was previously. It includes a new meta package so you only need to reference one NuGet item to get all the things! However, it is still composed of individual packages if you want to pick and choose. They haven't reverted back to the bad old days of one huge System.Web assembly. A new package trimming feature will ensure that if you don't use a package then its binaries won't be included in your deployment, even if you use the meta package to reference it. There is also a sensible default for setting up a web host configuration. You don't need to add logging, Kestrel, and IIS individually anymore. Logging has also got simpler and, as it is built in, you have no excuses not to use it from the start. A new feature is support for controller-less Razor Pages. These are exactly what they sound like and allow you to write pages with just a Razor template. This is similar to the Web Pages product, not to be confused with Web Forms. There is talk of Web Forms making a comeback, but if so then hopefully the abstraction will be thought out more and it won't carry so much state around with it. There is a new authentication model that makes better use of Dependency Injection. 
ASP.NET Core Identity allows you to use OpenID, OAuth 2 and get access tokens for your APIs. A nice time saver is you no longer need to emit anti-forgery tokens in forms (to prevent Cross Site Request Forgery) with attributes to validate them on post methods. This is all done automatically for you, which should prevent you forgetting to do this and leaving a security vulnerability. Performance improvements There have been additional increases to performance in ASP.NET Core that are not related to the improvements in .NET Core, which also help. Startup time has been reduced by shipping binaries that have already been through the Just In Time compilation process. Although not a new feature in ASP.NET Core 2, output caching is now available. In 1.0, only response caching was included, which simply set the correct HTTP headers. In 1.1, an in-memory cache was added and today you can use local memory or a distributed cache kept in SQL Server or Redis. Standards Standards are important, that's why we have so many of them. The latest version of the .NET Standard is 2 and .NET Core 2 implements this. A good way to think about .NET Standard is as an interface that a class would implement. The interface defines an abstract API but the concrete implementation of that API is left up to the classes that inherit from it. Another way to think about it is like the HTML5 standard that is supported by different web browsers. Version 2 of the .NET Standard was defined by looking at the intersection of the .NET Framework and Mono. This standard was then implemented by .NET Core 2, which is why is contains so many more APIs than version 1. Version 4.6.1 of the .NET Framework also implements .NET Standard 2 and there is work to support the latest versions of the .NET Framework, UWP and Xamarin (including Xamarin.Forms). There is also the new XAML Standard that aims to find the common ground between Xamarin.Forms and UWP. Hopefully it will include Windows Presentation Foundation (WPF) in the future. If you create libraries and packages that use these standards then they will work on all the platforms that support them. As a developer simply consuming libraries, you don't need to worry about these standards. It just means that you are more likely to be able to use the packages that you want, on the platforms you are working with. New C# features It not just the frameworks and libraries that have been worked on. The underlying language has also had some nice new features added. We will focus on C# here as it is the most popular language for the Common Language Runtime (CLR). Other options include Visual Basic and the functional programming language F#. C# is a great language to work with, especially when compared to a language like JavaScript. Although JavaScript is great for many reasons (such as its ubiquity and the number of frameworks available), the elegance and design of the language is not one of them. Many of these new features are just syntactic sugar, which means they don't add any new functionality. They simply provide a more succinct and easier to read way of writing code that does the same thing. C# 6 Although the latest version of C# is 7, there are some very handy features in C# 6 that often go underused. Also, some of the new additions in 7 are improvements on features added in 6 and would not make much sense without context. We will quickly cover a few features of C# 6 here, in case you are unaware of how useful they can be. 
String interpolation String interpolation is a more elegant and easier to work with version of the familiar string format method. Instead of supplying the arguments to embed in the string placeholders separately, you can now embed them directly in the string. This is far more readable and less error prone. Let us demonstrate with an example. Consider the following code that embeds an exception in a string. catch (Exception e) { Console.WriteLine("Oh dear, oh dear! {0}", e); } This embeds the first (and in this case only) object in the string at the position marked by the zero. It may seem simple but this quickly gets complex if you have many objects and want to add another at the start. You then have to correctly renumber all the placeholders. Instead you can now prefix the string with a dollar character and embed the object directly in it. This is shown in the following code that behaves the same as the previous example. catch (Exception e) { Console.WriteLine($"Oh dear, oh dear! {e}"); } The ToString() method on an exception outputs all the required information including name, message, stack trace and any inner exceptions. There is no need to deconstruct it manually, you may even miss things if you do. You can also use the same format strings as you are used to. Consider the following code that formats a date in a custom manner. Console.WriteLine($"Starting at: {DateTimeOffset.UtcNow:yyyy/MM/ddHH:mm:ss}"); When this feature was being built, the syntax was slightly different. So be wary of any old blog posts or documentation that may not be correct. Null conditional The null conditional operator is a way of simplifying null checks. You can now inline a check for null rather than using an if statement or ternary operator. This makes it easier to use in more places and will hopefully help you to avoid the dreaded null reference exception. You can avoid doing a manual null check like in the following code. int? length = (null == bytes) ? null : (int?)bytes.Length; This can now be simplified to the following statement by adding a question mark. int? length = bytes?.Length; Exception filters You can filter exceptions more easily with the when keyword. You no longer need to catch every type of exception that you are interested in and then filter manually inside the catch block. This is a feature that was already present in VB and F# so it's nice that C# has finally caught up. There are some small benefits to this approach. For example, if your filter is not matched then the exception can still be caught by other catch blocks in the same try statement. You also don't need to remember to re-throw the exception to avoid it being swallowed. This helps with debugging, as Visual Studio will no longer break, as it would when you throw. For example, you could check to see if there is a message in the exception and handle it differently, as shown here. catch (Exception e) when (e?.Message?.Length> 0) When this feature was in development, a different keyword (if) was used. So be careful of any old information online. One thing to keep in mind is that relying on a particular exception message is fragile. If your application is localized then the message may be in a different language than what you expect. This holds true outside of exception filtering too. Asynchronous availability Another small improvement is that you can use the await keyword inside catch and finally blocks. This was not initially allowed when this incredibly useful feature was added in C# 5. There is not a lot more to say about this. 
The implementation is complex but you don't need to worry about this unless you're interested in the internals. From a developer point of view, it just works, as in this trivial example. catch (Exception e) when (e?.Message?.Length> 0) { await Task.Delay(200); } This feature has been improved in C# 7, so read on. You will see async and await used a lot. Asynchronous programming is a great way of improving performance and not just from within your C# code. Expression bodies Expression bodies allow you to assign an expression to a method or getter property using the lambda arrow operator (=>) that you may be familiar with from fluent LINQ syntax. You no longer need to provide a full statement or method signature and body. This feature has also been improved in C# 7 so see the examples in the next section. For example, a getter property can be implemented like so. public static string Text => $"Today: {DateTime.Now:o}"; A method can be written in a similar way, such as the following example. private byte[] GetBytes(string text) => Encoding.UTF8.GetBytes(text); C# 7 The most recent version of the C# language is 7 and there are yet more improvements to readability and ease of use. We'll cover a subset of the more interesting changes here. Literals There are couple of minor additional capabilities and readability enhancements when specifying literal values in code. You can specify binary literals, which means you don't have to work out how to represent them using a different base anymore. You can also put underscores anywhere within a literal to make it easier to read the number. The underscores are ignored but allow you to separate digits into convention groupings. This is particularly well suited to the new binary literal as it can be very verbose listing out all those zeros and ones. Take the following example using the new 0b prefix to specify a binary literal that will be rendered as an integer in a string. Console.WriteLine($"Binary solo! {0b0000001_00000011_000000111_00001111}"); You can do this with other bases too, such as this integer, which is formatted to use a thousands separator. Console.WriteLine($"Over {9_000:#,0}!"); // Prints "Over 9,000!" Tuples One of the big new features in C# 7 is support for tuples. Tuples are groups of values and you can now return them directly from method calls. You are no longer restricted to returning a single value. Previously you could work around this limitation in a few sub-optimal ways, including creating a custom complex object to return, perhaps with a Plain Old C# Object (POCO) or Data Transfer Object (DTO), which are the same thing. You could have also passed in a reference using the ref or out keywords, which although there are improvements to the syntax are still not great. There was System.Tuple in C# 6 but this wasn't ideal. It was a framework feature, rather than a language feature and the items were only numbered and not named. With C# 7 tuples, you can name the objects and they make a great alternative to anonymous types, particularly in LINQ query expression lambda functions. As an example, if you only want to work on a subset of the data available, perhaps when filtering a database table with an O/RM such as Entity Framework, then you could use a tuple for this. The following example returns a tuple from a method. You may need to add the System.ValueTupleNuGet package for this to work. 
private static (int one, string two, DateTime three) GetTuple() { return (one: 1, two: "too", three: DateTime.UtcNow); } You can also use tuples in string interpolation and all the values are rendered, as shown here. Console.WriteLine($"Tuple = {GetTuple()}"); Out variables If you did want to pass parameters into a method for modification then you have always needed to declare them first. This is no longer necessary and you can simply declare the variables at the point you pass them in. You can also declare a variable to be discarded by using an underscore. This is particularly useful if you don't want to use the returned value, for example in some of the try parse methods of the native framework data types. Here we parse a date without declaring the dt variable first. DateTime.TryParse("2017-08-09", out var dt); In this example we test for an integer but we don't care what it is. var isInt = int.TryParse("w00t", out _); References You can now return values by reference from a method as well as consume them. This is a little like working with pointers in C but safer. For example, you can only return references that were passed into the method and you can't modify references to point to a different location in memory. This is a very specialist feature but in certain niche situations it can dramatically improve performance. Given the following method. private static ref string GetFirstRef(ref string[] texts) { if (texts?.Length> 0) { return ref texts[0]; } throw new ArgumentOutOfRangeException(); } You could call it like so, and the second console output line would appear differently (one instead of 1). var strings = new string[] { "1", "2" }; ref var first = ref GetFirstRef(ref strings); Console.WriteLine($"{strings?[0]}"); // 1 first = "one"; Console.WriteLine($"{strings?[0]}"); // one Patterns The other big addition is you can now match patterns in C# 7 using the is keyword. This simplifies testing for null and matching against types, among other things. It also lets you easily use the cast value. This is a simpler alternative to using full polymorphism (where a derived class can be treated as a base class and override methods). However, if you control the code base and can make use of proper polymorphism, then you should still do this and follow good Object-Oriented Programming (OOP) principles. In the following example, pattern matching is used to parse the type and value of an unknown object. private static int PatternMatch(object obj) { if (obj is null) { return 0; } if (obj is int i) { return i++; } if (obj is DateTime d || (obj is string str && DateTime.TryParse(str, out d))) { return d.DayOfYear; } return -1; } You can also use pattern matching in the cases of a switch statement and you can switch on non-primitive types such as custom objects. More expression bodies Expression bodies are expanded from the offering in C# 6 and you can now use them in more places, for example as object constructors and property setters. Here we extend our previous example to include setting the value on the property we were previously just reading. private static string text; public static string Text { get => text ?? $"Today: {DateTime.Now:r}"; set => text = value; } More asynchronous improvements There have been some small improvements to what async methods can return and, although small, they could offer big performance gains in certain situations. You no longer have to return a task, which can be beneficial if the value is already available. 
This can reduce the overheads of using async methods and creating a task object. JavaScript You can't write a book on web applications without covering JavaScript. It is everywhere. If you write a web app that does a full page load on every request and it's not a simple content site then it will feel slow. Users expect responsiveness. If you are a back-end developer then you may think that you don't have to worry about this. However, if you are building an API then you may want to make it easy to consume with JavaScript and you will need to make sure that your JSON is correctly and quickly serialized. Even if you are building a Single Page Application (SPA) in JavaScript (or TypeScript) that runs in the browser, the server can still play a key role. You can use SPA services to run Angular or React on the server and generate the initial output. This can increase performance, as the browser has something to render immediately. For example, there is a project called React.NET that integrates React with ASP.NET, and it supports ASP.NET Core. If you have been struggling to keep up with the latest developments in the .NET world then JavaScript is on another level. There seems to be something new almost every week and this can lead to framework fatigue and the paradox of choice. There is so much to choose from that you don't know what to pick. Summary In this article, you have seen a brief high-level summary of what has changed in .NET Core 2 and ASP.NET Core 2, compared to the previous versions. You are also now aware of .NET Standard 2 and what it is for. We have shown examples of some of the new features available in C# 6 and C# 7. These can be very useful in letting you write more with less, and in making your code more readable and easier to maintain.

Patterns of Traversing

Packt
25 Sep 2015
15 min read
In this article by Ryan Lemmer, author of the book Haskell Design Patterns, we will focus on two fundamental patterns of recursion: fold and map. The more primitive forms of these patterns are to be found in the Prelude, the "old part" of Haskell. With the introduction of Applicative came more powerful mapping (traversal), which opened the door to type-level folding and mapping in Haskell. First, we will look at how Prelude's list fold is generalized to all Foldable containers. Then, we will follow the generalization of list map to all Traversable containers. Our exploration of fold and map culminates with the Lens library, which raises Foldable and Traversable to an even higher level of abstraction and power. In this article, we will cover the following:

Traversable
Modernizing Haskell
Lenses

(For more resources related to this topic, see here.)

Traversable

As with Prelude.foldM, mapM fails us beyond lists; for example, we cannot mapM over the Tree from earlier:

main = mapM doF aTree >>= print -- INVALID

The Traversable type-class is to map what Foldable is to fold:

-- required: traverse or sequenceA
class (Functor t, Foldable t) => Traversable (t :: * -> *) where
  -- APPLICATIVE form
  traverse  :: Applicative f => (a -> f b) -> t a -> f (t b)
  sequenceA :: Applicative f => t (f a) -> f (t a)
  -- MONADIC form (redundant)
  mapM      :: Monad m => (a -> m b) -> t a -> m (t b)
  sequence  :: Monad m => t (m a) -> m (t a)

The traverse function generalizes our mapA function, which was written for lists, to all Traversable containers. Similarly, Traversable.mapM is a more general version of Prelude.mapM for lists:

mapM :: Monad m => (a -> m b) -> [a] -> m [b]
mapM :: Monad m => (a -> m b) -> t a -> m (t b)

The Traversable type-class was introduced along with Applicative:

"we introduce the type class Traversable, capturing functorial data structures through which we can thread an applicative computation"
Applicative Programming with Effects - McBride and Paterson

A Traversable Tree

Let's make our Traversable Tree.
First, we'll do it the hard way: – a Traversable must also be a Functor and Foldable: instance Functor Tree where fmap f (Leaf x) = Leaf (f x) fmap f (Node x lTree rTree) = Node (f x) (fmap f lTree) (fmap f rTree) instance Foldable Tree where foldMap f (Leaf x) = f x foldMap f (Node x lTree rTree) = (foldMap f lTree) `mappend` (f x) `mappend` (foldMap f rTree) --traverse :: Applicative ma => (a -> ma b) -> mt a -> ma (mt b) instance Traversable Tree where traverse g (Leaf x) = Leaf <$> (g x) traverse g (Node x ltree rtree) = Node <$> (g x) <*> (traverse g ltree) <*> (traverse g rtree) data Tree a = Node a (Tree a) (Tree a) | Leaf a deriving (Show) aTree = Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11)) -- import Data.Traversable main = traverse doF aTree where doF n = do print n; return (n * 2) The easier way to do this is to auto-implement Functor, Foldable, and Traversable: {-# LANGUAGE DeriveFunctor #-} {-# LANGUAGE DeriveFoldable #-} {-# LANGUAGE DeriveTraversable #-} import Data.Traversable data Tree a = Node a (Tree a) (Tree a)| Leaf a deriving (Show, Functor, Foldable, Traversable) aTree = Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11)) main = traverse doF aTree where doF n = do print n; return (n * 2) Traversal and the Iterator pattern The Gang of Four Iterator pattern is concerned with providing a way "...to access the elements of an aggregate object sequentially without exposing its underlying representation"                                       "Gang of Four" Design Patterns, Gamma et al, 1995 In The Essence of the Iterator Pattern, Jeremy Gibbons shows precisely how the Applicative traversal captures the Iterator pattern. The Traversable.traverse class is the Applicative version of Traversable.mapM, which means it is more general than mapM (because Applicative is more general than Monad). Moreover, because mapM does not rely on the Monadic bind chain to communicate between iteration steps, Monad is a superfluous type for mapping with effects (Applicative is sufficient). In other words, Applicative traverse is superior to Monadic traversal (mapM): "In addition to being parametrically polymorphic in the collection elements, the generic traverse operation is parametrised along two further dimensions: the datatype being tra- versed, and the applicative functor in which the traversal is interpreted" "The improved compositionality of applicative functors over monads provides better glue for fusion of traversals, and hence better support for modular programming of iterations"                                        The Essence of the Iterator Pattern - Jeremy Gibbons Modernizing Haskell 98 The introduction of Applicative, along with Foldable and Traversable, had a big impact on Haskell. Foldable and Traversable lift Prelude fold and map to a much higher level of abstraction. Moreover, Foldable and Traversable also bring a clean separation between processes that preserve or discard the shape of the structure that is being processed. Traversable describes processes that preserve that shape of the data structure being traversed over. Foldable processes, in turn, discard or transform the shape of the structure being folded over. Since Traversable is a specialization of Foldable, we can say that shape preservation is a special case of shape transformation. 
This line between shape preservation and transformation is clearly visible from the fact that functions that discard their results (for example, mapM_, forM_, sequence_, and so on) are in Foldable, while their shape-preserving counterparts are in Traversable. Due to the relatively late introduction of Applicative, the benefits of Applicative, Foldable, and Traversable have not found their way into the core of the language. This is due to the change with the Foldable Traversable In Prelude proposal (planned for inclusion in the core libraries from GHC 7.10). For more information, visit https://wiki.haskell.org/Foldable_Traversable_In_Prelude. This will involve replacing less generic functions in Prelude, Control.Monad, and Data.List with their more polymorphic counterparts in Foldable and Traversable. There have been objections to the movement to modernize, the main concern being that more generic types are harder to understand, which may compromise Haskell as a learning language. These valid concerns will indeed have to be addressed, but it seems certain that the Haskell community will not resist climbing to new abstract heights. Lenses A Lens is a type that provides access to a particular part of a data structure. Lenses express a high-level pattern for composition. However, Lens is also deeply entwined with Traversable, and so we describe it in this article instead. Lenses relate to the getter and setter functions, which also describe access to parts of data structures. To find our way to the Lens abstraction (as per Edward Kmett's Lens library), we'll start by writing a getter and setter to access the root node of a Tree. Deriving Lens Returning to our Tree from earlier: data Tree a = Node a (Tree a) (Tree a) | Leaf a deriving (Show) intTree = Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11)) listTree = Node [1,1] (Leaf [2,1]) (Node [3,2] (Leaf [5,2]) (Leaf [7,4])) tupleTree = Node (1,1) (Leaf (2,1)) (Node (3,2) (Leaf (5,2)) (Leaf (7,4))) Let's start by writing generic getter and setter functions: getRoot :: Tree a -> a getRoot (Leaf z) = z getRoot (Node z _ _) = z setRoot :: Tree a -> a -> Tree a setRoot (Leaf z) x = Leaf x setRoot (Node z l r) x = Node x l r main = do print $ getRoot intTree print $ setRoot intTree 11 print $ getRoot (setRoot intTree 11) If we want to pass in a setter function instead of setting a value, we use the following: fmapRoot :: (a -> a) -> Tree a -> Tree a fmapRoot f tree = setRoot tree newRoot where newRoot = f (getRoot tree) We have to do a get, apply the function, and then set the result. This double work is akin to the double traversal we saw when writing traverse in terms of sequenceA. In that case we resolved the issue by defining traverse first (and then sequenceA i.t.o. traverse): We can do the same thing here by writing fmapRoot to work in a single step (and then rewriting setRoot' i.t.o. 
fmapRoot'):

fmapRoot' :: (a -> a) -> Tree a -> Tree a
fmapRoot' f (Leaf z) = Leaf (f z)
fmapRoot' f (Node z l r) = Node (f z) l r

setRoot' :: Tree a -> a -> Tree a
setRoot' tree x = fmapRoot' (\_ -> x) tree

main = do
  print $ setRoot' intTree 11
  print $ fmapRoot' (*2) intTree

The fmapRoot' function delivers a function to a particular part of the structure and returns the same structure:

fmapRoot' :: (a -> a) -> Tree a -> Tree a

To allow for I/O, we need a new function:

fmapRootIO :: (a -> IO a) -> Tree a -> IO (Tree a)

We can generalize this beyond I/O to all Monads:

fmapM :: (a -> m a) -> Tree a -> m (Tree a)

It turns out that if we relax the requirement for Monad, and generalize f' to all the Functor container types, then we get a simple van Laarhoven Lens!

type Lens' s a = Functor f' => (a -> f' a) -> s -> f' s

The remarkable thing about a van Laarhoven Lens is that given the preceding function type, we also gain "get", "set", "fmap", "mapM", and many other functions and operators. The Lens function type signature is all it takes to make something a Lens that can be used with the Lens library. It is unusual to use a type signature as the "primary interface" for a library. The immediate benefit is that we can define a lens without referring to the Lens library. We'll explore more benefits and costs of this approach, but first let's write a few lenses for our Tree. The derivation of the Lens abstraction used here has been based on Jakub Arnold's Lens tutorial, which is available at http://blog.jakubarnold.cz/2014/07/14/lens-tutorial-introduction-part-1.html.

Writing a Lens

A Lens is said to provide focus on an element in a data structure. Our first lens will focus on the root node of a Tree. Using the lens type signature as our guide, we arrive at:

lens' :: Functor f' => (a -> f' a) -> s -> f' s
root  :: Functor f' => (a -> f' a) -> Tree a -> f' (Tree a)

Still, this is not very tangible; fmapRootIO is easier to understand, with the Functor f' being IO:

fmapRootIO :: (a -> IO a) -> Tree a -> IO (Tree a)
fmapRootIO g (Leaf z) = (g z) >>= return . Leaf
fmapRootIO g (Node z l r) = (g z) >>= return . (\x -> Node x l r)

displayM x = print x >> return x

main = fmapRootIO displayM intTree

If we drop down from Monad into Functor, we have a Lens for the root of a Tree:

root :: Functor f' => (a -> f' a) -> Tree a -> f' (Tree a)
root g (Node z l r) = fmap (\x -> Node x l r) (g z)
root g (Leaf z) = fmap Leaf (g z)

As Monad is a Functor, this function also works with Monadic functions:

main = root displayM intTree

As root is a lens, the Lens library gives us the following:

-- import Control.Lens
main = do
  -- GET
  print $ view root listTree
  print $ view root intTree
  -- SET
  print $ set root [42] listTree
  print $ set root 42 intTree
  -- FMAP
  print $ over root (+11) intTree

The over function is the lens way of fmap'ing a function into a Functor.

Composable getters and setters

Another Lens on Tree might be to focus on the rightmost leaf:

rightMost :: Functor f' => (a -> f' a) -> Tree a -> f' (Tree a)
rightMost g (Node z l r) = fmap (\r' -> Node z l r') (rightMost g r)
rightMost g (Leaf z) = fmap (\x -> Leaf x) (g z)

The Lens library provides several lenses for Tuple (for example, _1, which brings focus to the first Tuple element).
We can compose our rightMost lens with the Tuple lenses:

main = do
  print $ view rightMost tupleTree
  print $ set rightMost (0,0) tupleTree
  -- Compose Getters and Setters
  print $ view (rightMost._1) tupleTree
  print $ set (rightMost._1) 0 tupleTree
  print $ over (rightMost._1) (*100) tupleTree

A Lens can serve as a getter, setter, or "function setter". We are composing lenses using regular function composition (.)! Note that the order of composition is reversed: in (rightMost._1), the rightMost lens is applied before the _1 lens.

Lens Traversal

A Lens focuses on one part of a data structure, not several; for example, a lens cannot focus on all the leaves of a Tree:

set leaves 0 intTree
over leaves (+1) intTree

To focus on more than one part of a structure, we need a Traversal (the Lens generalization of Traversable). Whereas Lens relies on Functor, Traversal relies on Applicative. Other than this, the signatures are exactly the same:

traversal :: Applicative f' => (a -> f' a) -> Tree a -> f' (Tree a)
lens      :: Functor f'     => (a -> f' a) -> Tree a -> f' (Tree a)

A leaves Traversal delivers the setter function to all the leaves of the Tree:

leaves :: Applicative f' => (a -> f' a) -> Tree a -> f' (Tree a)
leaves g (Node z l r) = Node z <$> leaves g l <*> leaves g r
leaves g (Leaf z) = Leaf <$> (g z)

We can use the set and over functions with our new Traversal:

set leaves 0 intTree
over leaves (+1) intTree

Traversals compose seamlessly with Lenses:

main = do
  -- Compose Traversal + Lens
  print $ over (leaves._1) (*100) tupleTree
  -- Compose Traversal + Traversal
  print $ over (leaves.both) (*100) tupleTree
  -- map over each elem in target container (e.g. list)
  print $ over (leaves.mapped) (*(-1)) listTree
  -- Traversal with effects
  mapMOf leaves displayM tupleTree

(The both is a Tuple Traversal that focuses on both elements.)

Lens.Fold

The Lens.Traversal lifts Traversable into the realm of lenses:

main = do
  print $ sumOf leaves intTree
  print $ anyOf leaves (>0) intTree

The Lens Library

We used only "simple" Lenses so far. A fully parametrized Lens allows for replacing parts of a data structure with different types:

type Lens s t a b = Functor f' => (a -> f' b) -> s -> f' t
-- vs simple Lens
type Lens' s a = Lens s s a a

Lens library function names do their best not to clash with existing names, for example, by postfixing idiomatic function names with "Of" (sumOf, mapMOf, and so on), or by using different verb forms such as "droppingWhile" instead of "dropWhile". While this creates a burden, as one has to learn the new variations, it has a big plus point: it allows for easy unqualified import of the Lens library.

By leaving the Lens function type transparent (and not obfuscating it with a new type), we get Traversals by simply swapping out Functor for Applicative. We also get to define lenses without having to reference the Lens library. On the downside, Lens type signatures can be bewildering at first sight. They form a language of their own that requires effort to get used to, for example:

mapMOf :: Profunctor p => Over p (WrappedMonad m) s t a b -> p a (m b) -> s -> m t
foldMapOf :: Profunctor p => Accessing p r s a -> p a r -> s -> r

On the surface, the Lens library gives us composable getters and setters, but there is much more to Lenses than that. By generalizing Foldable and Traversable into Lens abstractions, the Lens library lifts Getters, Setters, Lenses, and Traversals into a unified framework in which they all compose together.
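To make the type-changing aspect of the fully parametrized Lens concrete, here is a small sketch of our own (not from the chapter text) that uses the Tuple lenses from Control.Lens; over may change the type of the focused element, and the overall type changes with it:

import Control.Lens

main :: IO ()
main = do
  -- shape- and type-preserving: (Int, Bool) stays (Int, Bool)
  print $ over _1 (+1) (1 :: Int, True)
  -- type-changing: the Int in the first slot becomes a String,
  -- so the pair changes from (Int, Bool) to (String, Bool)
  print $ over _1 show (1 :: Int, True)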
Edward Kmett's Lens library is a sprawling masterpiece that is sure to leave a lasting impact on idiomatic Haskell.

Summary

We started with lists (Haskell 98), then generalized to all Traversable containers (introduced in the mid-2000s). Following that, we saw how the Lens library (2012) places traversing in an even broader context. Lenses give us a unified vocabulary to navigate data structures, which explains why the library has been described as a "query language for data structures".

Resources for Article:

Further resources on this subject:
Plotting in Haskell [article]
The Hunt for Data [article]
Getting started with Haskell [article]

Push your data to the Web

Packt
22 Feb 2016
27 min read
This article covers the following topics: An introduction to the Shiny app framework Creating your first Shiny app The connection between the server file and the user interface The concept of reactive programming Different types of interface layouts, widgets, and Shiny tags How to create a dynamic user interface Ways to share your Shiny applications with others How to deploy Shiny apps to the web (For more resources related to this topic, see here.) Introducing Shiny – the app framework The Shiny package delivers a powerful framework to build fully featured interactive Web applications just with R and RStudio. Basic Shiny applications typically consist of two components: ~/shinyapp |-- ui.R |-- server.R While the ui.R function represents the appearance of the user interface, the server.R function contains all the code for the execution of the app. The look of the user interface is based on the famous Twitter bootstrap framework, which makes the look and layout highly customizable and fully responsive. In fact, you only need to know R and how to use the shiny package to build a pretty web application. Also, a little knowledge of HTML, CSS, and JavaScript may help. If you want to check the general possibilities and what is possible with the Shiny package, it is advisable to take a look at the inbuilt examples. Just load the library and enter the example name: library(shiny) runExample("01_hello") As you can see, running the first example opens the Shiny app in a new window. This app creates a simple histogram plot where you can interactively change the number of bins. Further, this example allows you to inspect the corresponding ui.R and server.R code files. There are currently eleven inbuilt example apps: 01_hello 02_text 03_reactivity 04_mpg 05_sliders 06_tabsets 07_widgets 08_html 09_upload 10_download 11_timer These examples focus mainly on the user interface possibilities and elements that you can create with Shiny. Creating a new Shiny web app with RStudio RStudio offers a fast and easy way to create the basis of every new Shiny app. Just click on New Project and select the New Directory option in the newly opened window: After that, click on the Shiny Web Application field: Give your new app a name in the next step, and click on Create Project: RStudio will then open a ready-to-use Shiny app by opening a prefilled ui.R and server.R file: You can click on the now visible Run App button in the right corner of the file pane to display the prefilled example application. Creating your first Shiny application In your effort to create your first Shiny application, you should first create or consider rough sketches for your app. Questions that you might ask in this context are, What do I want to show? How do I want it to show?, and so on. Let's say we want to create an application that allows users to explore some of the variables of the mtcars dataset. The data was extracted from the 1974 Motor Trend US magazine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973–74 models). Sketching the final app We want the user of the app to be able to select one out of the three variables of the dataset that gets displayed in a histogram. Furthermore, we want users to get a summary of the dataset under the main plot. So, the following figure could be a rough project sketch: Constructing the user interface for your app We will reuse the already opened ui.R file from the RStudio example, and adapt it to our needs. 
The layout of the ui.R file for your first app is controlled by nested Shiny functions and looks like the following lines: library(shiny) shinyUI(pageWithSidebar( headerPanel("My First Shiny App"), sidebarPanel( selectInput(inputId = "variable", label = "Variable:", choices = c ("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( plotOutput("carsPlot"), verbatimTextOutput ("carsSummary") ) )) Creating the server file The server file holds all the code for the execution of the application: library(shiny) library(datasets) shinyServer(function(input, output) { output$carsPlot <- renderPlot({ hist(mtcars[,input$variable], main = "Histogram of mtcars variables", xlab = input$variable) }) output$carsSummary <- renderPrint({ summary(mtcars[,input$variable]) }) }) The final application After changing the ui.R and the server.R files according to our needs, just hit the Run App button and the final app opens in a new window: As planned in the app sketch, the app offers the user a drop-down menu to choose the desired variable on the left side, and shows a histogram and data summary of the selected variable on the right side. Deconstructing the final app into its components For a better understanding of the Shiny application logic and the interplay of the two main files, ui.R and server.R, we will disassemble your first app again into its individual parts. The components of the user interface We have divided the user interface into three parts: After loading the Shiny library, the complete look of the app gets defined by the shinyUI() function. In our app sketch, we chose a sidebar look; therefore, the shinyUI function holds the argument, pageWithSidebar(): library(shiny) shinyUI(pageWithSidebar( ... The headerPanel() argument is certainly the simplest component, since usually only the title of the app will be stored in it. In our ui.R file, it is just a single line of code: library(shiny) shinyUI(pageWithSidebar( titlePanel("My First Shiny App"), ... The sidebarPanel() function defines the look of the sidebar, and most importantly, handles the input of the variables of the chosen mtcars dataset: library(shiny) shinyUI(pageWithSidebar( titlePanel("My First Shiny App"), sidebarPanel( selectInput(inputId = "variable", label = "Variable:", choices = c ("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), ... Finally, the mainPanel() function ensures that the output is displayed. In our case, this is the histogram and the data summary for the selected variables: library(shiny) shinyUI(pageWithSidebar( titlePanel("My First Shiny App"), sidebarPanel( selectInput(inputId = "variable", label = "Variable:", choices = c ("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( plotOutput("carsPlot"), verbatimTextOutput ("carsSummary") ) )) The server file in detail While the ui.R file defines the look of the app, the server.R file holds instructions for the execution of the R code. Again, we use our first app to deconstruct the related server.R file into its main important parts. After loading the needed libraries, datasets, and further scripts, the function, shinyServer(function(input, output) {} ), defines the server logic: library(shiny) library(datasets) shinyServer(function(input, output) { The marked lines of code that follow translate the inputs of the ui.R file into matching outputs. 
In our case, the server side output$ object is assigned to carsPlot, which in turn was called in the mainPanel() function of the ui.R file as plotOutput(). Moreover, the render* function, in our example it is renderPlot(), reflects the type of output. Of course, here it is the histogram plot. Within the renderPlot() function, you can recognize the input$ object assigned to the variables that were defined in the user interface file: library(shiny) library(datasets) shinyServer(function(input, output) { output$carsPlot <- renderPlot({ hist(mtcars[,input$variable], main = "Histogram of mtcars variables", xlab = input$variable) }) ... In the following lines, you will see another type of the render function, renderPrint() , and within the curly braces, the actual R function, summary(), with the defined input variable: library(shiny) library(datasets) shinyServer(function(input, output) { output$carsPlot <- renderPlot({ hist(mtcars[,input$variable], main = "Histogram of mtcars variables", xlab = input$variable) }) output$carsSummary <- renderPrint({ summary(mtcars[,input$variable]) }) }) There are plenty of different render functions. The most used are as follows: renderPlot: This creates normal plots renderPrin: This gives printed output types renderUI: This gives HTML or Shiny tag objects renderTable: This gives tables, data frames, and matrices renderText: This creates character strings Every code outside the shinyServer() function runs only once on the first launch of the app, while all the code in between the brackets and before the output functions runs as often as a user visits or refreshes the application. The code within the output functions runs every time a user changes the widget that belongs to the corresponding output. The connection between the server and the ui file As already inspected in our decomposed Shiny app, the input functions of the ui.R file are linked with the output functions of the server file. The following figure illustrates this again: The concept of reactivity Shiny uses a reactive programming model, and this is a big deal. By applying reactive programming, the framework is able to be fast, efficient, and robust. Briefly, changing the input in the user interface, Shiny rebuilds the related output. Shiny uses three reactive objects: Reactive source Reactive conductor Reactive endpoint For simplicity, we use the formal signs of the RStudio documentation: The implementation of a reactive source is the reactive value; that of a reactive conductor is a reactive expression; and the reactive endpoint is also called the observer. The source and endpoint structure As taught in the previous section, the defined input of the ui.R links is the output of the server.R file. For simplicity, we use the code from our first Shiny app again, along with the introduced formal signs: ... output$carsPlot <- renderPlot({ hist(mtcars[,input$variable], main = "Histogram of mtcars variables", xlab = input$variable) }) ... The input variable, in our app these are the Horsepower; Miles per Gallon, and Number of Carburetors choices, represents the reactive source. The histogram called carsPlot stands for the reactive endpoint. In fact, it is possible to link the reactive source to numerous reactive endpoints, and also conversely. In our Shiny app, we also connected the input variable to our first and second output—carsSummary: ... 
output$carsPlot <- renderPlot({ hist(mtcars[,input$variable], main = "Histogram of mtcars variables", xlab = input$variable) }) output$carsSummary <- renderPrint({ summary(mtcars[,input$variable]) }) ... To sum it up, this structure ensures that every time a user changes the input, the output refreshes automatically and accordingly. The purpose of the reactive conductor The reactive conductor differs from the reactive source and the endpoint is so far that this reactive type can be dependent and can have dependents. Therefore, it can be placed between the source, which can only have dependents and the endpoint, which in turn can only be dependent. The primary function of a reactive conductor is the encapsulation of heavy and difficult computations. In fact, reactive expressions are caching the results of these computations. The following graph displays a possible connection of the three reactive types: In general, reactivity raises the impression of a logically working directional system; after input, the output occurs. You get the feeling that an input pushes information to an output. But this isn't the case. In reality, it works vice versa. The output pulls the information from the input. And this all works due to sophisticated server logic. The input sends a callback to the server, which in turn informs the output that pulls the needed value from the input and shows the result to the user. But of course, for a user, this all feels like an instant updating of any input changes, and overall, like a responsive app's behavior. Of course, we have just touched upon the main aspects of reactivity, but now you know what's really going on under the hood of Shiny. Discovering the scope of the Shiny user interface After you know how to build a simple Shiny application, as well as how reactivity works, let us take a look at the next step: the various resources to create a custom user interface. Furthermore, there are nearly endless possibilities to shape the look and feel of the layout. As already mentioned, the entire HTML, CSS, and JavaScript logic and functions of the layout options are based on the highly flexible bootstrap framework. And, of course, everything is responsive by default, which makes it possible for the final application layout to adapt to the screen of any device. Exploring the Shiny interface layouts Currently, there are four common shinyUI () page layouts: pageWithSidebar() fluidPage() navbarPage() fixedPage() These page layouts can be, in turn, structured with different functions for a custom inner arrangement structure of the page layout. In the following sections, we are introducing the most useful inner layout functions. As an example, we will use our first Shiny application again. The sidebar layout The sidebar layout, where the sidebarPanel() function is used as the input area, and the mainPanel() function as the output, just like in our first Shiny app. The sidebar layout uses the pageWithSidebar() function: library(shiny) shinyUI(pageWithSidebar( headerPanel("The Sidebar Layout"), sidebarPanel( selectInput(inputId = "variable", label = "This is the sidebarPanel", choices = c("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( tags$h2("This is the mainPanel"), plotOutput("carsPlot"), verbatimTextOutput("carsSummary") ) )) When you only change the first three functions, you can create exactly the same look as the application with the fluidPage() layout. 
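To close the reactivity discussion with something concrete, here is a small sketch of our own (not part of the original app) that adds a reactive conductor to the server file of our first Shiny app. The selectedData name is just an assumption for this example; the point is that the subsetting is computed once per input change, cached as a reactive expression, and shared by both endpoints:

library(shiny)
library(datasets)

shinyServer(function(input, output) {
  # Reactive conductor: re-computed only when input$variable changes,
  # and its cached result is reused by both outputs below
  selectedData <- reactive({
    mtcars[, input$variable]
  })
  output$carsPlot <- renderPlot({
    hist(selectedData(),
         main = "Histogram of mtcars variables",
         xlab = input$variable)
  })
  output$carsSummary <- renderPrint({
    summary(selectedData())
  })
})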
This is the sidebar layout with the fluidPage() function:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Sidebar Layout"),
  sidebarLayout(
    sidebarPanel(
      selectInput(inputId = "variable",
                  label = "This is the sidebarPanel",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    mainPanel(
      tags$h2("This is the mainPanel"),
      plotOutput("carsPlot"),
      verbatimTextOutput("carsSummary")
    )
  )
))

The grid layout

In the grid layout, rows are created with the fluidRow() function. The input and output are placed within freely customizable columns. Naturally, the maximum of 12 columns from the bootstrap grid system must be respected. This is the grid layout with the fluidPage() function and a 4-8 grid:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Grid Layout"),
  fluidRow(
    column(4,
      selectInput(inputId = "variable",
                  label = "Four-column input area",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    column(8,
      tags$h3("Eight-column output area"),
      plotOutput("carsPlot"),
      verbatimTextOutput("carsSummary")
    )
  )
))

As you can see from inspecting the previous ui.R file, the width of the columns is defined within the fluidRow() function, and the sum of these two columns adds up to 12. Since the allocation of the columns is completely flexible, you can also create something like the grid layout with the fluidPage() function and a 4-4-4 grid:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Grid Layout"),
  fluidRow(
    column(4,
      selectInput(inputId = "variable",
                  label = "Four-column input area",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    column(4,
      tags$h5("Four-column output area"),
      plotOutput("carsPlot")
    ),
    column(4,
      tags$h5("Another four-column output area"),
      verbatimTextOutput("carsSummary")
    )
  )
))

The tabset panel layout

The tabsetPanel() function can be built into the mainPanel() function of the aforementioned sidebar layout page. By applying this function, you can integrate several tabbed outputs into one view. This is the tabset layout with the fluidPage() function and three tab panels:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Tabset Layout"),
  sidebarLayout(
    sidebarPanel(
      selectInput(inputId = "variable",
                  label = "Select a variable",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    mainPanel(
      tabsetPanel(
        tabPanel("Plot", plotOutput("carsPlot")),
        tabPanel("Summary", verbatimTextOutput("carsSummary")),
        tabPanel("Raw Data", dataTableOutput("tableData"))
      )
    )
  )
))

After changing the code to include the tabsetPanel() function, the three tabs created with the tabPanel() function display their respective output. With the help of this layout, you no longer have to stack several outputs one below the other. Instead, you can display each output in its own tab, while the sidebar does not change. The position of the tabs is flexible and can be set to above, below, right, or left. For example, in the following detail of the code file, the position of the tabsetPanel() function was assigned as follows:

...
mainPanel(
  tabsetPanel(position = "below",
    tabPanel("Plot", plotOutput("carsPlot")),
    tabPanel("Summary", verbatimTextOutput("carsSummary")),
    tabPanel("Raw Data", tableOutput("tableData"))
  )
)
...
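The tabset examples above reference a "Raw Data" tab bound to a tableData output, which is not part of the server.R file we built earlier. As a minimal sketch of what the corresponding server-side addition could look like (assuming we simply want to show the raw mtcars data, which is an assumption on our part), dataTableOutput("tableData") would be paired with renderDataTable():

output$tableData <- renderDataTable({
  # return the raw dataset; Shiny renders it as an interactive table
  mtcars
})

If you use tableOutput("tableData") instead, as in the position example, the matching render function would be renderTable().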
The navlist panel layout The navlistPanel() function is similar to the tabsetPanel() function, and can be seen as an alternative if you need to integrate a large number of tabs. The navlistPanel() function also uses the tabPanel() function to include outputs: library(shiny) shinyUI(fluidPage( titlePanel("The Navlist Layout"), navlistPanel( "Discovering The Dataset", tabPanel("Plot", plotOutput("carsPlot")), tabPanel("Summary", verbatimTextOutput("carsSummary")), tabPanel("Another Plot", plotOutput("barPlot")), tabPanel("Even A Third Plot", plotOutput("thirdPlot"), "More Information", tabPanel("Raw Data", tableOutput("tableData")), tabPanel("More Datatables", tableOutput("moreData")) ) ))   The navbar page as the page layout In the previous examples, we have used the page layouts, fluidPage() and pageWithSidebar(), in the first line. But, especially when you want to create an application with a variety of tabs, sidebars, and various input and output areas, it is recommended that you use the navbarPage() layout. This function makes use of the standard top navigation of the bootstrap framework: library(shiny) shinyUI(navbarPage("The Navbar Page Layout", tabPanel("Data Analysis", sidebarPanel( selectInput(inputId = "variable", label = "Select a variable", choices = c("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( plotOutput("carsPlot"), verbatimTextOutput("carsSummary") ) ), tabPanel("Calculations" … ), tabPanel("Some Notes" … ) )) Adding widgets to your application After inspecting the most important page layouts in detail, we now look at the different interface input and output elements. By adding these widgets, panels, and other interface elements to an application, we can further customize each page layout. Shiny input elements Already, in our first Shiny application, we got to know a typical Shiny input element: the selection box widget. But, of course, there are a lot more widgets with different types of uses. All widgets can have several arguments; the minimum setup is to assign an inputId, which instructs the input slot to communicate with the server file, and a label to communicate with a widget. Each widget can also have its own specific arguments. As an example, we are looking at the code of a slider widget. In the previous screenshot are two versions of a slider; we took the slider range for inspection: sliderInput(inputId = "sliderExample", label = "Slider range", min = 0, max = 100, value = c(25, 75)) Besides the mandatory arguments, inputId and label, three more values have been added to the slider widget. The min and max arguments specify the minimum and maximum values that can be selected. In our example, these are 0 and 100. A numeric vector was assigned to the value argument, and this creates a double-ended range slider. This vector must logically be within the set minimum and maximum values. Currently, there are more than twenty different input widgets, which in turn are all individually configurable by assigning to them their own set of arguments. A brief overview of the output elements As we have seen, the output elements in the ui.R file are connected to the rendering functions in the server file. The mainly used output elements are: htmlOutput imageOutput plotOutput tableOutput textOutput verbatimTextOutput downloadButton Due to their unambiguous naming, the purpose of these elements should be clear. 
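To illustrate how these output elements pair with the render functions from the server file, here is a minimal sketch; the output names carsText and carsTable, and the statistics shown, are made up for illustration and are not part of our original app:

# ui.R (excerpt)
textOutput("carsText"),    # filled by renderText() on the server
tableOutput("carsTable")   # filled by renderTable() on the server

# server.R (excerpt)
output$carsText <- renderText({
  # a single character string built from the selected column
  paste("Mean of", input$variable, "is",
        round(mean(mtcars[, input$variable]), 2))
})
output$carsTable <- renderTable({
  # the first rows of the selected column as a small data frame
  head(mtcars[, input$variable, drop = FALSE])
})

This matches the pattern used throughout this article: each *Output() element in ui.R is rendered by the corresponding render*() function in server.R under the same output$ name.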
Individualizing your app even further with Shiny tags

Although you don't need to know HTML to create stunning Shiny applications, you have the option to create highly customized apps with the usage of raw HTML or so-called Shiny tags. To add raw HTML, you can use the HTML() function. We will focus on Shiny tags in the following list. Currently, there are over 100 different Shiny tag objects, which can be used to add text styling, colors, different headers, visual and audio elements, lists, and many more things. You can use these tags by writing tags$tagname. Following is a brief list of useful tags:

tags$h1: This is the first-level header; of course, you can also use the known h1-h6
tags$hr: This makes a horizontal line, also known as a thematic break
tags$br: This makes a line break, a popular way to add some space
tags$strong: This makes the text bold
tags$div: This makes a division of text with a uniform style
tags$a: This links to a webpage
tags$iframe: This makes an inline frame for embedding possibilities

The following ui.R file and corresponding screenshot show the usage of Shiny tags by example:

shinyUI(fluidPage(
  fluidRow(
    column(6,
      tags$h3("Customize your app with Shiny tags!"),
      tags$hr(),
      tags$a(href = "http://www.rstudio.com", "Click me"),
      tags$hr()
    ),
    column(6,
      tags$br(),
      tags$em("Look - the R project logo"),
      tags$br(),
      tags$img(src = "http://www.r-project.org/Rlogo.png")
    )
  ),
  fluidRow(
    column(6,
      tags$strong("We can even add a video"),
      tags$video(src = "video.mp4", type = "video/mp4",
                 autoplay = NA, controls = NA)
    ),
    column(6,
      tags$br(),
      tags$ol(
        tags$li("One"),
        tags$li("Two"),
        tags$li("Three"))
    )
  )
))

Creating dynamic user interface elements

We know how to build completely custom user interfaces with all the bells and whistles. But all the introduced types of interface elements are fixed and static. However, if you need to create dynamic interface elements, Shiny offers three ways to achieve this:

The conditionalPanel() function
The renderUI() function
The use of directly injected JavaScript code

In the following section, we only show how to use the first two ways, because firstly, they are built into the Shiny package, and secondly, the JavaScript method is indicated as experimental.

Using conditionalPanel

The conditionalPanel() function allows you to show or hide interface elements dynamically, and it is set in the ui.R file. The dynamic behavior of this function is achieved by JavaScript expressions, but as usual in the Shiny package, all you need to know is R programming. The following example application shows how this function works in the ui.R file:

library(shiny)
shinyUI(fluidPage(
  titlePanel("Dynamic Interface With Conditional Panels"),
  column(4, wellPanel(
    sliderInput(inputId = "n",
                label = "Number of points:",
                min = 10, max = 200, value = 50, step = 10)
  )),
  column(5,
    "The plot below will be not displayed when the slider value",
    "is less than 50.",
    conditionalPanel("input.n >= 50",
      plotOutput("scatterPlot", height = 300)
    )
  )
))

The following example shows how this function works in the related server.R file:

library(shiny)
shinyServer(function(input, output) {
  output$scatterPlot <- renderPlot({
    x <- rnorm(input$n)
    y <- rnorm(input$n)
    plot(x, y)
  })
})

The code for this example application was taken from the Shiny gallery of RStudio (http://shiny.rstudio.com/gallery/conditionalpanel-demo.html). As you can read in both code files, the defined condition, input.n, is the linchpin for the dynamic behavior of the example app.
In the conditionalPanel() function, it is defined that inputId = "n" must have a value of 50 or higher, while the input and output of the plot work as already defined.

Taking advantage of the renderUI function

The renderUI() function, contrary to the previous model, is hooked into the server file to create a dynamic user interface. We have already introduced the different render output functions in this article. The following example code shows the basic functionality using the ui.R file:

# Partial example taken from the Shiny documentation
numericInput("lat", "Latitude"),
numericInput("long", "Longitude"),
uiOutput("cityControls")

The following example code shows the basic functionality of the related server.R file:

# Partial example
output$cityControls <- renderUI({
  cities <- getNearestCities(input$lat, input$long)
  checkboxGroupInput("cities", "Choose Cities", cities)
})

As described, the dynamic part of this method gets defined in the renderUI() process as an output, which then gets displayed through the uiOutput() function in the ui.R file.

Sharing your Shiny application with others

Typically, you create a Shiny application not only for yourself, but also for other users. There are two main ways to distribute your app: either you let users download your application, or you deploy it on the web.

Offering a download of your Shiny app

By offering the option to download your final Shiny application, other users can run your app locally. Actually, there are four ways to deliver your app this way. No matter which way you choose, it is important that the user has installed R and the Shiny package on his/her computer.

Gist

Gist is a public code-sharing pasteboard from GitHub. To share your app this way, it is important that both the ui.R file and the server.R file are in the same Gist and have been named correctly. Take a look at the following screenshot.

There are two options to run apps via Gist. First, just enter runGist("Gist_URL") in the console of RStudio; or second, just use the Gist ID and place it in the shiny::runGist("Gist_ID") function. Gist is a very easy way to share your application, but you need to keep in mind that your code is published on a third-party server.

GitHub

The next way to enable users to download your app is through a GitHub repository. To run an application from GitHub, you need to enter the command shiny::runGitHub("Repository_Name", "GitHub_Account_Name") in the console.

Zip file

There are two ways to share a Shiny application by zip file. You can either let the user download the zip file over the web, or you can share it via email, USB stick, memory card, or any other such device. To download a zip file via the web, you need to type runUrl("Zip_File_URL") in the console.

Package

Certainly, a much more labor-intensive but also publicly effective way is to create a complete R package for your Shiny application. This especially makes sense if you have built an extensive application that may help many other users. Another advantage is the fact that you can also publish your application on CRAN. Later in the book, we will show you how to create an R package.

Deploying your app to the web

After showing you the ways users can download your app and run it on their local machines, we will now check the options to deploy Shiny apps to the web.

Shinyapps.io

http://www.shinyapps.io/ is a Shiny app-hosting service by RStudio.
There is a free-to-use account package, but it is limited to a maximum of five applications and 25 so-called active hours, and the apps are branded with the RStudio logo. Nevertheless, this service is a great way to publish one's own applications quickly and easily to the web. To use http://www.shinyapps.io/ with RStudio, a few R packages and some additional operating system software need to be installed:

RTools (if you use Windows)
GCC (if you use Linux)
XCode Command Line Tools (if you use Mac OS X)
The devtools R package
The shinyapps package

Since the shinyapps package is not on CRAN, you need to install it via GitHub by using the devtools package:

if (!require("devtools")) install.packages("devtools")
devtools::install_github("rstudio/shinyapps")
library(shinyapps)

When everything that is needed is installed, you are ready to publish your Shiny apps directly from the RStudio IDE. Just click on the Publish icon, and in the new window you will need to log in to your http://www.shinyapps.io/ account once, if you are using it for the first time. All other times, you can directly create a new Shiny app or update an existing app.

After clicking on Publish, a new tab called Deploy opens in the console pane, showing you the progress of the deployment process. If something is set incorrectly, you can use the deployment log to find the error. When the deployment is successful, your app will be publicly reachable with its own web address on http://www.shinyapps.io/.

Setting up a self-hosted Shiny server

There are two editions of the Shiny Server software: an open source edition and the professional edition. The open source edition can be downloaded for free and you can use it on your own server. The Professional edition offers a lot more features and support by RStudio, but is also priced accordingly.

Diving into the Shiny ecosystem

Since the Shiny framework is such an awesome and powerful tool, a lot of people, and of course the creators of RStudio and Shiny, have built several packages around it that enormously extend its existing functionalities. These almost infinite possibilities of technical and visual individualization, which become possible by deeply checking the Shiny ecosystem, would certainly go beyond the scope of this article. Therefore, we are presenting only a few important directions to give a first impression.

Creating apps with more files

In this article, you have learned how to build a Shiny app consisting of two files: the server.R and the ui.R. To include every aspect, we first want to point out that it is also possible to create a single-file Shiny app. To do so, create a file called app.R. In this file, you can include both the server.R and the ui.R file. Furthermore, you can include global variables, data, and more. If you build larger Shiny apps with multiple functions, datasets, options, and more, it could be very confusing if you do all of it in just one file. Therefore, single-file Shiny apps are a good idea for simple and small exhibition apps with a minimal setup. Especially for large Shiny apps, it is recommended that you outsource extensive custom functions, datasets, images, and more into their own files, but put them into the same directory as the app. An example file setup could look like this:

~/shinyapp
|-- ui.R
|-- server.R
|-- helpers.R
|-- data
|-- www
|-- js
|-- etc

To access the helper file, you just need to add source("helpers.R") into the code of your server.R file. The same logic applies to any other R files.
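Here is a minimal sketch of what such an outsourced helper could look like; the file name helpers.R follows the setup above, while the function summariseVariable() and the statistics it returns are made up purely for illustration:

# helpers.R (hypothetical example)
summariseVariable <- function(data, variable) {
  # return a few key statistics for the chosen column
  c(mean = mean(data[, variable]), sd = sd(data[, variable]))
}

# server.R (excerpt)
library(shiny)
source("helpers.R")  # load the outsourced functions once at startup

shinyServer(function(input, output) {
  output$carsSummary <- renderPrint({
    summariseVariable(mtcars, input$variable)
  })
})

Because source("helpers.R") sits outside the shinyServer() call, it runs only once when the app is launched, exactly as described earlier for code outside the server function.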
If you want to read in some data from your data folder, you store it in a variable that also sits in the head of your server.R file, like this:

myData <- readRDS("data/myDataset.rds")

Expanding the Shiny package

As said earlier, you can expand the functionalities of Shiny with several add-on packages. There are currently ten packages available on CRAN with different inbuilt functions to add some extra magic to your Shiny app:

shinyAce: This package makes available Ace editor bindings to enable a rich text-editing environment within Shiny.
shinybootstrap2: The latest Shiny package uses bootstrap 3; so, if you built your app with bootstrap 2 features, you need to use this package.
shinyBS: This package adds the additional features of the original Twitter Bootstrap theme, such as tooltips, modals, and others, to Shiny.
shinydashboard: This package comes from the folks at RStudio and enables the user to create stunning and multifunctional dashboards on top of Shiny.
shinyFiles: This provides functionality for client-side navigation of the server-side file system in Shiny apps.
shinyjs: By using this package, you can perform common JavaScript operations in Shiny applications without having to know any JavaScript.
shinyRGL: This package provides Shiny wrappers for the RGL package. This package exposes RGL's ability to export WebGL visualization in a Shiny-friendly format.
shinystan: This package is, in fact, not a real add-on. Shinystan is a fantastic full-blown Shiny application that gives users a graphical interface for Markov chain Monte Carlo simulations.
shinythemes: This package gives you the option of changing the whole look and feel of your application by using different inbuilt bootstrap themes.
shinyTree: This exposes bindings to jsTree, a JavaScript library that supports interactive trees, to enable rich, editable trees in Shiny.

Of course, you can find a bunch of other packages with similar or even more functionalities, extensions, and also comprehensive Shiny apps on GitHub.

Summary

To learn more about Shiny, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Learning Shiny (https://www.packtpub.com/application-development/learning-shiny)
Mastering Machine Learning with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-machine-learning-r)
Mastering Data Analysis with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-data-analysis-r)
Exploring the Usages of Delphi

This article written by Daniele Teti, the author of Delphi Cookbook, explains the process of writing enumerable types. It also discusses the steps to customize FireMonkey controls. (For more resources related to this topic, see here.)

Writing enumerable types

When the for...in loop was introduced in Delphi 2005, the concept of enumerable types was also introduced into the Delphi language. As you know, there are some built-in enumerable types. However, you can create your own enumerable types using a very simple pattern. To make your container enumerable, implement a single method called GetEnumerator, which must return a reference to an object, interface, or record that implements the following methods and property (in the sample, the element to enumerate is TFoo):

   function GetCurrent: TFoo;
   function MoveNext: Boolean;
   property Current: TFoo read GetCurrent;

There are a lot of samples related to standard enumerable types, so in this recipe you'll look at some not-so-common utilizations.

Getting ready

In this recipe, you'll see a file enumerable function as it exists in other, mostly dynamic, languages. The goal is to enumerate all the rows in a text file without explicitly opening, reading, and closing the file, as shown in the following code:

var
  row: String;
begin
  for row in EachRows('....myfile.txt') do
    WriteLn(row);
end;

Nice, isn't it? Let's start…

How to do it...

We have to create an enumerable function result. The function simply returns the actual enumerable type. This type is not freed automatically by the compiler, so you have to use a value type or an interfaced type. For the sake of simplicity, let's code it to return a record type:

function EachRows(const AFileName: String): TFileEnumerable;
begin
  Result := TFileEnumerable.Create(AFileName);
end;

The TFileEnumerable type is defined as follows:

type
  TFileEnumerable = record
  private
    FFileName: string;
  public
    constructor Create(AFileName: String);
    function GetEnumerator: TEnumerator<String>;
  end;

. . .

constructor TFileEnumerable.Create(AFileName: String);
begin
  FFileName := AFileName;
end;

function TFileEnumerable.GetEnumerator: TEnumerator<String>;
begin
  Result := TFileEnumerator.Create(FFileName);
end;

No logic here; this record is required only because you need a type that has a GetEnumerator method defined. This method is called automatically by the compiler when the type is used on the right side of the for..in loop. An interesting thing happens in the TFileEnumerator type, the actual enumerator, declared in the implementation section of the unit. Remember, this object is automatically freed by the compiler because it is the return of the GetEnumerator call:

type
  TFileEnumerator = class(TEnumerator<String>)
  private
    FCurrent: String;
    FFile: TStreamReader;
  protected
    constructor Create(AFileName: String);
    destructor Destroy; override;
    function DoGetCurrent: String; override;
    function DoMoveNext: Boolean; override;
  end;

{ TFileEnumerator }

constructor TFileEnumerator.Create(AFileName: String);
begin
  inherited Create;
  FFile := TFile.OpenText(AFileName);
end;

destructor TFileEnumerator.Destroy;
begin
  FFile.Free;
  inherited;
end;

function TFileEnumerator.DoGetCurrent: String;
begin
  Result := FCurrent;
end;

function TFileEnumerator.DoMoveNext: Boolean;
begin
  Result := not FFile.EndOfStream;
  if Result then
    FCurrent := FFile.ReadLine;
end;

The enumerator inherits from TEnumerator<String> because each row of the file is represented as a string. This class also gives a mechanism to implement the required methods.
The DoGetCurrent method (called internally by the TEnumerator<T>.GetCurrent method) returns the current line. The DoMoveNext method (called internally by the TEnumerator<T>.MoveNext method) returns true or false depending on whether there are more lines to read in the file or not. Remember that this method is called before the first call to the GetCurrent method. After the first call to the DoMoveNext method, FCurrent is properly set to the first row of the file. The compiler generates a piece of code similar to the following pseudo code:

it = typetoenumerate.GetEnumerator;
while it.MoveNext do
begin
  S := it.Current;
  //do something useful with string S
end
it.free;

There's more…

Enumerable types are really powerful and help you to write less, and less error-prone, code. There are some shortcuts to iterate over in-place data without even creating an actual container. If you have a bunch of integers, or if you want to create a non-homogeneous for loop over some kind of data type, you can use the new TArray<T> type as shown here:

for i in TArray<Integer>.Create(2, 4, 8, 16) do
  WriteLn(i); //write 2 4 8 16

TArray<T> is a generic type, so the same works also for strings:

for s in TArray<String>.Create('Hello','Delphi','World') do
  WriteLn(s);

It can also be used for Plain Old Delphi Objects (PODO) or controls:

for btn in TArray<TButton>.Create(btn1, btn31, btn2) do
  btn.Enabled := false;

See also

http://docwiki.embarcadero.com/RADStudio/XE6/en/Declarations_and_Statements#Iteration_Over_Containers_Using_For_statements: This Embarcadero documentation will provide a detailed introduction to enumerable types

Giving a new appearance to the standard FireMonkey controls using styles

Since Version XE2, RAD Studio includes FireMonkey. FireMonkey is an amazing library. It is a really ambitious target for Embarcadero, but it's important for its long-term strategy. VCL is and will remain a Windows-only library, while FireMonkey has been designed to be completely OS and device independent. You can develop one application and compile it anywhere (if anywhere is contained in Windows, OS X, Android, and iOS; let's say that is a good part of anywhere).

Getting ready

A styled component doesn't know how it will be rendered on the screen; the style does. By changing the style, you can change the aspect of the component without changing its code. The relation between the component code and the style is similar to the relation between HTML and CSS: one is the content and the other is the display. In terms of FireMonkey, the component code contains the actual functionalities the component has, but the aspect is completely handled by the associated style. All the TStyledControl child classes support styles.

Let's say you have to create an application to find a holiday house for a travel agency. Your customer wants a nice-looking application their customers can use to search for their dream house. Your graphic design department (if present) has decided to create a semitransparent look and feel, as shown in the following screenshot, and you have to create such an interface. How do you do that?

This is the UI we want

How to do it…

In this case, you require some step-by-step instructions, so here they are:

Create a new FireMonkey desktop application (navigate to File | New | FireMonkey Desktop Application).

Drop a TImage component on the form. Set its Align property to alClient, and use the MultiResBitmap property and its property editor to load a nice-looking picture.

Set the WrapMode property to iwFit and resize the form to let the image cover the entire form.
Now, drop a TEdit component and a TListBox component over the TImage component. Name the TEdit component EditSearch and the TListBox component ListBoxHouses.

Set the Scale property of the TEdit and TListBox components to the following values:

Scale.X: 2
Scale.Y: 2

Your form should now look like this:

The form with the standard components

The actions to be performed by the users are very simple. They should write some search criteria in the Edit field and press Return. Then, the listbox shows all the houses available for that criteria (with a "contains" search). In a real app, you require a database or a web service to query, but this is a sample, so you'll use fake search criteria on fake data.

Add the RandomUtilsU.pas file from the Commons folder of the project to the uses clause of the main form.

Create an OnKeyUp event handler for the TEdit component and write the following code inside it:

procedure TForm1.EditSearchKeyUp(Sender: TObject;
  var Key: Word; var KeyChar: Char; Shift: TShiftState);
var
  I: Integer;
  House: string;
  SearchText: string;
begin
  if Key <> vkReturn then
    Exit;
  // this is a fake search...
  ListBoxHouses.Clear;
  SearchText := EditSearch.Text.ToUpper;
  //now, gets 50 random houses and match the criteria
  for I := 1 to 50 do
  begin
    House := GetRndHouse;
    if House.ToUpper.Contains(SearchText) then
      ListBoxHouses.Items.Add(House);
  end;
  if ListBoxHouses.Count > 0 then
    ListBoxHouses.ItemIndex := 0
  else
    ListBoxHouses.Items.Add('<Sorry, no houses found>');
  ListBoxHouses.SetFocus;
end;

Run the application and try it to familiarize yourself with the behavior. Now, you have a working application, but you still need to make it transparent. Let's start with the FireMonkey Style Designer (FSD). Just to be clear, at the time of writing, the FireMonkey Style Designer is far from perfect. It works, but it is not a pleasure to work with. However, it does its job.

Right-click on the TEdit component. From the contextual menu, choose Edit Custom Style (general information about styles and the style editor can be found at http://docwiki.embarcadero.com/RADStudio/XE6/en/FireMonkey_Style_Designer and http://docwiki.embarcadero.com/RADStudio/XE6/en/Editing_a_FireMonkey_Style).

Delphi opens a new tab that contains the FSD. However, to work with it, you need the Structure pane to be visible as well (navigate to View | Structure or press Shift + Alt + F11).

In the Structure pane, there are all the styles used by the TEdit control. You should see a Structure pane similar to the following screenshot:

The Structure pane showing the default style for the TEdit control

In the Structure pane, open the editsearchstyle1 node, select the background subnode, and go to the Object Inspector.

In the Object Inspector window, remove the content of the SourceLookup property. The background part of the style is a TActiveStyleObject. A TActiveStyleObject style is a style that is able to show a part of an image as default and another part of the same image when the component that uses it is active, checked, focused, mouse hovered, pressed, or selected. The image to use is in the SourceLookup property. Our TEdit component must be completely transparent in every state, so we removed the value of the SourceLookup property.

Now the TEdit component is completely invisible. Click on Apply and Close and run the application. As you can confirm, the edit works but it is completely transparent. Close the application.
When you opened the FSD for the first time, a TStyleBook component was automatically dropped on the form; it contains all your custom styles. Double-click on it and the style designer opens again. The edit, as you saw, is transparent, but it is not usable at all. You need to see at least where to click and write. Let's add a small bottom line to the edit style, just like a small underline.

To perform the next step, you require the Tool Palette window and the Structure pane to be visible. Here is my preferred setup for this situation:

The Structure pane and the Tool Palette window are visible at the same time using the docking mechanism; you can also use the floating windows if you wish

Now, search for a TLine component in the Tool Palette window. Drag and drop the TLine component onto the editsearchstyle1 node in the Structure pane. Yes, you have to drop a component from the Tool Palette window directly onto the Structure pane.

Now, select the TLine component in the Structure pane (do not use the FSD to select the components, you have to use the Structure pane nodes). In the Object Inspector, set the following properties:

Align: alContents
HitTest: False
LineType: ltTop
RotationAngle: 180
Opacity: 0.6

Click on Apply and Close. Run the application. Now, the text is underlined by a small black line that makes it easy to see that the application is transparent. Stop the application.

Now, you have to work on the listbox; it is still 100 percent opaque. Right-click on the ListBoxHouses option and click on Edit Custom Style.

In the Structure pane, there are some new styles related to the TListBox class. Select the listboxhousesstyle1 option, open it, and select its child style, background.

In the Object Inspector, change the Opacity property of the background style to 0.6.

Click on Apply and Close. That's it! Run the application, write Calif in the Edit field and press Return. You should see a nice-looking application with a semitransparent user interface showing your dream houses in California (just like it was shown in the screenshot in the Getting ready section of this recipe). Are you amazed by the power of FireMonkey styles?

How it works...

The trick used in this recipe is simple. If you require a transparent UI, just identify which part of the style of each component is responsible for drawing the background of the component. Then, set the Opacity to a level less than 1 (0.6 or 0.7 could be enough for most cases). Why not simply change the Opacity property of the component? Because if you change the Opacity property of the component, the whole component will be drawn with that opacity. However, you need only the background to be transparent; the inner text must be completely opaque. This is the reason why you changed the style and not the component property. In the case of the TEdit component, you completely removed the painting when you removed the SourceLookup property from the TActiveStyleObject that draws the background.

As a rule of thumb, if you have to change the appearance of a control, check its properties. If the required customization is not possible using only the properties, then change the style.

There's more…

If you are new to FireMonkey styles, probably most concepts in this recipe have been difficult to grasp.
If so, check the official documentation on the Embarcadero DocWiki at the following URL:

http://docwiki.embarcadero.com/RADStudio/XE6/en/Customizing_FireMonkey_Applications_with_Styles

Summary

In this article, we discussed ways to write enumerable types in Delphi. We also discussed how we can use styles to make our FireMonkey controls look better.

Resources for Article:

Further resources on this subject:

Adding Graphics to the Map [Article]
Application Performance [Article]
Coding for the Real-time Web [Article]

Queues and topics

In this article by Luca Stancapiano, the author of the book Mastering Java EE Development with WildFly, we will see how to implement Java Message Service (JMS) in a queue channel using the WildFly console. (For more resources related to this topic, see here.)

JMS works with channels of messages that manage the messages asynchronously. These channels contain messages that will be collected or removed according to the configuration and the type of channel. These channels are of two types: queues and topics. The channels are highly configurable through the WildFly console. As with all components in WildFly, they can be installed through the console, the command line, or directly with the Maven plugins of the project. In the next two paragraphs we will show what they mean and all the possible configurations.

Queues

Queues collect the sent messages that are waiting to be read. The messages are delivered in the order they are sent and, when read, are removed from the queue.

Create the queue from the web console

See now the steps to create a new queue through the web console. Connect to http://localhost:9990/. Go to Configuration | Subsystems/Messaging - ActiveMQ/default and click on Queues/Topics. Now select the Queues menu and click on the Add button. You will see this screen:

The parameters to insert are as follows:

Name: The name of the queue.
JNDI Names: The JNDI names the queue will be bound to.
Durable?: Whether the queue is durable or not.
Selector: The queue selector.

As for all enterprise components, JMS components are callable through the Java Naming and Directory Interface (JNDI). Durable queues keep messages around persistently for any suitable consumer to consume them. Durable queues do not need to concern themselves with which consumer is going to consume the messages at some point in the future. There is just one copy of a message that any consumer in the future can consume.

Message selectors allow you to filter the messages that a message consumer will receive. The filter is a relatively complex language similar to the syntax of an SQL WHERE clause. The selector can use all the message headers and properties for filtering operations, but cannot use the message content. Selectors are mostly useful for channels that broadcast a very large number of messages to their subscribers. On queues, only messages that match the selector will be returned. Others stay in the queue (and thus can be read by a MessageConsumer with a different selector). The following SQL elements are allowed in our filters and we can put them in the Selector field of the form:

Element | Description of the element | Example of selectors
AND, OR, NOT | Logical operators | (releaseYear < 1986) AND NOT (title = 'Bad')
String literals | String literals in single quotes, duplicate to escape | title = 'Tom''s'
Number literals | Numbers in Java syntax; they can be double or integer | releaseYear = 1982
Properties | Message properties that follow Java identifier naming | releaseYear = 1983
Boolean literals | TRUE and FALSE | isAvailable = FALSE
( ) | Round brackets | (releaseYear < 1981) OR (releaseYear > 1990)
BETWEEN | Checks whether a number is in a range (both numbers inclusive) | releaseYear BETWEEN 1980 AND 1989
Header fields | Any headers except JMSDestination, JMSExpiration, and JMSReplyTo | JMSPriority = 10
=, <>, <, <=, >, >= | Comparison operators | (releaseYear < 1986) AND (title <> 'Bad')
LIKE | String comparison with wildcards '_' and '%' | title LIKE 'Mirror%'
IN | Finds a value in a set of strings | title IN ('Piece of mind', 'Somewhere in time', 'Powerslave')
IS NULL, IS NOT NULL | Checks whether a value is null or not null | releaseYear IS NULL
*, +, -, / | Arithmetic operators | releaseYear * 2 > 2000 - 18

Fill the form now. In this article we will implement a messaging service to send the coordinates of the buses. The queue is created and shown in the queues list.

Create the queue using CLI and Maven WildFly plugin

The same thing can be done with the Command Line Interface (CLI). So start a WildFly instance, go to the bin directory of WildFly, and execute the following script:

bash-3.2$ ./jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect
[standalone@localhost:9990 /] /subsystem=messaging-activemq/server=default/jms-queue=gps_coordinates:add(entries=["java:/jms/queue/GPS"])
{"outcome" => "success"}

The same thing can be done through Maven. Simply add this snippet in your pom.xml:

<plugin>
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-maven-plugin</artifactId>
  <version>1.0.2.Final</version>
  <executions>
    <execution>
      <id>add-resources</id>
      <phase>install</phase>
      <goals>
        <goal>add-resource</goal>
      </goals>
      <configuration>
        <resources>
          <resource>
            <address>subsystem=messaging-activemq,server=default,jms-queue=gps_coordinates</address>
            <properties>
              <durable>true</durable>
              <entries>!!["gps_coordinates", "java:/jms/queue/GPS"]</entries>
            </properties>
          </resource>
        </resources>
      </configuration>
    </execution>
    <execution>
      <id>del-resources</id>
      <phase>clean</phase>
      <goals>
        <goal>undeploy</goal>
      </goals>
      <configuration>
        <afterDeployment>
          <commands>
            <command>/subsystem=messaging-activemq/server=default/jms-queue=gps_coordinates:remove</command>
          </commands>
        </afterDeployment>
      </configuration>
    </execution>
  </executions>
</plugin>

The Maven WildFly plugin lets you do admin operations in WildFly using the same custom protocol used by the command line. Two executions are configured:

add-resources: It hooks the install Maven phase and adds the queue, passing the name, JNDI, and durable parameters seen in the previous paragraph.
del-resources: It hooks the clean Maven phase and removes the chosen queue by name.

Create the queue through an Arquillian test case

Or we can add and remove the queue through an Arquillian test case:

@RunWith(Arquillian.class)
@ServerSetup(MessagingResourcesSetupTask.class)
public class MessageTestCase {
  ...
  private static final String QUEUE_NAME = "gps_coordinates";
  private static final String QUEUE_LOOKUP = "java:/jms/queue/GPS";

  static class MessagingResourcesSetupTask implements ServerSetupTask {

    @Override
    public void setup(ManagementClient managementClient, String containerId) throws Exception {
      getInstance(managementClient.getControllerClient()).createJmsQueue(QUEUE_NAME, QUEUE_LOOKUP);
    }

    @Override
    public void tearDown(ManagementClient managementClient, String containerId) throws Exception {
      getInstance(managementClient.getControllerClient()).removeJmsQueue(QUEUE_NAME);
    }
  }
  ...
}

The Arquillian org.jboss.as.arquillian.api.ServerSetup annotation lets you use an external setup manager to install or remove new components inside WildFly. In this case we are installing the queue declared with the two variables QUEUE_NAME and QUEUE_LOOKUP. When the test ends, the tearDown method will be started automatically and it will remove the installed queue. To use Arquillian, it's important to add the WildFly testsuite dependency to your pom.xml project:

...
<dependencies>
  <dependency>
    <groupId>org.wildfly</groupId>
    <artifactId>wildfly-testsuite-shared</artifactId>
    <version>10.1.0.Final</version>
    <scope>test</scope>
  </dependency>
</dependencies>
...

Going into the standalone-full.xml, we will find the created queue as:

<subsystem>
  <server name="default">
    ...
    <jms-queue name="gps_coordinates" entries="java:/jms/queue/GPS"/>
    ...
  </server>
</subsystem>

JMS is available in the standalone-full configuration. By default, WildFly supports 4 standalone configurations. They can be found in the standalone/configuration directory:

standalone.xml: It supports all components except messaging and corba/iiop
standalone-full.xml: It supports all components
standalone-ha.xml: It supports all components except messaging and corba/iiop, with clustering enabled
standalone-full-ha.xml: It supports all components, with clustering enabled

To start WildFly with the chosen configuration, simply add -c with the configuration to the standalone.sh script. Here is a sample to start the standalone full configuration:

./standalone.sh -c standalone-full.xml

Create the Java client for the queue

See now how to create a client to send a message to the queue. JMS 2.0 simplifies the creation of clients very much. Here is a sample of a client inside a stateless Enterprise Java Bean (EJB):

@Stateless
public class MessageQueueSender {

  @Inject
  private JMSContext context;

  @Resource(mappedName = "java:/jms/queue/GPS")
  private Queue queue;

  public void sendMessage(String message) {
    context.createProducer().send(queue, message);
  }
}

The javax.jms.JMSContext is injectable from any EE component. We will see the JMS context in detail in the next paragraph, The JMS Context. The queue is represented in JMS by the javax.jms.Queue class. It can be injected as a JNDI resource through the @Resource annotation. The JMS context, through the createProducer method, creates a producer represented by the javax.jms.JMSProducer class, which is used to send the messages. We can now create a client injecting the stateless bean and sending the string message hello!:

...
@EJB
private MessageQueueSender messageQueueSender;
...
messageQueueSender.sendMessage("hello!");

Summary

In this article we have seen how to implement Java Message Service in a queue channel using the web console, the Command Line Interface, Maven WildFly plugins, and Arquillian test cases, and how to create Java clients for the queue.
Resources for Article:

Further resources on this subject:

WildFly – the Basics [article]
WebSockets in Wildfly [article]
Creating Java EE Applications [article]