How-To Tutorials - Languages

135 Articles

Łukasz Langa at PyLondinium19: “If Python stays synonymous with CPython for too long, we’ll be in big trouble”

Sugandha Lahoti
13 Aug 2019
7 min read
PyLondinium, the conference for Python developers, was held in London from the 14th to the 16th of June, 2019. In the Sunday keynote, Łukasz Langa, the creator of the Black code formatter and the release manager for Python 3.8, spoke about where Python could be in 2020 and why Python developers should try new browser- and mobile-friendly versions of the language.

Python is an extremely expressive language, says Łukasz. "When I first started I was amazed how much you can accomplish with just a few lines of code, especially compared to Java. But there are still languages that are even more expressive and enable even more compact notation." So what makes Python special? Python is runnable pseudocode; it reads like English; it is very elegant. "Our responsibility as developers," Łukasz mentions, "is to make Python's runnable pseudocode convenient to use for new programmers."

Python has become much bigger, more stable, and more complex in the last decade. However, the lowest-hanging fruit, Łukasz says, has already been picked, and what's left is the maintenance of an increasingly fossilizing interpreter and a stunted standard library. This maintenance is both tedious and tricky, especially for a dynamic interpreted language like Python.

Python being a community-run project is both a blessing and a curse

Łukasz talks about how Python is the biggest community-run programming language on the planet. Other programming languages with similar or larger market penetration are run either by single corporations or by multiple committees. Being a community project is both a blessing and a curse for Python, says Łukasz. It's a blessing because it's truly free from shareholder pressure and market swings. It's a curse because almost the entire core developer team is volunteering their time and effort for free; the Python Software Foundation graciously funds infrastructure and events, but it does not currently employ any core developers. Since there is both "Python" and "software" right in the name of the foundation, Łukasz says he wants that to change. "If you don't pay people, you have no influence over what they work on. Core developers often choose problems to tackle based on what inspires them personally. So we never had an explicit roadmap on where Python should go and what problems core developers should focus on," he adds. Python is no longer governed by a BDFL, says Łukasz: "My personal hope is that the steering council will be providing visionary guidance from now on and will present us with an explicit roadmap on where we should go."

Interesting and dead projects in Python

Łukasz talked about mypyc and invited people to work on and contribute to the project, and organizations to sponsor it. Mypyc is a compiler that compiles mypy-annotated, statically typed Python modules into CPython C extensions. It restricts the Python language to enable compilation, so mypyc supports a subset of Python. He also mentioned MicroPython, a Kickstarter-funded subset of Python optimized to run on microcontrollers and other constrained environments. It is a compatible runtime for microcontrollers that have very little memory (16 kilobytes of RAM and 256 kilobytes of code memory) and minimal computing power; he also talks about micro:bit. He mentions many dead, dying, or defunct projects for alternative Python interpreters as well, including Unladen Swallow, Pyston, and IronPython. Finally, he talked about PyPy, the JIT Python compiler written in Python.
Łukasz mentions that since PyPy is written in Python 2, it is one of the most complex Python 2 applications in the industry. "This is at risk at the moment," says Łukasz, "since it's a large Python 2 codebase that needs updating to Python 3. Without a tremendous investment, it is very unlikely to ever migrate to Python 3." Also, trying to replicate CPython quirks and bugs requires a lot of effort.

Python should be aligned with where developer trends are shifting

Łukasz believes that a stronger division between the language and the reference implementation is important in the case of Python. He declared, "If Python stays synonymous with CPython for too long, we'll be in big trouble." This is because CPython is not available where developer trends are shifting. For the web, the lingua franca is now JavaScript. For the two biggest operating systems on mobile, there is Swift, the modern take on Objective-C, and Kotlin, the modern take on Java. For VR, AR, and 3D games, there is C#, provided by Unity. While Python is growing fast, it's not winning ground in two big areas: the browser and mobile. Python is also slowly losing ground in the field of systems orchestration, where Go is gaining traction. He adds, "if it were not for the rise of machine learning and artificial intelligence, Python would not have survived the transition between Python 2 and Python 3."

Łukasz mentions that providing a clear, supported, and official option for the client-side web is what Python needs in order to satisfy the legion of people that want to use it. He says, "For Python the programming language to reach new heights, we need a new kind of Python. One that caters to where developer trends are shifting - mobile, web, VR, AR, and 3D games. There should be more projects experimenting with Python for these platforms. This especially means trying restricted versions of the language, because they are easier to optimize."

We need a Python compiler for the web and Python on mobile

Łukasz talked about the need to follow where developer trends are shifting. He says we need a Python compiler for the web - something that compiles your Python code to the web platform directly. He also adds that, to be viable for professional production use, Python on the web must not be orders of magnitude slower than the default option (JavaScript), which is already better supported and has better documentation and training. Similarly, for mobile he wants a small Python runtime so that apps run fast and have quick user interactions. He gives the example of the Go programming language, stating how "one of Go's claims to fame is the fact that they shipped static binaries, so you only have one file. You can choose to still use containers, but it's not necessary; you don't have virtualenvs, you don't have pip installs, and you don't have environments that you have to orchestrate."

Łukasz further adds that the areas of modern focus where Python currently has no penetration don't require full compatibility with CPython. Starting out with a familiar subset of Python that looks like Python to the user would simplify the development of a new runtime or compiler a lot, and would potentially even fit the target platform better.

What if I want to work on CPython?

Łukasz says that developers can still work on CPython if they want to. "I'm not saying that CPython is a dead end; it will forever be an important runtime for Python. New people are still both welcome and needed, in fact.
However, working on CPython today is different from working on it ten years ago; the runtime is mission-critical in many industries, which is why developers must be extremely careful." Łukasz sums up his talk by declaring, "I strongly believe that enabling Python on new platforms is an important job. I'm not saying Python as the entire programming language should just abandon what it is now. I would prefer for us to be able to keep Python exactly as it is and just move it to all new platforms. However, that is not possible without multi-million dollar investments over many years."

The talk was well appreciated by Twitter users, with people lauding it as 'fantastic' and 'enlightening'.

https://twitter.com/WillingCarol/status/1156411772472971264
https://twitter.com/freakboy3742/status/1156365742435995648
https://twitter.com/jezdez/status/1156584209366081536

You can watch the full keynote on YouTube.

Read next:
NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
Python 3.8 new features: the walrus operator, positional-only parameters, and much more
Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust


How to boost R codes using C++ and Fortran

Pravin Dhandre
18 Apr 2018
16 min read
Sometimes, R code just isn't fast enough. Maybe you've used profiling to figure out where your bottlenecks are, and you've done everything you can think of within R, but your code still isn't fast enough. In those cases, a useful alternative can be to delegate some parts of the implementation to Fortran or C++. This is an advanced technique that can often prove quite useful if you know how to program in those languages. In today's tutorial, we will explore techniques to boost R code and calculations using efficient languages like Fortran and C++.

Delegating code to other languages can address bottlenecks such as the following:

- Loops that can't be easily vectorized due to iteration dependencies
- Processes that involve calling functions millions of times
- Inefficient but necessary data structures that are slow in R

Delegating code to other languages can provide great performance benefits, but it also incurs the cost of being more explicit and careful with the types of objects that are being moved around. In R, you can get away with simple things such as being imprecise about whether a number is an integer or a real. In these other languages, you can't; every object must have a precise type, and it remains fixed for the entire execution. The first kind of bottleneck above is sketched below.
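To make that first kind of bottleneck concrete, here is a minimal R sketch (the function name and the 0.5 decay factor are illustrative, not from the book): each element depends on the previous one, so the loop cannot simply be replaced by a vectorized expression.

# An iteration dependency: each result depends on the previous result,
# so this loop cannot be directly vectorized in R.
decayed_sum <- function(x) {
    result <- numeric(length(x))
    result[1] <- x[1]
    for (i in seq_along(x)[-1]) {
        result[i] <- 0.5 * result[i - 1] + x[i]
    }
    result
}

decayed_sum(c(1, 2, 3, 4))  # 1.000 2.500 4.250 6.125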
Boost R code using an old-school approach with Fortran

We will start with an old-school approach using Fortran. If you are not familiar with it, Fortran is the oldest programming language still in use today. It was designed to perform lots of calculations very efficiently and with very few resources. A lot of numerical libraries have been developed with it, and many high-performance systems still use it today, either directly or indirectly.

Here's our implementation, named sma_fortran(). The syntax may throw you off if you're not used to working with Fortran code, but it's simple enough to understand. First, note that to define a function, technically known as a subroutine in Fortran, we use the subroutine keyword before the name of the function. As our previous implementations do, it receives the period and the data (we use the name dataa, with an extra a at the end, because data is a reserved keyword in Fortran), and we will assume that the data is already filtered for the correct symbol at this point.

Next, note that we are sending new arguments that we did not send before, namely smas and n. Fortran is a peculiar language in the sense that it does not return values; it uses side effects instead. This means that instead of expecting something back from a call to a Fortran subroutine, we should expect that subroutine to change one of the objects that was passed to it, and we should treat that as our return value. In this case, smas fulfills that role; initially, it will be sent as an array of undefined real values, and the objective is to modify its contents with the appropriate SMA values. Finally, n represents the number of elements in the data we send. Classic Fortran doesn't have a way to determine the size of an array being passed to it, so we need to specify the size manually; that's why we need to send n. In reality, there are ways to work around this, but since this is not a tutorial about Fortran, we will keep the code as simple as possible.

Next, note that we need to declare the type of the objects we're dealing with, as well as their size in case they are arrays. We proceed to declare pos (which takes the place of position in our previous implementation, because Fortran imposes a limit on the length of each line, which we don't want to violate), n, endd (again, end is a keyword in Fortran, so we use the name endd instead), and period as integers. We also declare dataa(n), smas(n), and sma as reals, because they will contain decimal parts. Note that we specify the size of an array with the (n) part in the first two objects.

Once we have declared everything we will use, we proceed with our logic. We first create a for loop, which is done with the do keyword in Fortran, followed by a unique identifier (these are normally named with multiples of tens or hundreds), the variable name that will be used to iterate, and the values it will take (1 to n for endd, in this case). Within the for loop, we assign pos to be equal to endd and sma to be equal to 0, just as we did in some of our previous implementations. Next, we create a while loop with the do...while keyword combination, and we provide the condition that should be checked to decide when to break out of it. Note that Fortran uses a very different syntax for the comparison operators. Specifically, the .lt. operator stands for less-than, while the .ge. operator stands for greater-than-or-equal-to. If either of the two conditions specified is not met, we exit the while loop.

Having said that, the rest of the code should be self-explanatory. The only other uncommon syntax property is that the code is indented to the sixth position. This indentation has meaning within Fortran, and it should be kept as it is. Also, the number IDs provided in the first columns of the code should match the corresponding looping mechanisms, and they should be kept toward the left of the logic code. For a good introduction to Fortran, you may take a look at Stanford's Fortran 77 Tutorial (https://web.stanford.edu/class/me200c/tutorial_77/). You should know that there are various Fortran versions, and the 77 version is one of the oldest ones. However, it's also one of the better supported ones:

      subroutine sma_fortran(period, dataa, smas, n)
      integer pos, n, endd, period
      real dataa(n), smas(n), sma
      do 10 endd = 1, n
          pos = endd
          sma = 0.0
          do 20 while ((endd - pos .lt. period) .and. (pos .ge. 1))
              sma = sma + dataa(pos)
              pos = pos - 1
   20     end do
          if (endd - pos .eq. period) then
              sma = sma / period
          else
              sma = 0
          end if
          smas(endd) = sma
   10 continue
      end

Once your code is finished, you need to compile it before it can be executed within R. Compilation is the process of translating code into machine-level instructions. You have two options when compiling Fortran code: you can either do it manually outside of R, or you can do it within R. The second option is recommended, since you can take advantage of R's tools for doing so. However, we show both of them. The first one can be achieved with the following command:

$ gfortran -c sma-delegated-fortran.f -o sma-delegated-fortran.so

This command should be executed in a Bash terminal (found in Linux or Mac operating systems). We must ensure that we have the gfortran compiler installed, which was probably installed when R was. Then, we call it, telling it to compile (using the -c option) the sma-delegated-fortran.f file (which contains the Fortran code shown before) and to provide an output file (with the -o option) named sma-delegated-fortran.so. Our objective is to get this .so file, which is what we need within R to execute the Fortran code.
The way to compile within R, which is the recommended way, is to use the following line:

system("R CMD SHLIB sma-delegated-fortran.f")

It basically tells R to execute the command that produces a shared library derived from the sma-delegated-fortran.f file. Note that the system() function simply sends the string it receives to a terminal in the operating system, which means you could have used that same command in the Bash terminal used to compile the code manually.

To load the shared library into R's memory, we use the dyn.load() function, providing the location of the .so file we want to use, and to actually call the shared library that contains the Fortran implementation, we use the .Fortran() function. This function requires type checking and coercion to be explicitly performed by the user before calling it. To provide a signature similar to the one provided by the previous functions, we will create a function named sma_delegated_fortran(), which receives the period, symbol, and data parameters as we did before, filters the data as we did earlier, calculates the length of the data and puts it in n, and uses the .Fortran() function to call the sma_fortran() subroutine, providing the appropriate parameters. Note that we're wrapping the parameters in functions that coerce their types as required by our Fortran code. The results list created by the .Fortran() function contains the period, dataa, smas, and n objects, corresponding to the parameters sent to the subroutine, with the contents left in them after the subroutine was executed. As we mentioned earlier, we are interested in the contents of the smas object, since they contain the values we're looking for. That's why we send only that part back, after converting it to a numeric type within R.

The transformations you see before sending objects to Fortran, and after getting them back, are something you need to be very careful with. For example, if instead of using single(n) and as.single(data) we use double(n) and as.double(data), our Fortran implementation will not work. This is something that can be ignored within R, but it can't be ignored in the case of Fortran:

system("R CMD SHLIB sma-delegated-fortran.f")
dyn.load("sma-delegated-fortran.so")

sma_delegated_fortran <- function(period, symbol, data) {
    data <- data[which(data$symbol == symbol), "price_usd"]
    n <- length(data)
    results <- .Fortran(
        "sma_fortran",
        period = as.integer(period),
        dataa = as.single(data),
        smas = single(n),
        n = as.integer(n)
    )
    return(as.numeric(results$smas))
}

Just as we did earlier, we benchmark and test for correctness:

performance <- microbenchmark(
    sma_12 <- sma_delegated_fortran(period, symbol, data),
    unit = "us"
)

all(sma_1$sma - sma_12 <= 0.001, na.rm = TRUE)
#> TRUE

summary(performance)$median

In this case, our median time is 148.0335 microseconds, making this the fastest implementation up to this point. Note that it's barely over half the time of the most efficient implementation we were able to come up with using only R.

Boost R code using a modern approach with C++

Now, we will show you how to use a more modern approach with C++. The aim of this section is to provide just enough information for you to start experimenting with C++ within R on your own. We will only look at a tiny piece of what can be done by interfacing R with C++ through the Rcpp package (which is installed by default in R), but it should be enough to get you started.
If you have never heard of C++, it's a language used mostly when resource restrictions play an important role and performance optimization is of paramount importance. Some good resources to learn more about C++ are Scott Meyers' books on the topic, a popular one being Effective C++ (Addison-Wesley, 2005); specifically for the Rcpp package, Eddelbuettel's Seamless R and C++ Integration with Rcpp (Springer, 2013) is great.

Before we continue, you need to ensure that you have a C++ compiler on your system. On Linux, you should be able to use gcc. On Mac, you should install Xcode from the App Store. On Windows, you should install Rtools. Once you test your compiler and know that it's working, you should be able to follow this section. We cover more on how to do this in the appendix, Required Packages.

C++ is more readable than Fortran code because it follows more of the syntax conventions we're used to nowadays. However, just because the example we will use is readable, don't think that C++ in general is an easy language to use; it's not. It's a very low-level language, and using it correctly requires a good amount of knowledge. Having said that, let's begin.

The #include <Rcpp.h> line is used to bring variable and function definitions from R into this file when it's compiled. Literally, the contents of the Rcpp.h file are pasted right where the include statement is. Files ending with the .h extension are called header files, and they are used to provide common definitions between a code's users and its developers. The using namespace Rcpp line allows you to use shorter names for your functions. Instead of having to specify Rcpp::NumericVector, we can simply use NumericVector to define the type of the data object. Doing so in this example may not be too beneficial, but when you start developing complex C++ code, it will really come in handy.

Next, you will notice the // [[Rcpp::export(sma_cpp)]] tag. It marks the function right below it so that R knows it should import it and make it available within R code. The argument sent to export() is the name under which the function will be accessible within R, and it does not necessarily have to match the name of the function in C++. (The original excerpt exported this function under the same name as the R wrapper shown further below, which would clash; exporting it as sma_cpp keeps the code consistent with the wrapper and the benchmark.) Here, the smaDelegated() C++ function becomes sma_cpp() within R, and sma_delegated_cpp(), defined later, wraps it:

#include <Rcpp.h>

using namespace Rcpp;

// [[Rcpp::export(sma_cpp)]]
NumericVector smaDelegated(int period, NumericVector data) {
    int position, n = data.size();
    NumericVector result(n);
    double sma;
    for (int end = 0; end < n; end++) {
        position = end;
        sma = 0;
        while (end - position < period && position >= 0) {
            sma = sma + data[position];
            position = position - 1;
        }
        if (end - position == period) {
            sma = sma / period;
        } else {
            sma = NA_REAL;
        }
        result[end] = sma;
    }
    return result;
}

Next, we will explain the actual smaDelegated() function. Since you have a good idea of what it's doing at this point, we won't explain its logic, only the syntax that is not so obvious. The first thing to note is that the function name has a keyword before it, which is the type of the function's return value. In this case, it's NumericVector, which is provided in the Rcpp.h file. This is an object designed to interface vectors between R and C++. Other types of vector provided by Rcpp are IntegerVector, LogicalVector, and CharacterVector. You also have IntegerMatrix, NumericMatrix, LogicalMatrix, and CharacterMatrix available.
Next, you should note that the parameters received by the function also have types associated with them. Specifically, period is an integer (int), and data is a NumericVector, just like the output of the function. In this case, we did not have to pass the output or length objects as we did with Fortran: functions in C++ do have return values, and C++ provides an easy enough way of computing the length of objects. The first line in the function declares the variables position and n, and assigns the length of the data to the latter. You may use commas, as we do, to declare various objects of the same type one after another, instead of splitting the declarations and assignments onto their own lines. We also declare the vector result with length n; note that this notation is similar to Fortran's. Finally, instead of using the real keyword as we do in Fortran, we use the float or double keyword here to denote such numbers. Technically, there's a difference in the precision allowed by these keywords, and they are not interchangeable, but we won't worry about that here.

The rest of the function should be clear, except for maybe the sma = NA_REAL assignment. This NA_REAL object is also provided by Rcpp as a way to denote what should be sent to R as an NA. Everything else should seem familiar.

Now that our function is ready, we save it in a file called sma-delegated-cpp.cpp and use R's sourceCpp() function to compile it for us and bring it into R. The .cpp extension denotes contents written in the C++ language. Keep in mind that functions brought into R from C++ files cannot be saved in a .Rdata file for a later session. The nature of C++ is to be very dependent on the hardware under which it's compiled, and doing so will probably produce various errors for you. Every time you want to use a C++ function, you should compile it and load it with the sourceCpp() function at the moment of usage:

library(Rcpp)
sourceCpp("./sma-delegated-cpp.cpp")

sma_delegated_cpp <- function(period, symbol, data) {
    data <- as.numeric(data[which(data$symbol == symbol), "price_usd"])
    return(sma_cpp(period, data))
}

If everything worked fine, our function should be usable within R, so we benchmark and test for correctness. I promise this is the last one:

performance <- microbenchmark(
    sma_13 <- sma_delegated_cpp(period, symbol, data),
    unit = "us"
)

all(sma_1$sma - sma_13 <= 0.001, na.rm = TRUE)
#> TRUE

summary(performance)$median
#> [1] 80.6415

This time, our median time was 80.6415 microseconds, which is three orders of magnitude faster than our first implementation. Think about it this way: if you provided an input for sma_delegated_cpp() such that it took around one hour to execute, sma_slow_1() would take around 1,000 hours, which is roughly 41 days. Isn't that a surprising difference? When you are in situations that take that much execution time, it's definitely worth trying to make your implementations as optimized as possible.

You may use the cppFunction() function to write your C++ code directly inside an .R file, but you should not do so; keep that just for testing small pieces of code. Separating your C++ implementation into its own files allows you to use the power of your editor of choice (or IDE) to guide you through development, as well as to perform deeper syntax checks for you.
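For completeness, here is a minimal sketch of what such a quick cppFunction() experiment can look like (sumC() is an illustrative toy, not part of the book's SMA code):

library(Rcpp)

# Compile a throwaway C++ function straight from an R string;
# handy for experiments, not for production code.
cppFunction("
double sumC(NumericVector x) {
    double total = 0;
    for (int i = 0; i < x.size(); i++) {
        total += x[i];
    }
    return total;
}
")

sumC(c(1, 2, 3.5))
#> [1] 6.5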
You read an excerpt from R Programming By Example, authored by Omar Trejo Navarro. The book provides a step-by-step guide to building simple-to-advanced applications through examples in R using modern tools.

Read next:
Getting Inside a C++ Multithreaded Application
Understanding the Dependencies of a C++ Application


Exploring the Usages of Delphi

Packt
24 Sep 2014
12 min read
This article, written by Daniele Teti, the author of Delphi Cookbook, explains the process of writing enumerable types. It also discusses the steps to customize FireMonkey controls. (For more resources related to this topic, see here.)

Writing enumerable types

When the for...in loop was introduced in Delphi 2005, the concept of enumerable types was also introduced into the Delphi language. As you know, there are some built-in enumerable types, but you can create your own using a very simple pattern. To make your container enumerable, implement a single method called GetEnumerator, which must return a reference to an object, interface, or record that implements the following three methods and one property (in the sample, the element to enumerate is TFoo):

function GetCurrent: TFoo;
function MoveNext: Boolean;
property Current: TFoo read GetCurrent;

There are a lot of samples related to standard enumerable types, so in this recipe you'll look at some not-so-common utilizations.

Getting ready

In this recipe, you'll see a file-enumerating function as it exists in other, mostly dynamic, languages. The goal is to enumerate all the rows in a text file without actually opening, reading, and closing the file yourself, as shown in the following code:

var
  row: String;
begin
  for row in EachRows('....myfile.txt') do
    WriteLn(row);
end;

Nice, isn't it? Let's start...

How to do it...

We have to create an enumerable function result. The function simply returns the actual enumerable type. This type is not freed automatically by the compiler, so you have to use a value type or an interfaced type. For the sake of simplicity, let's code it to return a record type:

function EachRows(const AFileName: String): TFileEnumerable;
begin
  Result := TFileEnumerable.Create(AFileName);
end;

The TFileEnumerable type is defined as follows:

type
  TFileEnumerable = record
  private
    FFileName: string;
  public
    constructor Create(AFileName: String);
    function GetEnumerator: TEnumerator<String>;
  end;

. . .

constructor TFileEnumerable.Create(AFileName: String);
begin
  FFileName := AFileName;
end;

function TFileEnumerable.GetEnumerator: TEnumerator<String>;
begin
  Result := TFileEnumerator.Create(FFileName);
end;

No logic here; this record is required only because you need a type that has a GetEnumerator method defined. This method is called automatically by the compiler when the type is used on the right side of a for..in loop. The interesting things happen in the TFileEnumerator type, the actual enumerator, declared in the implementation section of the unit. Remember, this object is automatically freed by the compiler because it is the return value of the GetEnumerator call:

type
  TFileEnumerator = class(TEnumerator<String>)
  private
    FCurrent: String;
    FFile: TStreamReader;
  protected
    constructor Create(AFileName: String);
    destructor Destroy; override;
    function DoGetCurrent: String; override;
    function DoMoveNext: Boolean; override;
  end;

{ TFileEnumerator }

constructor TFileEnumerator.Create(AFileName: String);
begin
  inherited Create;
  FFile := TFile.OpenText(AFileName);
end;

destructor TFileEnumerator.Destroy;
begin
  FFile.Free;
  inherited;
end;

function TFileEnumerator.DoGetCurrent: String;
begin
  Result := FCurrent;
end;

function TFileEnumerator.DoMoveNext: Boolean;
begin
  Result := not FFile.EndOfStream;
  if Result then
    FCurrent := FFile.ReadLine;
end;

The enumerator inherits from TEnumerator<String> because each row of the file is represented as a string. This class also provides a mechanism for implementing the required methods.
The DoGetCurrent method (called internally by the TEnumerator<T>.GetCurrent method) returns the current line. The DoMoveNext method (called internally by the TEnumerator<T>.MoveNext method) returns True or False depending on whether there are more lines to read in the file. Remember that this method is called before the first call to the GetCurrent method; after the first call to DoMoveNext, FCurrent is properly set to the first row of the file. The compiler generates a piece of code similar to the following pseudocode:

it := typetoenumerate.GetEnumerator;
while it.MoveNext do
begin
  S := it.Current;
  // do something useful with string S
end;
it.Free;

There's more...

Enumerable types are really powerful and help you to write less, and less error-prone, code. There are some shortcuts to iterate over in-place data without even creating an actual container. If you have a bunch of integers, or if you want to create an ad hoc for loop over some kind of data type, you can use the new TArray<T> type, as shown here:

for i in TArray<Integer>.Create(2, 4, 8, 16) do
  WriteLn(i); // writes 2 4 8 16

TArray<T> is a generic type, so the same works also for strings:

for s in TArray<String>.Create('Hello', 'Delphi', 'World') do
  WriteLn(s);

It can also be used for Plain Old Delphi Objects (PODOs) or controls:

for btn in TArray<TButton>.Create(btn1, btn31, btn2) do
  btn.Enabled := False;

See also

http://docwiki.embarcadero.com/RADStudio/XE6/en/Declarations_and_Statements#Iteration_Over_Containers_Using_For_statements: This Embarcadero documentation provides a detailed introduction to enumerable types.
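As another illustration of the same pattern (not part of the original recipe; the TRange, TRangeEnumerator, and Range names are invented for this sketch), a record-based enumerator can expose a numeric range to for...in without allocating any container or class. Because both types are records, nothing needs to be freed, which is exactly why the recipe recommends a value type or an interfaced type for the enumerable result:

type
  TRangeEnumerator = record
  private
    FCurrent, FLast: Integer;
  public
    function MoveNext: Boolean;
    property Current: Integer read FCurrent;
  end;

  TRange = record
  private
    FFirst, FLast: Integer;
  public
    function GetEnumerator: TRangeEnumerator;
  end;

function Range(AFirst, ALast: Integer): TRange;
begin
  Result.FFirst := AFirst;
  Result.FLast := ALast;
end;

function TRange.GetEnumerator: TRangeEnumerator;
begin
  // Start one step before the first value; the compiler calls
  // MoveNext before reading Current.
  Result.FCurrent := FFirst - 1;
  Result.FLast := FLast;
end;

function TRangeEnumerator.MoveNext: Boolean;
begin
  Inc(FCurrent);
  Result := FCurrent <= FLast;
end;

// Usage:
//   for i in Range(1, 5) do
//     WriteLn(i); // writes 1 2 3 4 5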
Giving a new appearance to the standard FireMonkey controls using styles

Since version XE2, RAD Studio has included FireMonkey. FireMonkey is an amazing library. It is a really ambitious target for Embarcadero, but it's important for its long-term strategy. VCL is and will remain a Windows-only library, while FireMonkey has been designed to be completely OS- and device-independent. You can develop one application and compile it anywhere (if "anywhere" is contained in Windows, OS X, Android, and iOS; let's say that is a good part of anywhere).

Getting ready

A styled component doesn't know how it will be rendered on the screen; the style does. By changing the style, you can change the appearance of a component without changing its code. The relation between the component code and the style is similar to the relation between HTML and CSS: one is the content and the other is the presentation. In FireMonkey terms, the component code contains the actual functionality the component has, but the appearance is completely handled by the associated style. All the TStyledControl child classes support styles.

Let's say you have to create an application to find a holiday house for a travel agency. Your customer wants a nice-looking application to search for the dream houses of their customers. Your graphic design department (if present) has decided to create a semitransparent look and feel, and you have to create such an interface. How to do that? (The target UI was shown in a screenshot in the original article.)

How to do it...

In this case, you require some step-by-step instructions, so here they are:

1. Create a new FireMonkey desktop application (navigate to File | New | FireMonkey Desktop Application).
2. Drop a TImage component on the form. Set its Align property to alClient, and use the MultiResBitmap property and its property editor to load a nice-looking picture.
3. Set the WrapMode property to iwFit and resize the form to let the image cover the entire form.
4. Now, drop a TEdit component and a TListBox component over the TImage component. Name the TEdit component EditSearch and the TListBox component ListBoxHouses.
5. Set the Scale property of the TEdit and TListBox components to the following values: Scale.X: 2, Scale.Y: 2. (The form at this point was shown in a screenshot captioned "The form with the standard components.")

The actions to be performed by the users are very simple. They should write some search criteria in the edit field and press Return. Then, the listbox shows all the houses available for those criteria (with a "contains" search). In a real app, you would require a database or a web service to query, but this is a sample, so you'll use fake search criteria on fake data.

6. Add the RandomUtilsU.pas file from the Commons folder of the project and add it to the uses clause of the main form.
7. Create an OnKeyUp event handler for the TEdit component and write the following code inside it:

procedure TForm1.EditSearchKeyUp(Sender: TObject;
  var Key: Word; var KeyChar: Char; Shift: TShiftState);
var
  I: Integer;
  House: string;
  SearchText: string;
begin
  if Key <> vkReturn then
    Exit;
  // this is a fake search...
  ListBoxHouses.Clear;
  SearchText := EditSearch.Text.ToUpper;
  // now, get 50 random houses and match the criteria
  for I := 1 to 50 do
  begin
    House := GetRndHouse;
    if House.ToUpper.Contains(SearchText) then
      ListBoxHouses.Items.Add(House);
  end;
  if ListBoxHouses.Count > 0 then
    ListBoxHouses.ItemIndex := 0
  else
    ListBoxHouses.Items.Add('<Sorry, no houses found>');
  ListBoxHouses.SetFocus;
end;

8. Run the application and try it to familiarize yourself with the behavior.

Now you have a working application, but you still need to make it transparent. Let's start with the FireMonkey Style Designer (FSD). Just to be clear: at the time of writing, the FireMonkey Style Designer is far from perfect. It works, but it is not a pleasure to work with. However, it does its job.

9. Right-click on the TEdit component. From the contextual menu, choose Edit Custom Style (general information about styles and the style editor can be found at http://docwiki.embarcadero.com/RADStudio/XE6/en/FireMonkey_Style_Designer and http://docwiki.embarcadero.com/RADStudio/XE6/en/Editing_a_FireMonkey_Style). Delphi opens a new tab that contains the FSD. However, to work with it, you need the Structure pane to be visible as well (navigate to View | Structure or press Shift + Alt + F11). The Structure pane lists all the styles used by the TEdit control (the original article shows it displaying the default style for the TEdit control).
10. In the Structure pane, open the editsearchstyle1 node, select the background subnode, and go to the Object Inspector. In the Object Inspector window, remove the content of the SourceLookup property. The background part of the style is a TActiveStyleObject, a style object that can show one part of an image by default and another part of the same image when the component that uses it is active, checked, focused, hovered over by the mouse, pressed, or selected. The image to use is in the SourceLookup property. Our TEdit component must be completely transparent in every state, so we remove the value of the SourceLookup property.

Now the TEdit component is completely invisible. Click on Apply and Close and run the application. As you can confirm, the edit works, but it is completely transparent. Close the application.
When you opened the FSD for the first time, a TStyleBook component was automatically dropped on the form; it contains all your custom styles. Double-click on it and the style designer opens again. The edit, as you saw, is transparent, but it is not usable at all. You need to see at least where to click and write. Let's add a small line to the bottom of the edit style, just like a small underline. To perform the next steps, you need the Tool Palette window and the Structure pane visible at the same time, either using the docking mechanism or as floating windows.

11. Search for a TLine component in the Tool Palette window. Drag and drop the TLine component onto the editsearchstyle1 node in the Structure pane. Yes, you have to drop a component from the Tool Palette window directly onto the Structure pane.
12. Select the TLine component in the Structure pane (do not use the FSD to select components; you have to use the Structure pane nodes). In the Object Inspector, set the following properties: Align: alContents, HitTest: False, LineType: ltTop, RotationAngle: 180, Opacity: 0.6.
13. Click on Apply and Close. Run the application. Now the text is underlined by a small black line that makes it easy to see that the application is transparent. Stop the application.
14. Now you have to work on the listbox; it is still 100 percent opaque. Right-click on the ListBoxHouses component and click on Edit Custom Style. In the Structure pane, there are some new styles related to the TListBox class. Select the listboxhousesstyle1 node, open it, and select its child style, background. In the Object Inspector, change the Opacity property of the background style to 0.6.
15. Click on Apply and Close. That's it! Run the application, write Calif in the edit field, and press Return. You should see a nice-looking application with a semitransparent user interface showing your dream houses in California (just as shown in the screenshot in the Getting ready section of this recipe). Are you amazed by the power of FireMonkey styles?

How it works...

The trick used in this recipe is simple. If you require a transparent UI, just identify which part of the style of each component is responsible for drawing its background. Then set the Opacity property of that part to a level less than 1 (0.6 or 0.7 should be enough for most cases). Why not simply change the Opacity property of the component? Because if you change the Opacity property of the component, the whole component is drawn with that opacity; however, you need only the background to be transparent, while the inner text must remain completely opaque. This is the reason why you changed the style and not the component property. In the case of the TEdit component, you completely removed the background painting when you removed the SourceLookup property from the TActiveStyleObject that draws it.

As a rule of thumb, if you have to change the appearance of a control, check its properties first. If the required customization is not possible using the properties alone, change the style.

There's more...

If you are new to FireMonkey styles, most concepts in this recipe were probably difficult to grasp.
If so, check the official documentation on the Embarcadero DocWiki at the following URL: http://docwiki.embarcadero.com/RADStudio/XE6/en/Customizing_FireMonkey_Applications_with_Styles

Summary

In this article, we discussed ways to write enumerable types in Delphi. We also discussed how we can use styles to make our FireMonkey controls look better.

Further resources on this subject:
Adding Graphics to the Map
Application Performance
Coding for the Real-time Web


Dependency Management in SBT

Packt
07 Oct 2013
17 min read
(For more resources related to this topic, see here.)

In the early days of Java, when projects were small and didn't have many external dependencies, developers tended to manage dependencies manually, by copying the required JAR files into the lib folder and checking them into their SCM/VCS along with the code. A lot of developers still follow this practice today, but due to the aforementioned issues, it is not an option for larger projects. In many enterprises, there are central servers, FTP servers, shared drives, and so on, which store the approved libraries for use, as well as internally released libraries. But managing and tracking dependencies manually is never easy, and teams end up relying on scripts and build files.

Maven came along and standardized this process. Maven defines standards for the project format used to declare dependencies, formats for the repositories that store libraries, the automated process to fetch transitive dependencies, and much more. Most systems today either build on Maven's dependency management system or on Ivy's, which can function in the same way and also provides its own standards, heavily inspired by Maven. SBT uses Ivy in the backend for dependency management, but uses a custom DSL to specify dependencies.

Quick introduction to Maven or Ivy dependency management

Apache Maven is not a dependency management tool; it is a project management and comprehension tool. Maven is configured using a Project Object Model (POM), which is represented in an XML file. A POM has all the details related to the project, right from the basic ones, such as groupId, artifactId, and version, to environment settings such as prerequisites and repositories.

Apache Ivy is a dependency management tool and a subproject of Apache Ant. Ivy integrates publicly available artifact repositories automatically. The project dependencies are declared using XML in a file called ivy.xml, commonly known as the Ivy file. Ivy is configured using a settings file (ivysettings.xml), which defines a set of dependency resolvers. Each resolver points to an Ivy file and/or artifacts. So, the configuration essentially indicates which resource should be used to resolve a module.

How Ivy works

The original article includes a diagram depicting the usual cycle of Ivy modules between different locations, with the arrows labeled by the Ivy commands that perform each task; these commands are explained in the following sections.

Resolve

Resolve is the phase in which Ivy resolves the dependencies of a module by accessing the Ivy file defined for that module. For each dependency in the Ivy file, Ivy finds the module using the configuration. A module could be an Ivy file or an artifact. Once a module is found, its Ivy file is downloaded to the Ivy cache. Then, Ivy checks the dependencies of that module. If the module has dependencies on other modules, Ivy recursively traverses the graph of dependencies, handling conflicts simultaneously. After traversing the whole graph, Ivy downloads all the dependencies that are not already in the cache and have not been evicted by conflict management. Ivy uses a filesystem-based cache to avoid re-downloading dependencies that are already available locally. In the end, an XML report of the module's dependencies is generated in the cache.
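For orientation, here is a minimal sketch of the ivy.xml file that the resolve phase reads (the organisation, module, and dependency shown are illustrative, not from the article):

<ivy-module version="2.0">
    <info organisation="packt" module="introduction"/>
    <dependencies>
        <!-- each dependency names the module and the revision to resolve -->
        <dependency org="org.specs2" name="specs2_2.9.1" rev="1.12.3"/>
    </dependencies>
</ivy-module>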
Retrieve

Retrieve is the act of copying artifacts from the cache to another directory structure. The destination for the files to be copied is specified using a pattern. Before copying, Ivy checks whether the files have already been copied, to maximize performance. After the dependencies have been copied, the build becomes independent of Ivy.

Publish

Ivy can then be used to publish the module to a repository. This can be done by manually running a task or from a continuous integration server.

Dependency management in SBT

In SBT, library dependencies can be managed in the following two ways:

- By specifying the libraries in the build definition
- By manually adding the JAR files of the library

Manual addition of JAR files may seem simple at the beginning of a project. But as the project grows, it may depend on a lot of other projects, or the projects it depends on may have newer versions. These situations make handling dependencies manually a cumbersome task. Hence, most developers prefer to automate dependency management.

Automatic dependency management

SBT uses Apache Ivy to handle automatic dependency management. When dependencies are configured in this manner, SBT handles their retrieval and update. An update does not happen on every change, since that would slow everything down; to update the dependencies, you need to execute the update task. Other tasks depend on the output generated through update, so whenever dependencies are modified, an update should be run for the changes to be reflected.

There are three ways in which project dependencies can be specified:

- Declarations within the build definition
- Maven dependency files, that is, POM files
- Configuration and settings files used for Ivy

Declaring dependencies in the build definition

The setting key libraryDependencies is used to configure the dependencies of a project. The following are some of the possible syntaxes for libraryDependencies:

libraryDependencies += groupID % artifactID % revision
libraryDependencies += groupID %% artifactID % revision
libraryDependencies += groupID % artifactID % revision % configuration
libraryDependencies ++= Seq(
  groupID %% artifactID % revision,
  groupID %% otherID % otherRevision
)

Let's explain some of these elements in more detail:

- groupID: This is the ID of the organization/group that published the artifact
- artifactID: This is the name of the project on which there is a dependency
- revision: This is the Ivy revision of the project on which there is a dependency
- configuration: This is the Ivy configuration for which we want to specify the dependency

Notice that the first and second syntaxes are not the same. The second one has a %% symbol after groupID. This tells SBT to append the project's Scala version to artifactID. So, in a project with Scala version 2.9.1, libraryDependencies ++= Seq("mysql" %% "mysql-connector-java" % "5.1.18") is equivalent to libraryDependencies ++= Seq("mysql" % "mysql-connector-java_2.9.1" % "5.1.18").

The %% symbol is very helpful for cross-building a project. Cross-building is the process of building a project for multiple Scala versions; SBT uses the crossScalaVersions key's value to configure dependencies for multiple versions of Scala. Cross-building is possible only for Scala version 2.8.0 or higher. The %% symbol simply appends the current Scala version, so it should not be used when you know that there is no artifact published for a given Scala version, even though one built against an older version is compatible. In such cases, you have to hardcode the version using the first syntax.

Using the third syntax, we can add a dependency only for a specific configuration. This is very useful, as some dependencies are not required by all configurations. For example, the dependency on a testing library is only for the test configuration. We could declare this as follows:

libraryDependencies ++= Seq("org.specs2" % "specs2_2.9.1" % "1.12.3" % "test")

We could also declare a dependency in the provided scope (where the JDK or container provides the dependency at runtime). This scope is only available on the compilation and test classpaths, and is not transitive. Generally, servlet-api dependencies are declared in this scope:

libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.0.1" % "provided"

Putting these declaration styles together in a build definition is sketched below.
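Here is a minimal build.sbt sketch combining the styles above (the project name and version numbers are illustrative, and the settings syntax assumed is the pre-1.x style used throughout this article):

// build.sbt - a consolidated sketch of the declaration styles above
name := "introduction"

scalaVersion := "2.10.2"

// Scala versions to cross-build against
crossScalaVersions := Seq("2.9.3", "2.10.2")

libraryDependencies ++= Seq(
  // plain dependency: the Scala version is not appended
  "mysql" % "mysql-connector-java" % "5.1.18",
  // %% appends the Scala version to the artifact ID
  "org.specs2" %% "specs2" % "1.12.3" % "test",
  // provided scope: the container supplies this JAR at runtime
  "javax.servlet" % "javax.servlet-api" % "3.0.1" % "provided"
)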
The revision does not have to be a single fixed version; it can be set with some constraints, and Ivy will select the one that matches best. For example, it could be latest.integration, or 12.0 or higher, or even a range of versions.

A URL for the dependency JAR

If the dependency is not published to a repository, you can also specify a direct URL to the JAR file:

libraryDependencies += groupID %% artifactID % revision from directURL

directURL is used only if the dependency cannot be found in the specified repositories, and it is not included in published metadata. For example:

libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"

Extra attributes

SBT also supports Ivy's extra attributes. To specify extra attributes, one can use the extra method. Consider that the project has a dependency on the following Ivy module:

<ivy-module version="2.0">
  <info organisation="packt" module="introduction"
        e:media="screen" status="integration" e:codeWord="PP1872"/>
</ivy-module>

A dependency on this can be declared as follows:

libraryDependencies += "packt" % "introduction" % "latest.integration" extra("media" -> "screen", "codeWord" -> "PP1872")

The extra method can also be used to specify extra attributes for the current project, so that when it is published to the repository, its Ivy file will also have extra attributes. An example of this is as follows:

projectID <<= projectID { id => id extra("codeWord" -> "PP1952") }

Classifiers

Classifiers ensure that the dependency being loaded is compatible with the platform for which the project is written. For example, to fetch the dependency relevant to JDK 1.5, use the following:

libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"

We could also have multiple classifiers, as follows:

libraryDependencies += "org.lwjgl.lwjgl" % "lwjgl-platform" % lwjglVersion classifier "natives-windows" classifier "natives-linux" classifier "natives-osx"

Transitivity

In logic and mathematics, a relation between elements is said to be transitive when, if it holds between a first and a second element and between that second and a third element, it also holds between the first and third elements. Relating this to the dependencies of a project: imagine that you have a project that depends on the project Foo for some of its functionality, and Foo in turn depends on another project, Bar, for some of its functionality. If a change in the project Bar can affect your project's functionality, your project indirectly depends on Bar; that is, your project has a transitive dependency on Bar. If a change in Bar cannot affect your project's functionality, then your project does not depend on Bar.
SBT cannot know whether your project has a transitive dependency or not, so to avoid dependency issues, it loads library dependencies transitively by default. In situations where this is not required for your project, you can disable it using intransitive() or notTransitive(). A common case where artifact dependencies are not required is in projects using the Felix OSGi framework (only its main JAR is required). The dependency can be declared as follows:

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()

Or, it can be declared as follows:

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" notTransitive()

If we need to exclude certain transitive dependencies of a dependency, we can use the exclude or excludeAll method:

libraryDependencies += "log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms")

libraryDependencies += "log4j" % "log4j" % "1.2.15" excludeAll(
  ExclusionRule(organization = "com.sun.jdmk"),
  ExclusionRule(organization = "com.sun.jmx"),
  ExclusionRule(organization = "javax.jms")
)

Although excludeAll provides more flexibility, it should not be used in projects that will be published in the Maven style, as it cannot be represented in a pom.xml file. The exclude method is more useful in projects that require a pom.xml when being published, since it requires both organizationID and name to exclude a module.

Download documentation

Generally, an IDE plugin is used to download the source and API documentation JAR files. However, one can configure SBT to download the documentation without using an IDE plugin. To download the dependency's sources, add withSources() to the dependency definition. For example:

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources()

To download API JAR files, add withJavadoc() to the dependency definition. For example:

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources() withJavadoc()

The documentation downloaded like this is not transitive; to fetch it for transitive dependencies as well, you must use the update-classifiers task.

Dependencies using Maven files

SBT can be configured to use a Maven POM file to handle the project dependencies by using the externalPom method. The following statements can be used in the build definition:

- externalPom(): This will set pom.xml in the project's base directory as the source for project dependencies
- externalPom(baseDirectory{base => base/"myProjectPom"}): This will set the custom-named POM file myProjectPom.xml in the project's base directory as the source for project dependencies

There are a few restrictions when using a POM file, as follows:

- It can be used only for configuring dependencies.
- The repositories mentioned in the POM will not be considered. They need to be specified explicitly in the build definition or in an Ivy settings file.
- There is no support for relativePath in the parent element of the POM, and its existence will result in an error.

Dependencies using Ivy files or Ivy XML

Both Ivy settings and dependencies can be used to configure project dependencies in SBT through the build definition. They can either be loaded from a file or be given inline in the build definition.
The Ivy XML can be declared as follows:

ivyXML :=
  <dependencies>
    <dependency org="org.specs2" name="specs2" rev="1.12.3"/>
  </dependencies>

The commands to load settings and dependencies from a file are as follows:

- externalIvySettings(): This will set ivysettings.xml in the project's base directory as the source for dependency settings.
- externalIvySettings(baseDirectory{base => base/"myIvySettings"}): This will set the custom-named settings file myIvySettings.xml in the project's base directory as the source for dependency settings.
- externalIvySettingsURL(url("settingsURL")): This will set the settings file at settingsURL as the source for dependency settings.
- externalIvyFile(): This will set ivy.xml in the project's base directory as the source for dependencies.
- externalIvyFile(baseDirectory(_/"myIvy")): This will set the custom-named file myIvy.xml in the project's base directory as the source for project dependencies.

When using Ivy settings and configuration files, the configurations need to be mapped, because Ivy files specify their own configurations. So, classpathConfiguration must be set for the three main configurations. For example:

classpathConfiguration in Compile := Compile
classpathConfiguration in Test := Test
classpathConfiguration in Runtime := Runtime

Adding JAR files manually

To handle dependencies manually in SBT, you need to create a lib folder in the project and add the JAR files to it. That is the default location where SBT looks for unmanaged dependencies. If you have the JAR files located in some other folder, you can specify that in the build definition. The key used to specify the source for manually added JAR files is unmanagedBase. For example, if the JAR files your project depends on are in project/extras/dependencies instead of project/lib, modify the value of unmanagedBase as follows:

unmanagedBase <<= baseDirectory { base => base/"extras/dependencies" }

Here, baseDirectory is the project's root directory. unmanagedJars is a task which lists the JAR files from the unmanagedBase directory. To see the list of JAR files, type the following in the interactive shell:

> show unmanaged-jars
[info] ArrayBuffer()

Or, in the project folder, type:

$ sbt show unmanaged-jars

If you add a Spring JAR (org.springframework.aop-3.0.1.jar) to the dependencies folder, the result of the previous command would be:

> show unmanaged-jars
[info] ArrayBuffer(Attributed(/home/introduction/extras/dependencies/org.springframework.aop-3.0.1.jar))

It is also possible to specify the path(s) of JAR files for different configurations using unmanagedJars:

unmanagedJars in Compile += file("/home/downloads/org.springframework.aop-3.0.1.jar")

In more complex cases, where the JARs live in multiple directories, the unmanagedJars task may need to be replaced in the build definition, as sketched below.
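Here is a minimal sketch of such a replacement, following the pattern from sbt's own library-management documentation (the libA and libB directory names are illustrative):

unmanagedJars in Compile <++= baseDirectory map { base =>
  // collect every JAR found under two custom directories
  val customDirs = (base / "libA") +++ (base / "libB")
  (customDirs ** "*.jar").classpath
}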
For example:

resolvers ++= Seq("snapshots" at "http://oss.sonatype.org/content/repositories/snapshots",
  "releases" at "http://oss.sonatype.org/content/repositories/releases")

resolvers := Seq (name1 at location1, name2 at location2). For example:

resolvers := Seq("sgodbillon" at "https://bitbucket.org/sgodbillon/repository/raw/master/snapshots/",
  "Typesafe backup repo" at "http://repo.typesafe.com/typesafe/repo/",
  "Maven repo1" at "http://repo1.maven.org/")

You can also add your own local Maven repository as a resolver using the following syntax:

resolvers += "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"

Ivy repositories of the filesystem, URL, SSH, or SFTP types can also be added as resolvers using sbt.Resolver. Note that sbt.Resolver is a class with factories for interfaces to Ivy repositories that require a hostname, port, and patterns. Let's see how to use the Resolver class.

For filesystem repositories, the following line defines an atomic filesystem repository in the test directory of the current working directory:

resolvers += Resolver.file("my-test-repo", file("test")) transactional()

For URL repositories, the following line defines a URL repository at http://example.org/repo-releases/:

resolvers += Resolver.url("my-test-repo", url("http://example.org/repo-releases/"))

The following line defines an Ivy repository at http://joscha.github.com/play-easymail/repo/releases/:

resolvers += Resolver.url("my-test-repo", url("http://joscha.github.com/play-easymail/repo/releases/"))(Resolver.ivyStylePatterns)

For SFTP repositories, the following line defines a repository that is served by SFTP from the host example.org:

resolvers += Resolver.sftp("my-sftp-repo", "example.org")

The following line defines a repository that is served by SFTP from the host example.org at port 22:

resolvers += Resolver.sftp("my-sftp-repo", "example.org", 22)

The following line defines a repository that is served by SFTP from the host example.org with maven2/repo-releases/ as the base path:

resolvers += Resolver.sftp("my-sftp-repo", "example.org", "maven2/repo-releases/")

For SSH repositories, the following line defines an SSH repository with user-password authentication:

resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", "password")

The following line defines an SSH repository with an access request for the given user. The user will be prompted to enter the password to complete the download.

resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user")

The following line defines an SSH repository using key authentication:

resolvers += {
  val keyFile: File = ...
  Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile, "keyFilePassword")
}

The next line defines an SSH repository using key authentication where no keyFile password needs to be prompted for before the download:

resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile)

The following line defines an SSH repository with permissions; the argument is a mode specification such as the one used by chmod:

resolvers += Resolver.ssh("my-ssh-repo", "example.org") withPermissions("0644")

SFTP authentication can be handled in the same way as shown for SSH in the previous examples.

Ivy patterns can also be given to the factory methods. Each factory method uses a Patterns instance which defines the patterns to be used. The default pattern passed to the factory methods gives the Maven-style layout. To use a different layout, provide a Patterns object describing it.
The following are some examples that specify custom repository layouts using patterns:

resolvers += Resolver.url("my-test-repo", url)( Patterns("[organisation]/[module]/[revision]/[artifact].[ext]") )

You can specify multiple patterns, or separate patterns for the metadata and the artifacts. For filesystem and URL repositories, you can specify absolute patterns by omitting the base URL, passing an empty Patterns instance, and using the ivys and artifacts methods:

resolvers += Resolver.url("my-test-repo") artifacts "http://example.org/[organisation]/[module]/[revision]/[artifact].[ext]"

When you do not need the default repositories, you must override externalResolvers, which is the combination of resolvers and the default repositories. To use the local Ivy repository without the Maven repository, define externalResolvers as follows:

externalResolvers <<= resolvers map { rs => Resolver.withDefaultResolvers(rs, mavenCentral = false) }

Summary

In this article, we have seen how dependency management tools such as Maven and Ivy work and how SBT handles project dependencies. The article also covered the different options that SBT provides to handle your project dependencies and to configure resolvers for the modules on which your project has a dependency.

Resources for Article:

Further resources on this subject:

So, what is Play? [Article]
Play! Framework 2 – Dealing with Content [Article]
Integrating Scala, Groovy, and Flex Development with Apache Maven [Article]
Introducing the Boost C++ Libraries

Packt
14 Sep 2015
22 min read
In this article written by John Torjo and Wisnu Anggoro, authors of the book Boost.Asio C++ Network Programming - Second Edition, the authors state that "Many programmers have used libraries since this simplifies the programming process. Because they do not need to write the function from scratch anymore, using a library can save much code development time". In this article, we are going to get acquainted with the Boost C++ libraries. Let us prepare our own compiler and text editor to prove the power of the Boost libraries. As we do so, we will discuss the following topics:

Introducing the C++ standard template library
Introducing Boost C++ libraries
Setting up Boost C++ libraries in the MinGW compiler
Building Boost C++ libraries
Compiling code that contains Boost C++ libraries

(For more resources related to this topic, see here.)

Introducing the C++ standard template library

The C++ Standard Template Library (STL) is a generic template-based library that offers generic containers, among other things. Instead of dealing with dynamic arrays, linked lists, binary trees, or hash tables, programmers can simply use the algorithms and containers that are provided by the STL. The STL is structured around containers, iterators, and algorithms, and their roles are as follows:

Containers: Their main role is to manage collections of objects of certain kinds, such as arrays of integers or linked lists of strings.

Iterators: Their main role is to step through the elements of the collections. An iterator works much like a pointer: we can increment it with the ++ operator and access the value it points to with the * operator.

Algorithms: Their main role is to process the elements of collections. An algorithm uses an iterator to step through all the elements; as it iterates, it can process each element, for example by modifying it, and it can also search and sort the elements.

Let us examine the three elements that structure the STL by creating the following code:

/* stl.cpp */
#include <vector>
#include <iostream>
#include <algorithm>

int main(void)
{
    int temp;
    std::vector<int> collection;
    std::cout << "Please input the collection of integer numbers, input 0 to STOP!\n";
    while(std::cin >> temp) // the stream itself serves as the loop condition
    {
        if(temp == 0) break;
        collection.push_back(temp);
    }
    std::sort(collection.begin(), collection.end());
    std::cout << "\nThe sorted collection of your integer numbers:\n";
    for(int i: collection)
    {
        std::cout << i << std::endl;
    }
}

Name the preceding code stl.cpp, and run the following command to compile it:

g++ -Wall -ansi -std=c++11 stl.cpp -o stl

Before we dissect this code, let us run it to see what happens. This program will ask the user to enter as many integers as they want, and then it will sort the numbers. To stop the input and ask the program to start sorting, the user has to input 0. This means that 0 will not be included in the sorting process. Since we do not prevent the user from entering non-integer input such as 3.14, or even a string, the program will simply stop waiting for the next number after the user enters non-integer input.

Suppose we enter six integers: 43, 7, 568, 91, 2240, and 56, with a final 0 to stop the input process. The program then starts to sort the numbers and we get them in ascending order: 7, 43, 56, 91, 568, and 2240.

Now, let us examine our code to identify the containers, iterators, and algorithms that are contained in the STL.
std::vector<int> collection;

The preceding code snippet declares a container from the STL. There are several containers, and we use a vector in the code. A vector manages its elements in a dynamic array, and they can be accessed randomly and directly with the corresponding index. In our code, the container is prepared to hold integer numbers, so we have to define the type of the value inside the angle brackets, <int>. These angle brackets are also called generics in the STL.

collection.push_back(temp);
std::sort(collection.begin(), collection.end());

std::sort() in the preceding code is one of the algorithms in the STL; it processes the data held in the container. The begin() and end() member functions return the iterators that mark the first and one-past-the-last elements of the container, that is, the range the algorithm should process. Before that, we can see the push_back() function, which is used to append an element to the container.

for(int i: collection)
{
    std::cout << i << std::endl;
}

The preceding for block iterates over each element of the integer collection. Each time an element is visited, we can process it separately; in the preceding example, we show the number to the user. That is how the iterators in the STL play their role.

#include <vector>
#include <algorithm>

We include the vector header to define all the vector functions and the algorithm header to invoke the sort() function.

Introducing the Boost C++ libraries

The Boost C++ libraries are a set of libraries that complement the C++ standard library. The set contains more than a hundred libraries that we can use to increase our productivity in C++ programming. It is also used when our requirements go beyond what is available in the STL. Boost provides source code under the Boost Software License, which means that it allows us to use, modify, and distribute the libraries for free, even for commercial use. The development of Boost is handled by the Boost community, which consists of C++ developers from around the world. The mission of the community is to develop high-quality libraries as a complement to the STL. Only proven libraries will be added to the Boost libraries.

For detailed information about the Boost libraries, go to www.boost.org. And if you want to contribute libraries to Boost, you can join the developer mailing list at lists.boost.org/mailman/listinfo.cgi/boost. The entire source code of the libraries is available on the official GitHub page at github.com/boostorg.

Advantages of Boost libraries

As we know, using Boost libraries will increase programmer productivity. Moreover, by using Boost libraries, we will get advantages such as these:

It is open source, so we can inspect the source code and modify it if needed.

Its license allows us to develop both open source and closed source projects. It also allows us to commercialize our software freely.

It is well documented; all the libraries are explained, along with sample code, on the official site.

It supports almost any modern operating system, such as Windows and Linux. It also supports many popular compilers.

It is a complement to the STL instead of a replacement. This means that using Boost libraries will ease those programming processes which are not yet handled by the STL. In fact, many parts of Boost have been included in the standard C++ library.

Preparing Boost libraries for the MinGW compiler

Before we start programming our C++ applications using Boost libraries, the libraries need to be configured so that they are recognized by the MinGW compiler. Here we are going to prepare our programming environment so that our compiler is able to use the Boost libraries.
Downloading Boost libraries

The best source from which to download Boost is the official download page. We can go there by pointing our internet browser to www.boost.org/users/download. Find the Download link in the Current Release section. At the time of writing, the current version of the Boost libraries is 1.58.0, but when you read this article, the version may have changed. If so, you can still choose the current release, because a higher version should remain compatible with a lower one; however, you will have to adjust the settings we are going to discuss later accordingly. Otherwise, choosing the same version will make it easy for you to follow all the instructions in this article.

There are four file formats to choose from for download: .zip, .tar.gz, .tar.bz2, and .7z. There is no difference among the four files except their file size. The largest file is the ZIP format and the smallest is the 7Z format. Because of the file size, Boost recommends that we download the 7Z format: the ZIP version is 123.1 MB, while the 7Z version is 65.2 MB, so the ZIP version is almost twice the size of the 7Z version. Therefore, they suggest that you choose the 7Z format to reduce the download and decompression time. Let us choose boost_1_58_0.7z to be downloaded and save it to our local storage.

Deploying Boost libraries

After we have boost_1_58_0.7z in our local storage, decompress it using the 7ZIP application and save the decompressed files to C:\boost_1_58_0. The 7ZIP application can be grabbed from www.7-zip.org/download.html. The directory should then contain the full Boost file structure.

Instead of browsing to the Boost download page and searching for the Boost version manually, we can go directly to sourceforge.net/projects/boost/files/boost/1.58.0. It will be useful when the 1.58.0 version is no longer the current release.

Using Boost libraries

Most libraries in Boost are header-only; this means that all declarations and definitions of functions, including namespaces and macros, are visible to the compiler and there is no need to compile them separately. We can now try to use Boost with a program that converts a string into an int value, as follows:

/* lexical.cpp */
#include <boost/lexical_cast.hpp>
#include <string>
#include <iostream>

int main(void)
{
    try
    {
        std::string str;
        std::cout << "Please input first number: ";
        std::cin >> str;
        int n1 = boost::lexical_cast<int>(str);
        std::cout << "Please input second number: ";
        std::cin >> str;
        int n2 = boost::lexical_cast<int>(str);
        std::cout << "The sum of the two numbers is ";
        std::cout << n1 + n2 << "\n";
        return 0;
    }
    catch (const boost::bad_lexical_cast &e)
    {
        std::cerr << e.what() << "\n";
        return 1;
    }
}

Open the Notepad++ application, type the preceding code, and save it as lexical.cpp in C:\CPP. Now open the command prompt, point the active directory to C:\CPP, and then type the following command:

g++ -Wall -ansi lexical.cpp -Ic:\boost_1_58_0 -o lexical

We have a new option here, which is -I (the "include" option). This option is used along with the full path of a directory to inform the compiler that we have another header directory that we want to include in our code. Since we store our Boost libraries in c:\boost_1_58_0, we can use -Ic:\boost_1_58_0 as an additional parameter. In lexical.cpp, we apply boost::lexical_cast to convert string type data into int type data.
The program will ask the user to input two numbers and will then automatically find their sum. If the user inputs an inappropriate number, it will report that an error has occurred. The Boost.LexicalCast library is provided by Boost for data type casting purposes (converting numeric types such as int, double, or float into string types, and vice versa). Now let us dissect lexical.cpp for a more detailed understanding of what it does:

#include <boost/lexical_cast.hpp>
#include <string>
#include <iostream>

We include boost/lexical_cast.hpp because the boost::lexical_cast function is declared in the lexical_cast.hpp header file, while the string header is included for the std::string function and the iostream header for the std::cin, std::cout, and std::cerr functions. We have already seen what functions such as std::cin and std::cout do, so we can skip those lines.

int n1 = boost::lexical_cast<int>(str);
...
int n2 = boost::lexical_cast<int>(str);

We use the preceding two separate lines to convert the user-provided input string into the int data type. Then, after converting the data type, we sum up both of the int values. We can also see the try-catch block in the preceding code. It is used to catch the error if the user inputs an inappropriate number, that is, anything other than the digits 0 to 9.

catch (const boost::bad_lexical_cast &e)
{
    std::cerr << e.what() << "\n";
    return 1;
}

The preceding code snippet will catch errors and inform the user what the error message exactly is by using boost::bad_lexical_cast. We call the e.what() function to obtain the string of the error message.

Now let us run the application by typing lexical at the command prompt. If we put in 10 for the first input and 20 for the second input, the result is 30, because the program just sums up both inputs. But what will happen if we put in a non-numerical value, for instance Packt? Once the application finds the error, it will ignore the next statement and go directly to the catch block. By using the e.what() function, the application can get the error message and show it to the user. In our example, we obtain "bad lexical cast: source type value could not be interpreted as target" as the error message, because we tried to assign string data to an int type variable.

Building Boost libraries

As we discussed previously, most libraries in Boost are header-only, but not all of them. There are some libraries that have to be built separately. They are:

Boost.Chrono: This is used to work with a variety of clocks, such as the current time, the range between two times, or calculating the time passed in a process.

Boost.Context: This is used to create higher-level abstractions, such as coroutines and cooperative threads.

Boost.Filesystem: This is used to deal with files and directories, such as obtaining a file path or checking whether a file or directory exists.

Boost.GraphParallel: This is an extension to the Boost Graph Library (BGL) for parallel and distributed computing.

Boost.IOStreams: This is used to write and read data using streams. For instance, it loads the content of a file into memory or writes compressed data in GZIP format.

Boost.Locale: This is used to localize the application, in other words, to translate the application interface into the user's language.

Boost.MPI: This is used to develop a program that executes tasks concurrently. MPI itself stands for Message Passing Interface.

Boost.ProgramOptions: This is used to parse command-line options.
Instead of using the argv variable in the main parameter, it uses a double minus (--) to separate each command-line option.

Boost.Python: This is used to interface C++ code with the Python language.

Boost.Regex: This is used to apply regular expressions in our code. But if our development environment supports C++11, we do not depend on the Boost.Regex library anymore, since it is available in the regex header file.

Boost.Serialization: This is used to convert objects into a series of bytes that can be saved and then restored again into the same object.

Boost.Signals: This is used to create signals. A signal will trigger an event that runs a function attached to it.

Boost.System: This is used to define errors. It contains four classes: system::error_code, system::error_category, system::error_condition, and system::system_error. All of these classes are inside the boost namespace. It is also supported in the C++11 environment, but because many Boost libraries use Boost.System, it is necessary to keep including Boost.System.

Boost.Thread: This is used to apply threaded programming. It provides classes to synchronize access to data from multiple threads. It is also supported in C++11 environments, but it offers extensions, such as the ability to interrupt a thread in Boost.Thread.

Boost.Timer: This is used to measure code performance by using clocks. It measures time passed based on the usual clock and on CPU time, which states how much time has been spent executing the code.

Boost.Wave: This provides a reusable C preprocessor that we can use in our C++ code.

There are also a few libraries that have optional, separately compiled binaries. They are as follows:

Boost.DateTime: It is used to process time data; for instance, calendar dates and time. It has a binary component that is only needed if we use the to_string, from_string, or serialization features. It is also needed if we target our application at Visual C++ 6.x or Borland.

Boost.Graph: It is used to work with graphs, that is, structures of vertices connected by edges. It has a binary component that is only needed if we intend to parse GraphViz files.

Boost.Math: It is used to deal with mathematical formulas. It has binary components for the cmath functions.

Boost.Random: It is used to generate random numbers. It has a binary component which is only needed if we want to use random_device.

Boost.Test: It is used to write and organize test programs and their runtime execution. It can be used in header-only or separately compiled mode, but separate compilation is recommended for serious use.

Boost.Exception: It is used to add data to an exception after it has been thrown. It provides a non-intrusive implementation of exception_ptr for 32-bit _MSC_VER==1310 and _MSC_VER==1400, which requires a separately compiled binary. This is enabled by #define BOOST_ENABLE_NON_INTRUSIVE_EXCEPTION_PTR.

Let us try to recreate the random number generator, but now using the Boost.Random library instead of the std::rand() function from the C++ standard library.
Let us take a look at the following code:

/* rangen_boost.cpp */
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_int_distribution.hpp>
#include <iostream>

int main(void)
{
    int guessNumber;
    std::cout << "Select a number between 0 and 10: ";
    std::cin >> guessNumber;
    if(guessNumber < 0 || guessNumber > 10)
    {
        return 1;
    }
    boost::random::mt19937 rng;
    boost::random::uniform_int_distribution<> ten(0,10);
    int randomNumber = ten(rng);
    if(guessNumber == randomNumber)
    {
        std::cout << "Congratulations, " << guessNumber << " is your lucky number.\n";
    }
    else
    {
        std::cout << "Sorry, I'm thinking about number " << randomNumber << "\n";
    }
    return 0;
}

We can compile the preceding source code by using the following command:

g++ -Wall -ansi -Ic:/boost_1_58_0 rangen_boost.cpp -o rangen_boost

Now, let us run the program. Unfortunately, each of the three times I ran the program, I obtained the same random number, 8. This is because we apply the Mersenne Twister, a Pseudorandom Number Generator (PRNG), which uses the default seed as its source of randomness, so it will generate the same number every time the program is run. And of course this is not the program that we expect.

Now, we will rework the program once again, changing just two lines. First, find the following line:

#include <boost/random/mersenne_twister.hpp>

Change it as follows:

#include <boost/random/random_device.hpp>

Next, find the following line:

boost::random::mt19937 rng;

Change it as follows:

boost::random::random_device rng;

Then, save the file as rangen2_boost.cpp and compile it with a command like the one we used for rangen_boost.cpp:

g++ -Wall -ansi -Ic:/boost_1_58_0 rangen2_boost.cpp -o rangen2_boost

Sadly, something will go wrong and the compiler will show the following error message:

cc8KWVvX.o:rangen2_boost.cpp:(.text$_ZN5boost6random6detail20generate_uniform_intINS0_13random_deviceEjEET0_RT_S4_S4_N4mpl_5bool_ILb1EEE[_ZN5boost6random6detail20generate_uniform_intINS0_13random_deviceEjEET0_RT_S4_S4_N4mpl_5bool_ILb1EEE]+0x24f): more undefined references to boost::random::random_device::operator()()' follow
collect2.exe: error: ld returned 1 exit status

This is because, as we have discussed earlier, the Boost.Random library needs to be compiled separately if we want to use the random_device attribute.

Boost libraries have a system to compile or build Boost itself, called the Boost.Build library. There are two steps we have to complete to install the Boost.Build library. First, run bootstrap by pointing the active directory at the command prompt to C:\boost_1_58_0 and typing the following command:

bootstrap.bat mingw

We use our MinGW compiler as our toolset for compiling the Boost library. Wait a moment; we will get the following output if the process is a success:

Building Boost.Build engine
Bootstrapping is done. To build, run:
    .\b2
To adjust configuration, edit 'project-config.jam'.
Further information:
    - Command line help: .\b2 --help
    - Getting started guide: http://boost.org/more/getting_started/windows.html
    - Boost.Build documentation: http://www.boost.org/build/doc/html/index.html

In this step, we will find four new files in the Boost library's root directory. They are:

b2.exe: This is an executable file to build the Boost libraries.

bjam.exe: This is exactly the same as b2.exe, but it is a legacy version.
bootstrap.log: This contains the logs from the bootstrap process.

project-config.jam: This contains a setting that will be used in the build process when we run b2.exe.

We will also find that this step creates a new directory in C:\boost_1_58_0\tools\build\src\engine\bin.ntx86, which contains a bunch of .obj files associated with the Boost libraries that needed to be compiled.

After that, run the second step by typing the following command at the command prompt:

b2 install toolset=gcc

Grab yourself a cup of coffee after running that command, because it will take about twenty to fifty minutes to finish the process, depending on your system specifications. The last output we will get will be like this:

...updated 12562 targets...

This means that the process is complete and we have now built the Boost libraries. If we check in the file explorer, the Boost.Build library has added C:\boost_1_58_0\stage\lib, which contains a collection of static and dynamic libraries that we can use directly in our program.

bootstrap.bat and b2.exe use msvc (the Microsoft Visual C++ compiler) as the default toolset, and many Windows developers already have msvc installed on their machines. Since we have installed the GCC compiler, we set the mingw and gcc toolset options in Boost's build. If you also have msvc installed and want to use it in Boost's build, the toolset options can be omitted.

Now, let us try to compile the rangen2_boost.cpp file again, but now with the following command:

c:\CPP>g++ -Wall -ansi -Ic:/boost_1_58_0 rangen2_boost.cpp -Lc:\boost_1_58_0\stage\lib -lboost_random-mgw49-mt-1_58 -lboost_system-mgw49-mt-1_58 -o rangen2_boost

We have two new options here: -L and -l. The -L option is used to define the path that contains the library files if they are not in the active directory. The -l option is used to define the name of the library file, omitting the leading lib word in the file name. In this case, the original library file name is libboost_random-mgw49-mt-1_58.a, and we omit the lib phrase and the file extension for the -l option.

The new file, called rangen2_boost.exe, will be created in C:\CPP. But before we can run the program, we have to ensure that the directory where the program is installed contains the two dependency library files: libboost_random-mgw49-mt-1_58.dll and libboost_system-mgw49-mt-1_58.dll. We can get them from the library directory c:\boost_1_58_0\stage\lib. Just to make it easy for us to run the program, run the following copy commands to copy the two library files to C:\CPP:

copy c:\boost_1_58_0\stage\lib\libboost_random-mgw49-mt-1_58.dll c:\cpp
copy c:\boost_1_58_0\stage\lib\libboost_system-mgw49-mt-1_58.dll c:\cpp

And now the program should run smoothly.

In order to create a network application, we are going to use the Boost.Asio library. Boost.Asio does not appear among the non-header-only libraries listed earlier, so it seems that we do not need to build the Boost libraries, since Boost.Asio is a header-only library. This is true, but since Boost.Asio depends on Boost.System, and Boost.System needs to be built before being used, it is important to build Boost first before we can use it to create our network application.

For the -I and -L options, the compiler does not care whether we use a backslash (\) or a slash (/) to separate the directory names in a path, because the compiler can handle both Windows and Unix path styles.
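Before moving on, a quick way to verify that the separately built libraries link correctly is a minimal program against Boost.System. This is our own sanity-check sketch, not part of the original build steps; the file name checklink.cpp is invented, and the -mgw49-mt-1_58 library suffix must match your own build:

/* checklink.cpp */
#include <boost/system/error_code.hpp>
#include <iostream>

int main(void)
{
    // A default-constructed error_code represents "no error"
    boost::system::error_code ec;
    std::cout << ec.message() << std::endl;
    return 0;
}

Compile and link it the same way as rangen2_boost.cpp:

g++ -Wall -ansi -Ic:/boost_1_58_0 checklink.cpp -Lc:\boost_1_58_0\stage\lib -lboost_system-mgw49-mt-1_58 -o checklink

If the program builds and prints a success message (the exact wording depends on the platform), the stage\lib directory and the library names are configured correctly.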
Summary

We saw that the Boost C++ libraries were developed to complement the standard C++ library. We have also been able to set up our MinGW compiler in order to compile code which contains Boost libraries, and to build the binaries of the libraries which have to be compiled separately. Please remember that though we can use the Boost.Asio library as a header-only library, it is better to build all the Boost libraries by using the Boost.Build library. It will be easy for us to use all the libraries without worrying about compilation failures.

Resources for Article:

Further resources on this subject:

Actors and Pawns [article]
What is Quantitative Finance? [article]
Program structure, execution flow, and runtime objects [article]
Asynchronous Programming in F#

Packt
12 Oct 2016
15 min read
In this article by Alfonso Garcia-Caro Nunez and Suhaib Fahad, authors of the book Mastering F#, we shed light on how writing applications that are non-blocking or that react to events is becoming increasingly important in the cloud world we live in. A modern application needs to carry out rich user interaction, communicate with web services, react to notifications, and so on; the execution of reactive applications is controlled by events. Asynchronous programming is characterized by many simultaneously pending reactions to internal or external events. These reactions may or may not be processed in parallel.

(For more resources related to this topic, see here.)

In .NET, both C# and F# provide an asynchronous programming experience through keywords and syntax. In this article, we will go through the asynchronous programming model in F#, with a bit of cross-referencing or comparison drawn with the C# world. In this article, you will learn about asynchronous workflows in F#.

Asynchronous workflows in F#

Asynchronous workflows are computation expressions that are set up to run asynchronously. This means that the system runs without blocking the current computation thread when a sleep, I/O, or other asynchronous operation is performed.

You may be wondering why we need asynchronous programming and why we can't just use the threading concepts that we have used for so long. The problem with threads is that an operation occupies its thread for the entire time that something happens or a computation is done. On the other hand, asynchronous programming engages a thread only when it is required; otherwise, no thread is held. There is also a lot of marshalling and unmarshalling (junk) code that we would have to write to overcome the issues we face when dealing directly with threads. Thus, the asynchronous model allows code to execute efficiently, whether we are downloading a page 50 or 100 times using a single thread, or doing I/O operations over the network with a lot of incoming requests from the other endpoint.

The Async module in F# exposes a list of functions for creating and using asynchronous workflows. The asynchronous pattern allows writing code that looks like it is written for a single-threaded program, but internally it uses async blocks to execute. There are various triggering functions that provide a wide variety of ways to run an asynchronous workflow: on a background thread, as a .NET Framework task object, or in the current thread itself.

In this article, we will use the example of downloading the content of a web page and modifying the data, which is as follows:

let downloadPage (url: string) = async {
    let req = HttpWebRequest.Create(url)
    use! resp = req.AsyncGetResponse()
    use respStream = resp.GetResponseStream()
    use sr = new StreamReader(respStream)
    return sr.ReadToEnd()
}

downloadPage("https://www.google.com")
|> Async.RunSynchronously

The preceding function does the following:

The async expression, { … }, generates an object of type Async<string>.

These values are not actual results; rather, they are specifications of tasks that need to run and return a string.

Async.RunSynchronously takes this object and runs it synchronously.

We just wrote a simple function with asynchronous workflows, with relative ease, and we can reason about the code, which is much better than using code with Begin/End routines. One of the most important points here is that the code is never blocked during the execution of the asynchronous workflow. This means that we can, in principle, have thousands of outstanding web requests—the limit being the number supported by the machine, not the number of threads that host them.
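To get a feel for this, here is a minimal sketch of our own (not from the book) that reuses downloadPage to fetch several pages with a single composition; it relies on Async.Parallel, which is covered in detail later in this article:

let urls = [ "https://www.google.com"; "https://www.bing.com" ]

urls
|> List.map downloadPage      // build the Async<string> specifications
|> Async.Parallel             // compose them into a single Async<string[]>
|> Async.RunSynchronously     // run the composition and wait for all results
|> Array.iteri (fun i html -> printfn "Page %d: %d characters" i html.Length)

None of the requests blocks a dedicated thread while waiting on the network, which is exactly the property described above.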
One of the most important point here is that the code is never blocked during the execution of the asynchronous workflow. This means that we can, in principle, have thousands of outstanding web requests—the limit being the number supported by the machine, not the number of threads that host them. Using let! In asynchronous workflows, we will use let! binding to enable execution to continue on other computations or threads, while the computation is being performed. After the execution is complete, the rest of the asynchronous workflow is executed, thus simulating a sequential execution in an asynchronous way. In addition to let!, we can also use use! to perform asynchronous bindings; basically, with use!, the object gets disposed when it loses the current scope. In our previous example, we used use! to get the HttpWebResponse object. We can also do as follows: let! resp = req.AsyncGetResponse() // process response We are using let! to start an operation and bind the result to a value, do!, which is used when the return of the async expression is a unit. do! Async.Sleep(1000) Understanding asynchronous workflows As explained earlier, asynchronous workflows are nothing but computation expressions with asynchronous patterns. It basically implements the Bind/Return pattern to implement the inner workings. This means that the let! expression is translated into a call to async. The bind and async.Return function are defined in the Async module in the F# library. This is a compiler functionality to translate the let! expression into computation workflows and, you, as a developer, will never be required to understand this in detail. The purpose of explaining this piece is to understand the internal workings of an asynchronous workflow, which is nothing but a computation expression. The following listing shows the translated version of the downloadPage function we defined earlier: async.Delay(fun() -> let req = HttpWebRequest.Create(url) async.Bind(req.AsyncGetResponse(), fun resp -> async.Using(resp, fun resp -> let respStream = resp.GetResponseStream() async.Using(new StreamReader(respStream), fun sr " -> reader.ReadToEnd() ) ) ) ) The following things are happening in the workflow: The Delay function has a deferred lambda that executes later. The body of the lambda creates an HttpWebRequest and is forwarded in a variable req to the next segment in the workflow. The AsyncGetResponse function is called and a workflow is generated, where it knows how to execute the response and invoke when the operation is completed. This happens internally with the BeginGetResponse and EndGetResponse functions already present in the HttpWebRequest class; the AsyncGetResponse is just a wrapper extension present in the F# Async module. The Using function then creates a closure to dispose the object with the IDisposable interface once the workflow is complete. Async module The Async module has a list of functions that allows writing or consuming asynchronous code. We will go through each function in detail with an example to understand it better. Async.AsBeginEnd It is very useful to expose the F# workflow functionality out of F#, say if we want to use and consume the API's in C#. The Async.AsBeginEnd method gives the possibility of exposing the asynchronous workflows as a triple of methods—Begin/End/Cancel—following the .NET Asynchronous Programming Model (APM). 
Based on our downloadPage function, we can define the Begin, End, and Cancel functions as follows:

type Downloader() =
    let beginMethod, endMethod, cancelMethod = Async.AsBeginEnd downloadPage
    member this.BeginDownload(url, callback, state : obj) = beginMethod(url, callback, state)
    member this.EndDownload(ar) = endMethod ar
    member this.CancelDownload(ar) = cancelMethod(ar)

Async.AwaitEvent

The Async.AwaitEvent method creates an asynchronous computation that waits for a single invocation of a .NET Framework event by adding a handler to the event.

type MyEvent(v : string) =
    inherit EventArgs()
    member this.Value = v

let testAwaitEvent (evt : IEvent<MyEvent>) = async {
    printfn "Before waiting"
    let! r = Async.AwaitEvent evt
    printfn "After waiting: %O" r.Value
    do! Async.Sleep(1000)
    return ()
}

let runAwaitEventTest () =
    let evt = new Event<Handler<MyEvent>, _>()
    Async.Start <| testAwaitEvent evt.Publish
    System.Threading.Thread.Sleep(3000)
    printfn "Before raising"
    evt.Trigger(null, new MyEvent("value"))
    printfn "After raising"

> runAwaitEventTest();;
> Before waiting
> Before raising
> After raising
> After waiting : value

The testAwaitEvent function listens to the event using Async.AwaitEvent and prints the value. As Async.Start will take some time to start up the thread, we simply call Thread.Sleep to wait on the main thread. This is for example purposes only. We can think of scenarios where a button-click event is awaited and used inside an async block.

Async.AwaitIAsyncResult

Creates an asynchronous computation that waits for the given IAsyncResult to complete. IAsyncResult is the interface of the asynchronous programming model that allows us to write asynchronous programs. The computation returns true if the IAsyncResult issues a signal within the given timeout. The timeout parameter is optional, and its default value is -1 (Timeout.Infinite).

let testAwaitIAsyncResult (url: string) = async {
    let req = HttpWebRequest.Create(url)
    let aResp = req.BeginGetResponse(null, null)
    let! asyncResp = Async.AwaitIAsyncResult(aResp, 1000)
    if asyncResp then
        let resp = req.EndGetResponse(aResp)
        use respStream = resp.GetResponseStream()
        use sr = new StreamReader(respStream)
        return sr.ReadToEnd()
    else
        return ""
}

> Async.RunSynchronously (testAwaitIAsyncResult "https://www.google.com")

We modified the downloadPage example with AwaitIAsyncResult, which allows a bit more flexibility when we want to add timeouts as well. In the preceding example, the AwaitIAsyncResult handle will wait for 1000 milliseconds and then execute the next steps.

Async.AwaitWaitHandle

Creates a computation that waits on a WaitHandle—wait handles are a mechanism to control the execution of threads. The following is an example with ManualResetEvent:

let testAwaitWaitHandle waitHandle = async {
    printfn "Before waiting"
    let! r = Async.AwaitWaitHandle waitHandle
    printfn "After waiting"
}

let runTestAwaitWaitHandle () =
    let event = new System.Threading.ManualResetEvent(false)
    Async.Start <| testAwaitWaitHandle event
    System.Threading.Thread.Sleep(3000)
    printfn "Before raising"
    event.Set() |> ignore
    printfn "After raising"

The preceding example uses ManualResetEvent to show how to use AwaitWaitHandle, which is very similar to the event example that we saw in the previous topic.

Async.AwaitTask

Returns an asynchronous computation that waits for the given task to complete and returns its result. This helps in consuming C# APIs that expose task-based asynchronous operations.
let downloadPageAsTask (url: string) = async {
    let req = HttpWebRequest.Create(url)
    use! resp = req.AsyncGetResponse()
    use respStream = resp.GetResponseStream()
    use sr = new StreamReader(respStream)
    return sr.ReadToEnd()
} |> Async.StartAsTask

let testAwaitTask (t: Task<string>) = async {
    let! r = Async.AwaitTask t
    return r
}

> downloadPageAsTask "https://www.google.com" |> testAwaitTask |> Async.RunSynchronously;;

The preceding function also downloads the web page as HTML content, but it starts the operation as a .NET task object.

Async.FromBeginEnd

The FromBeginEnd method acts as an adapter for the asynchronous workflow interface by wrapping the provided Begin/End methods. Thus, it allows using a large number of existing components that support an asynchronous mode of work. The IAsyncResult interface exposes the functions as a Begin/End pattern for asynchronous programming. We will look at the same download page example using FromBeginEnd:

let downloadPageBeginEnd (url: string) = async {
    let req = HttpWebRequest.Create(url)
    use! resp = Async.FromBeginEnd(req.BeginGetResponse, req.EndGetResponse)
    use respStream = resp.GetResponseStream()
    use sr = new StreamReader(respStream)
    return sr.ReadToEnd()
}

The function accepts two parameters and automatically identifies the return type; we use BeginGetResponse and EndGetResponse as the functions to call. Internally, Async.FromBeginEnd delegates the asynchronous operation and gets back the handle once EndGetResponse is called.

Async.FromContinuations

Creates an asynchronous computation that captures the current success, exception, and cancellation continuations. To understand these three operations, let's create a sleep function similar to Async.Sleep using a timer:

let sleep t = Async.FromContinuations(fun (cont, erFun, _) ->
    let rec timer = new Timer(TimerCallback(callback))
    and callback state =
        timer.Dispose()
        cont(())
    timer.Change(t, Timeout.Infinite) |> ignore
)

let testSleep = async {
    printfn "Before"
    do! sleep 5000
    printfn "After 5000 msecs"
}

Async.RunSynchronously testSleep

The sleep function takes an integer and returns a unit; it uses Async.FromContinuations to allow the flow of the program to continue when a timer event is raised. It does so by calling the cont(()) function, which is the continuation that allows the next step in the asynchronous flow to execute. If there is any error, we can call erFun to throw the exception, and it will be handled at the place where we call this function. Using the FromContinuations function helps us wrap and expose functionality as async, which can be used inside asynchronous workflows. It also helps to control the execution of the program, cancelling it or throwing errors using simple APIs.

Async.Start

Starts the asynchronous computation in the thread pool. It accepts an Async<unit> function to start the asynchronous computation:

let asyncDownloadPage(url) = async {
    let! result = downloadPage(url)
    printfn "%s" result
}

asyncDownloadPage "http://www.google.com"
|> Async.Start

We wrap the function in another async function that returns Async<unit> so that it can be called by Async.Start.

Async.StartChild

Starts a child computation within an asynchronous workflow. This allows multiple asynchronous computations to be executed simultaneously, as follows:

let subTask v = async {
    print "Task %d started" v
    Thread.Sleep (v * 1000)
    print "Task %d finished" v
    return v
}

let mainTask = async {
    print "Main task started"
    let! childTask1 = Async.StartChild (subTask 1)
    let! childTask2 = Async.StartChild (subTask 5)
    print "Subtasks started"
    let! child1Result = childTask1
    print "Subtask1 result: %d" child1Result
    let! child2Result = childTask2
    print "Subtask2 result: %d" child2Result
    print "Subtasks completed"
    return ()
}

Async.RunSynchronously mainTask
Async.StartAsTask

Executes a computation in the thread pool and returns a task that will be completed in the corresponding state once the computation terminates. We can use the same example of starting the downloadPage function as a task:

let downloadPageAsTask (url: string) = async {
    let req = HttpWebRequest.Create(url)
    use! resp = req.AsyncGetResponse()
    use respStream = resp.GetResponseStream()
    use sr = new StreamReader(respStream)
    return sr.ReadToEnd()
} |> Async.StartAsTask

let task = downloadPageAsTask("http://www.google.com")
printfn "Do some work"
task.Wait()
printfn "done"

Async.StartChildAsTask

Creates an asynchronous computation from within an asynchronous computation, which starts the given computation as a task:

let testAwaitTask = async {
    print "Starting"
    let! child = Async.StartChildAsTask <| async {
        print "Child started"
        Thread.Sleep(5000)
        print "Child finished"
        return 100
    }
    print "Waiting for the child task"
    let! result = Async.AwaitTask child
    print "Child result %d" result
}

Async.StartImmediate

Runs an asynchronous computation, starting immediately on the current operating system thread. This is very similar to the Async.Start function we saw earlier:

let asyncDownloadPage(url) = async {
    let! result = downloadPage(url)
    printfn "%s" result
}

asyncDownloadPage "http://www.google.com"
|> Async.StartImmediate

Async.SwitchToNewThread

Creates an asynchronous computation that creates a new thread and runs its continuation in it:

let asyncDownloadPage(url) = async {
    do! Async.SwitchToNewThread()
    let! result = downloadPage(url)
    printfn "%s" result
}

asyncDownloadPage "http://www.google.com"
|> Async.Start

Async.SwitchToThreadPool

Creates an asynchronous computation that queues a work item that runs its continuation, as follows:

let asyncDownloadPage(url) = async {
    do! Async.SwitchToNewThread()
    let! result = downloadPage(url)
    do! Async.SwitchToThreadPool()
    printfn "%s" result
}

asyncDownloadPage "http://www.google.com"
|> Async.Start

Async.SwitchToContext

Creates an asynchronous computation that runs its continuation in the Post method of the synchronization context. Let's assume that we want to set the text from the downloadPage function on a UI textbox; we would do it as follows:

let syncContext = System.Threading.SynchronizationContext()

let asyncDownloadPage(url) = async {
    do! Async.SwitchToContext(syncContext)
    let! result = downloadPage(url)
    textbox.Text <- result
}

asyncDownloadPage "http://www.google.com"
|> Async.Start

Note that in console applications the context will be null.

Async.Parallel

The Parallel function allows you to execute individual asynchronous computations queued in the thread pool, using the fork/join pattern:

let parallel_download() =
    let sites = ["http://www.bing.com";
                 "http://www.google.com";
                 "http://www.yahoo.com";
                 "http://www.search.com"]
    let htmlOfSites =
        Async.Parallel [for site in sites -> downloadPage site]
        |> Async.RunSynchronously
    printfn "%A" htmlOfSites

We will use the same example of downloading HTML content in a parallel way.
The preceding example shows the essence of parallel I/O computation:

The async function, { … }, in the downloadPage function shows the asynchronous computation.

These are then composed in parallel using the fork/join combinator.

In this sample, the composition waits synchronously for the overall result.

Async.OnCancel

Registers a handler that is invoked from inside the asynchronous computation when a cancellation occurs; it returns an asynchronous computation whose handler is triggered before the computation is disposed of.

// This is a simulated cancellable computation. It checks the token source
// to see whether the cancel signal was received.
let computation (tokenSource:System.Threading.CancellationTokenSource) =
    async {
        use! cancelHandler = Async.OnCancel(fun () -> printfn "Canceling operation.")
        // Async.Sleep checks for cancellation at the end of the sleep interval,
        // so loop over many short sleep intervals instead of sleeping
        // for a long time.
        while true do
            do! Async.Sleep(100)
    }

let tokenSource1 = new System.Threading.CancellationTokenSource()
let tokenSource2 = new System.Threading.CancellationTokenSource()

Async.Start(computation tokenSource1, tokenSource1.Token)
Async.Start(computation tokenSource2, tokenSource2.Token)
printfn "Started computations."
System.Threading.Thread.Sleep(1000)
printfn "Sending cancellation signal."
tokenSource1.Cancel()
tokenSource2.Cancel()

The preceding example uses the Async.OnCancel method to catch or interrupt the process when a CancellationTokenSource is cancelled.

Summary

In this article, we went through detailed explanations of the different semantics of asynchronous programming in F# using asynchronous workflows. We saw a number of functions from the Async module.

Resources for Article:

Further resources on this subject:

Creating an F# Project [article]
Asynchronous Control Flow Patterns with ES2015 and beyond [article]
Responsive Applications with Asynchronous Programming [article]
Basics of Python for Absolute Beginners

Packt
19 Jun 2017
5 min read
In this article by Bhaskar Das and Mohit Raj, authors of the book Learn Python in 7 Days, we will learn the basics of Python. The Python language had a humble beginning in the late 1980s, when a Dutchman, Guido van Rossum, started working on a fun project that would be a successor to the ABC language, with better exception handling and the capability to interface with the Amoeba OS, at Centrum Wiskunde & Informatica. It first appeared in 1991. Python 2.0 was released in the year 2000 and Python 3.0 was released in the year 2008. The language was named Python after the famous British television comedy show Monty Python's Flying Circus, which was one of Guido's favorite television programmes.

Here, we will see why Python has suddenly influenced our lives, various applications that use Python, and Python's implementations. In this article, you will learn the basic installation steps required on different platforms (that is, Windows, Linux, and Mac), about environment variables, setting up environment variables, file formats, the Python interactive shell, basic syntax, and, finally, printing out formatted output.

(For more resources related to this topic, see here.)

Why Python?

Now you might suddenly be bogged down by the question, why Python? According to the Institute of Electrical and Electronics Engineers (IEEE) 2016 ranking, Python ranked third after C and Java. As per Indeed.com's data for 2016, Python ranked fifth in job market searches. Clearly, all the data points to the ever-rising demand for Python in the job market. It's a cool language if you want to learn it just for fun, and you will adore the language if you want to build your career around Python. At the school level, many schools have started including Python programming for kids.

With new technologies taking the market by storm, Python has been playing a dominant role. Whether it's cloud platforms, mobile app development, Big Data, IoT with Raspberry Pi, or the new Blockchain technology, Python is being seen as a niche language platform to develop and deliver scalable and robust applications.

Some key features of the language are:

Python programs can run on any platform; you can carry code created on a Windows machine and run it on Mac or Linux.

Python has a large inbuilt library with prebuilt and portable functionality, known as the standard library.

Python is an expressive language.

Python is free and open source.

Python code is about one third of the size of equivalent C++ and Java code.

Python is both dynamically and strongly typed. In dynamic typing, the type of a variable is interpreted at runtime, which means that there is no need to define the type (int, float) of a variable in Python.

Python applications

One of the most famous platforms where Python is extensively used is YouTube. Other places where you will find Python being extensively used are special effects in Hollywood movies, drug evolution and discovery, traffic control systems, ERP systems, cloud hosting, e-commerce platforms, CRM systems, and whichever other field you can think of.

Versions

At the time of writing this book, the two main versions of the Python programming language available in the market were Python 2.x and Python 3.x. The stable releases at the time of writing this book were Python 2.7.13 and Python 3.6.0.

Implementations of Python

Major implementations include CPython, Jython, IronPython, MicroPython, and PyPy.
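Once an interpreter is installed, you can check which version and implementation you are running. The following snippet is a small illustrative example of ours (not from the original article); it works on both Python 2 and Python 3:

import sys
import platform

# Prints, for example, "CPython 2.7.13" or "CPython 3.6.0";
# on alternative implementations it would print Jython, IronPython, or PyPy
print("%s %s" % (platform.python_implementation(), sys.version.split()[0]))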
Installation

Here, we will walk through the installation of Python on three different OS platforms, namely Windows, Linux, and Mac OS. Let's begin with the Windows platform.

Installation on the Windows platform

Python 2.x can be downloaded from https://www.python.org/downloads. The installer is simple and easy to use. Follow these steps to install the setup:

Once you click on the setup installer, you will get a small window on your desktop screen as shown. Click on Next.

Provide a suitable installation folder to install Python. If you don't provide an installation folder, then the installer will automatically create one for you, as shown in the screenshot. Click on Next.

After the completion of Step 2, you will get a window to customize Python, as shown in the following screenshot. Note that the Add python.exe to Path option has been marked with an x. Select this option to add Python to the system path variable. Click on Next.

Finally, click Finish to complete the installation.

Summary

So far, we have walked through the beginnings and brief history of Python. We looked at the various implementations and flavors of Python. You also learned about installing Python on the Windows OS. We hope this article has incited enough interest in Python and serves as your first step into the kingdom of Python, with its enormous possibilities!

Resources for Article:

Further resources on this subject:

Layout Management for Python GUI [article]
Putting the Fun in Functional Python [article]
Basics of Jupyter Notebook and Python [article]
Exploring Functions

Packt
16 Jun 2017
12 min read
In this article by Marius Bancila, author of the book Modern C++ Programming Cookbook, we cover the following recipes:

Defaulted and deleted functions
Using lambdas with standard algorithms

(For more resources related to this topic, see here.)

Defaulted and deleted functions

In C++, classes have special members (constructors, destructor, and operators) that may be either implemented by default by the compiler or supplied by the developer. However, the rules for what can be default implemented are a bit complicated and can lead to problems. On the other hand, developers sometimes want to prevent objects from being copied, moved, or constructed in a particular way. That is possible by implementing different tricks using these special members. The C++11 standard has simplified many of these by allowing functions to be deleted or defaulted in the manner we will see below.

Getting started

For this recipe, you need to know what special member functions are, and what copyable and movable mean.

How to do it...

Use the following syntax to specify how functions should be handled:

To default a function, use =default instead of the function body. Only special class member functions that have defaults can be defaulted.

struct foo
{
    foo() = default;
};

To delete a function, use =delete instead of the function body. Any function, including non-member functions, can be deleted.

struct foo
{
    foo(foo const &) = delete;
};

void func(int) = delete;

Use defaulted and deleted functions to achieve various design goals, such as the following examples:

To implement a class that is not copyable, and implicitly not movable, declare the copy operations as deleted.

class foo_not_copiable
{
public:
    foo_not_copiable() = default;
    foo_not_copiable(foo_not_copiable const &) = delete;
    foo_not_copiable& operator=(foo_not_copiable const&) = delete;
};

To implement a class that is not copyable, but is movable, declare the copy operations as deleted and explicitly implement the move operations (and provide any additional constructors that are needed).

class data_wrapper
{
    Data* data;
public:
    data_wrapper(Data* d = nullptr) : data(d) {}
    ~data_wrapper() { delete data; }

    data_wrapper(data_wrapper const&) = delete;
    data_wrapper& operator=(data_wrapper const &) = delete;

    data_wrapper(data_wrapper&& o) : data(std::move(o.data))
    {
        o.data = nullptr;
    }

    data_wrapper& operator=(data_wrapper&& o)
    {
        if (this != &o)
        {
            delete data;
            data = std::move(o.data);
            o.data = nullptr;
        }
        return *this;
    }
};

To ensure a function is called only with objects of a specific type, and perhaps prevent type promotion, provide deleted overloads for the function (the example below with free functions can also be applied to any class member functions).

template <typename T>
void run(T val) = delete;

void run(long val) {} // can only be called with long integers

How it works...

A class has several special members that can be implemented by default by the compiler. These are the default constructor, copy constructor, move constructor, copy assignment, move assignment, and destructor. If you don't implement them, then the compiler does it, so that instances of the class can be created, moved, copied, and destructed. However, if you explicitly provide one or more, then the compiler will not generate the others, according to the following rules:

If a user-defined constructor exists, the default constructor is not generated by default.

If a user-defined destructor exists (virtual or not), the move constructor and move assignment operator are not generated by default.
- If a user-defined move constructor or move assignment operator exists, then the copy constructor and copy assignment operator are not generated by default.
- If a user-defined copy constructor, move constructor, copy assignment operator, move assignment operator, or destructor exists, then the move constructor and move assignment operator are not generated by default.
- If a user-defined copy constructor or destructor exists, then the copy assignment operator is generated by default.
- If a user-defined copy assignment operator or destructor exists, then the copy constructor is generated by default.

Note that the last two are deprecated rules and may no longer be supported by your compiler.

Sometimes developers need to provide empty implementations of these special members, or hide them, in order to prevent instances of the class from being constructed in a specific manner. A typical example is a class that is not supposed to be copyable. The classical pattern for this is to provide a default constructor and hide the copy constructor and copy assignment operator. While this works, the explicitly defined default constructor means the class is no longer considered trivial and, therefore, no longer a POD type (one that can be constructed with reinterpret_cast). The modern alternative is to use deleted functions, as shown in the previous section.

When the compiler encounters =default in the definition of a function, it provides the default implementation. The rules for special member functions mentioned earlier still apply. Functions can also be declared =default outside the body of a class, as shown in the following example:

```cpp
class foo
{
public:
    foo() = default;

    inline foo& operator=(foo const &);
};

inline foo& foo::operator=(foo const &) = default;
```

When the compiler encounters =delete in the definition of a function, it prevents calls to that function. However, the function is still considered during overload resolution, and only if the deleted function is the best match does the compiler generate an error. For example, given the previously defined overloads for the run() function, only calls with long integers are possible. Calls with arguments of any other type, including int, for which an automatic type promotion to long exists, would select the deleted overload as the best match, so the compiler generates an error:

```cpp
run(42);  // error, matches the deleted overload
run(42L); // OK, long integer arguments are allowed
```

Note that previously declared functions cannot be deleted; the =delete definition must be the first declaration in a translation unit:

```cpp
void forward_declared_function();
// ...
void forward_declared_function() = delete; // error
```

The rule of thumb (also known as The Rule of Five) for class special member functions is: if you explicitly define any of the copy constructor, move constructor, copy assignment operator, move assignment operator, or destructor, then you must either explicitly define or default all of them.
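Before moving on, here is how the move-only data_wrapper from this recipe behaves in client code. This is a minimal sketch, not part of the book's recipe; Data is reduced to an empty struct purely for illustration:

```cpp
#include <utility>

struct Data {}; // stand-in for the real payload type

// assume the data_wrapper class from the recipe is defined here

int main()
{
    data_wrapper a{ new Data{} };
    // data_wrapper b = a;          // error: the copy constructor is deleted
    data_wrapper c = std::move(a);  // OK: ownership transfers to c
    data_wrapper d;
    d = std::move(c);               // OK: move assignment releases d's old data first
    return 0;
}
```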
Using lambdas with standard algorithms

One of the most important modern features of C++ is lambda expressions, also referred to as lambda functions or simply lambdas. Lambda expressions enable us to define anonymous function objects that can capture variables in scope and be invoked or passed as arguments to functions. Lambdas are useful for many purposes, and in this recipe we will see how to use them with standard algorithms.

Getting ready

In this recipe, we discuss standard algorithms that take an argument that is a function or predicate applied to the elements they iterate through. You need to know what unary and binary functions are, and what predicates and comparison functions are. You also need to be familiar with function objects, because lambda expressions are syntactic sugar for function objects.

How to do it...

Prefer lambda expressions for passing callbacks to standard algorithms, instead of functions or function objects:

Define an anonymous lambda expression at the place of the call if you only need to use the lambda in a single place:

```cpp
auto numbers = std::vector<int>{ 0, 2, -3, 5, -1, 6, 8, -4, 9 };
auto positives = std::count_if(
    std::begin(numbers), std::end(numbers),
    [](int const n) { return n > 0; });
```

Define a named lambda, that is, one assigned to a variable (usually with the auto specifier for the type), if you need to call the lambda in multiple places:

```cpp
auto ispositive = [](int const n) { return n > 0; };
auto positives = std::count_if(
    std::begin(numbers), std::end(numbers),
    ispositive);
```

Use generic lambda expressions if you need lambdas that only differ in their argument types (available since C++14):

```cpp
auto positives = std::count_if(
    std::begin(numbers), std::end(numbers),
    [](auto const n) { return n > 0; });
```
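As a further illustration (a minimal sketch, not part of the recipe), the same style works with any algorithm that accepts a callable, such as sorting with a binary comparison lambda:

```cpp
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>

int main()
{
    auto numbers = std::vector<int>{ 0, 2, -3, 5, -1, 6, 8, -4, 9 };

    // sort by absolute value, using a comparison lambda
    std::sort(std::begin(numbers), std::end(numbers),
              [](int const a, int const b) { return std::abs(a) < std::abs(b); });

    for (auto const n : numbers)
        std::cout << n << ' ';
}
```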
How it works...

The non-generic lambda expression shown above takes a constant integer and returns true if it is greater than 0, or false otherwise. The compiler defines an unnamed function object with a call operator that has the signature of the lambda expression:

```cpp
struct __lambda_name__
{
    bool operator()(int const n) const { return n > 0; }
};
```

The way the unnamed function object is defined by the compiler depends on the way we define the lambda expression, which can capture variables, use the mutable specifier or exception specifications, or have a trailing return type. The __lambda_name__ function object shown earlier is actually a simplification of what the compiler generates, because it also defines a default copy constructor and move constructor, a default destructor, and a deleted assignment operator. It must be well understood that a lambda expression is actually a class. In order to call it, the compiler needs to instantiate an object of that class. The object instantiated from a lambda expression is called a lambda closure.

In the next example, we want to count the number of elements in a range that are greater than or equal to 5 and less than or equal to 10. The lambda expression, in this case, looks like this:

```cpp
auto numbers = std::vector<int>{ 0, 2, -3, 5, -1, 6, 8, -4, 9 };
auto start{ 5 };
auto end{ 10 };
auto inrange = std::count_if(
    std::begin(numbers), std::end(numbers),
    [start, end](int const n) { return start <= n && n <= end; });
```

This lambda captures two variables, start and end, by copy (that is, by value). The resulting unnamed function object created by the compiler looks very much like the one we defined above. With the defaulted and deleted special members mentioned earlier, the class looks like this:

```cpp
class __lambda_name_2__
{
    int start_;
    int end_;
public:
    explicit __lambda_name_2__(int const start, int const end)
        : start_(start), end_(end) {}

    __lambda_name_2__(const __lambda_name_2__&) = default;
    __lambda_name_2__(__lambda_name_2__&&) = default;
    __lambda_name_2__& operator=(const __lambda_name_2__&) = delete;
    ~__lambda_name_2__() = default;

    bool operator()(int const n) const
    {
        return start_ <= n && n <= end_;
    }
};
```

A lambda expression can capture variables by copy (or value) or by reference, and different combinations of the two are possible. However, it is not possible to capture a variable multiple times, and it is only possible to have & or = at the beginning of the capture list. A lambda can only capture variables from an enclosing function scope. It cannot capture variables with static storage duration (that is, variables declared in namespace scope or with the static or extern specifier).

The following table shows various combinations of lambda capture semantics:

| Lambda | Description |
| --- | --- |
| [](){} | Does not capture anything |
| [&](){} | Captures everything by reference |
| [=](){} | Captures everything by copy |
| [&x](){} | Captures only x by reference |
| [x](){} | Captures only x by copy |
| [&x...](){} | Captures the pack expansion x by reference |
| [x...](){} | Captures the pack expansion x by copy |
| [&, x](){} | Captures everything by reference, except for x, which is captured by copy |
| [=, &x](){} | Captures everything by copy, except for x, which is captured by reference |
| [&, this](){} | Captures everything by reference, except for the pointer this, which is captured by copy (this is always captured by copy) |
| [x, x](){} | Error, x is captured twice |
| [&, &x](){} | Error, everything is captured by reference; cannot specify again to capture x by reference |
| [=, =x](){} | Error, everything is captured by copy; cannot specify again to capture x by copy |
| [&this](){} | Error, the pointer this is always captured by copy |
| [&, =](){} | Error, cannot capture everything both by copy and by reference |

The general form of a lambda expression, as of C++17, looks like this:

```cpp
[capture-list](params) mutable constexpr exception attr -> ret { body }
```

All parts shown in this syntax are optional, except for the capture list (which can, however, be empty) and the body (which can also be empty). The parameter list can be omitted if no parameters are needed. The return type does not need to be specified, as the compiler can infer it from the type of the returned expression. The mutable specifier (which tells the compiler the lambda can modify variables captured by copy), the constexpr specifier (which tells the compiler to generate a constexpr call operator), and the exception specifiers and attributes are all optional. The simplest possible lambda expression is []{}, though it is often written as [](){}.

There's more...

There are cases when lambda expressions only differ in the types of their arguments. In such cases, the lambdas can be written in a generic way, just like templates, but using the auto specifier for the type parameters (no template syntax is involved).

Summary

Functions are a fundamental concept in programming; regardless of the topic, we end up writing functions. This article contains recipes related to functions, covering modern language features for functions and callable objects.


Go Programming Control Flow

Packt
10 Aug 2016
13 min read
In this article, Vladimir Vivien, author of the book Learning Go Programming, explains the basic control flow constructs of the Go programming language. Go borrows several of its control flow syntax elements from its C family of languages. It supports all of the expected control structures, including if-else, switch, for loops, and even goto. Conspicuously absent, though, are while and do-while statements. The following topics examine Go's control flow elements, some of which you may already be familiar with, and others that bring a set of functionalities not found in other languages:

- The if statement
- The switch statement
- The type switch

The If Statement

The if statement in Go borrows its basic structural form from other C-like languages. The statement conditionally executes a code block when the Boolean expression that follows the if keyword evaluates to true, as illustrated in the following abbreviated program that displays information about world currencies:

```go
import "fmt"

type Currency struct {
	Name    string
	Country string
	Number  int
}

var CAD = Currency{
	Name: "Canadian Dollar", Country: "Canada", Number: 124}

var FJD = Currency{
	Name: "Fiji Dollar", Country: "Fiji", Number: 242}

var JMD = Currency{
	Name: "Jamaican Dollar", Country: "Jamaica", Number: 388}

var USD = Currency{
	Name: "US Dollar", Country: "USA", Number: 840}

func main() {
	num0 := 242
	if num0 > 100 || num0 < 900 {
		fmt.Println("Currency: ", num0)
		printCurr(num0)
	} else {
		fmt.Println("Currency unknown")
	}

	if num1 := 388; num1 > 100 || num1 < 900 {
		fmt.Println("Currency:", num1)
		printCurr(num1)
	}
}

func printCurr(number int) {
	if CAD.Number == number {
		fmt.Printf("Found: %+v\n", CAD)
	} else if FJD.Number == number {
		fmt.Printf("Found: %+v\n", FJD)
	} else if JMD.Number == number {
		fmt.Printf("Found: %+v\n", JMD)
	} else if USD.Number == number {
		fmt.Printf("Found: %+v\n", USD)
	} else {
		fmt.Println("No currency found with number", number)
	}
}
```

The if statement in Go looks similar to that of other languages. However, it sheds a few syntactic rules while enforcing new ones.

The parentheses around the test expression are not necessary. While the following if statement will compile, it is not idiomatic:

```go
if (num0 > 100 || num0 < 900) {
	fmt.Println("Currency: ", num0)
	printCurr(num0)
}
```

Use instead:

```go
if num0 > 100 || num0 < 900 {
	fmt.Println("Currency: ", num0)
	printCurr(num0)
}
```

The curly braces for the code block are always required. The following snippet will not compile:

```go
if num0 > 100 || num0 < 900
	printCurr(num0)
```

However, this will compile:

```go
if num0 > 100 || num0 < 900 {printCurr(num0)}
```

It is idiomatic, however, to write the if statement on multiple lines, no matter how simple the statement block may be. This encourages good style and clarity. The preferred layout for the statement above is:

```go
if num0 > 100 || num0 < 900 {
	printCurr(num0)
}
```

The if statement may include an optional else block, which is executed when the expression in the if block evaluates to false. The code in the else block must be wrapped in curly braces across multiple lines, as shown in the following:

```go
if num0 > 100 || num0 < 900 {
	fmt.Println("Currency: ", num0)
	printCurr(num0)
} else {
	fmt.Println("Currency unknown")
}
```

The else keyword may be immediately followed by another if statement, forming an if-else-if chain, as used in the printCurr() function from the source code listed earlier.
```go
if CAD.Number == number {
	fmt.Printf("Found: %+v\n", CAD)
} else if FJD.Number == number {
	fmt.Printf("Found: %+v\n", FJD)
}
```

The if-else-if statement chain can grow as long as needed and may be terminated by an optional else statement to handle all other untested conditions. Again, this is done in the printCurr() function, which tests four conditions using if-else-if blocks. Lastly, it includes an else statement block to catch any other untested conditions:

```go
func printCurr(number int) {
	if CAD.Number == number {
		fmt.Printf("Found: %+v\n", CAD)
	} else if FJD.Number == number {
		fmt.Printf("Found: %+v\n", FJD)
	} else if JMD.Number == number {
		fmt.Printf("Found: %+v\n", JMD)
	} else if USD.Number == number {
		fmt.Printf("Found: %+v\n", USD)
	} else {
		fmt.Println("No currency found with number", number)
	}
}
```

In Go, however, the idiomatic and cleaner way to write such a deep if-else-if code block is to use an expressionless switch statement. This is covered later, in the section on switch statements.

If Statement Initialization

The if statement supports a composite syntax in which the tested expression is preceded by an initialization statement. At runtime, the initialization is executed before the test expression is evaluated, as illustrated in this code snippet (from the program listed earlier):

```go
if num1 := 388; num1 > 100 || num1 < 900 {
	fmt.Println("Currency:", num1)
	printCurr(num1)
}
```

The initialization statement follows normal variable declaration and initialization rules. The scope of the initialized variables is bound to the if statement block, beyond which they become unreachable. This is a commonly used idiom in Go and is supported in the other flow control constructs covered in this article.
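To make the scoping rule concrete, the following sketch (not from the book's listing) shows that a variable declared in the if initializer is visible inside the if and else blocks, but not after them:

```go
package main

import "fmt"

func main() {
	if num := 388; num > 100 {
		fmt.Println("in scope:", num) // num is visible here...
	} else {
		fmt.Println("also in scope:", num) // ...and in the else block
	}
	// fmt.Println(num) // compile error: undefined: num
}
```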
Switch Statements

Go also supports a switch statement similar to that found in other languages such as C or Java. The switch statement in Go achieves multi-way branching by evaluating values or expressions from case clauses, as shown in the following abbreviated source code:

```go
import "fmt"

type Curr struct {
	Currency string
	Name     string
	Country  string
	Number   int
}

var currencies = []Curr{
	Curr{"DZD", "Algerian Dinar", "Algeria", 12},
	Curr{"AUD", "Australian Dollar", "Australia", 36},
	Curr{"EUR", "Euro", "Belgium", 978},
	Curr{"CLP", "Chilean Peso", "Chile", 152},
	Curr{"EUR", "Euro", "Greece", 978},
	Curr{"HTG", "Gourde", "Haiti", 332},
	// ...
}

func isDollar(curr Curr) bool {
	var result bool
	switch curr {
	default:
		result = false
	case Curr{"AUD", "Australian Dollar", "Australia", 36}:
		result = true
	case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}:
		result = true
	case Curr{"USD", "US Dollar", "United States", 840}:
		result = true
	}
	return result
}

func isDollar2(curr Curr) bool {
	dollars := []Curr{currencies[2], currencies[6], currencies[9]}
	switch curr {
	default:
		return false
	case dollars[0]:
		fallthrough
	case dollars[1]:
		fallthrough
	case dollars[2]:
		return true
	}
	return false
}

func isEuro(curr Curr) bool {
	switch curr {
	case currencies[2], currencies[4], currencies[10]:
		return true
	default:
		return false
	}
}

func main() {
	curr := Curr{"EUR", "Euro", "Italy", 978}
	if isDollar(curr) {
		fmt.Printf("%+v is Dollar currency\n", curr)
	} else if isEuro(curr) {
		fmt.Printf("%+v is Euro currency\n", curr)
	} else {
		fmt.Println("Currency is not Dollar or Euro")
	}

	dol := Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}
	if isDollar2(dol) {
		fmt.Println("Dollar currency found:", dol)
	}
}
```

The switch statement in Go has some interesting properties and rules that make it easy to use and reason about:

- Semantically, Go's switch statement can be used in two contexts: an expression switch statement or a type switch statement.
- The break statement can be used to escape out of a switch code block early.
- The switch statement can include a default case for when no other case expression evaluates to a match. There can be only one default case, and it may be placed anywhere within the switch block.

Using Expression Switches

Expression switches are flexible and can be used in many contexts where the control flow of a program needs to follow multiple paths. An expression switch supports several attributes, as outlined in the following points.

Expression switches can test values of any type. For instance, the following code snippet (from the previous program listing) tests values of the struct type Curr:

```go
func isDollar(curr Curr) bool {
	var result bool
	switch curr {
	default:
		result = false
	case Curr{"AUD", "Australian Dollar", "Australia", 36}:
		result = true
	case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}:
		result = true
	case Curr{"USD", "US Dollar", "United States", 840}:
		result = true
	}
	return result
}
```

The expressions in case clauses are evaluated from left to right, top to bottom, until a value (or expression) is found that is equal to that of the switch expression. Upon encountering the first case that matches the switch expression, the program executes the statements for that case block and then immediately exits the switch block. Unlike other languages, a Go case statement does not need a break to avoid falling through to the next case. For instance, calling isDollar(Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}) will match the second case statement in the function above. The code sets result to true and exits the switch code block immediately.

Case clauses can have multiple values (or expressions) separated by commas, with a logical OR operator implied between them. For instance, in the following snippet, the switch expression curr is tested against the values currencies[2], currencies[4], or currencies[10], using one case clause, until a match is found:

```go
func isEuro(curr Curr) bool {
	switch curr {
	case currencies[2], currencies[4], currencies[10]:
		return true
	default:
		return false
	}
}
```

The switch statement is the cleaner and preferred idiomatic approach to writing complex conditional statements in Go. This is evident when the snippet above is compared to the following, which does the same comparison using if statements:

```go
func isEuro(curr Curr) bool {
	if curr == currencies[2] || curr == currencies[4] ||
		curr == currencies[10] {
		return true
	} else {
		return false
	}
}
```

Fallthrough Cases

There is no automatic fallthrough in Go's case clauses, as there is in the C or Java switch statements. Recall that a switch block exits after executing its first matching case. The code must explicitly place the fallthrough keyword, as the last statement in a case block, to force the execution flow to fall through to the successive case block. The following code snippet shows a switch statement with a fallthrough in each case block:

```go
func isDollar2(curr Curr) bool {
	switch curr {
	case Curr{"AUD", "Australian Dollar", "Australia", 36}:
		fallthrough
	case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}:
		fallthrough
	case Curr{"USD", "US Dollar", "United States", 840}:
		return true
	default:
		return false
	}
}
```

When a case is matched, the fallthrough statements cascade down to the first statement of the successive case block. So, if curr = Curr{"AUD", "Australian Dollar", "Australia", 36}, the first case will be matched.
Then the flow cascades down to the first statement of the second case block, which is also a fallthrough statement. This causes the first statement, return true, of the third case block to execute. This is functionally equivalent to the following snippet:

```go
switch curr {
case Curr{"AUD", "Australian Dollar", "Australia", 36},
	Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344},
	Curr{"USD", "US Dollar", "United States", 840}:
	return true
default:
	return false
}
```

Expressionless Switches

Go supports a form of the switch statement that does not specify an expression. In this format, each case expression must evaluate to the Boolean value true. The following abbreviated source code illustrates the use of an expressionless switch statement, as listed in the function find(). The function loops through the slice of Curr values to search for a match based on field values in the struct passed in:

```go
import (
	"fmt"
	"strings"
)

type Curr struct {
	Currency string
	Name     string
	Country  string
	Number   int
}

var currencies = []Curr{
	Curr{"DZD", "Algerian Dinar", "Algeria", 12},
	Curr{"AUD", "Australian Dollar", "Australia", 36},
	Curr{"EUR", "Euro", "Belgium", 978},
	Curr{"CLP", "Chilean Peso", "Chile", 152},
	// ...
}

func find(name string) {
	for i := 0; i < 10; i++ {
		c := currencies[i]
		switch {
		case strings.Contains(c.Currency, name),
			strings.Contains(c.Name, name),
			strings.Contains(c.Country, name):
			fmt.Println("Found", c)
		}
	}
}
```

Notice that in the previous example, the switch statement in the function find() does not include an expression. Each case expression is separated by a comma and must evaluate to a Boolean value, with an implied OR operator between each case. The previous switch statement is equivalent to the following use of an if statement to achieve the same logic:

```go
func find(name string) {
	for i := 0; i < 10; i++ {
		c := currencies[i]
		if strings.Contains(c.Currency, name) ||
			strings.Contains(c.Name, name) ||
			strings.Contains(c.Country, name) {
			fmt.Println("Found", c)
		}
	}
}
```

Switch Initializer

The switch keyword may be immediately followed by a simple initialization statement in which variables, local to the switch code block, may be declared and initialized. This convenient syntax uses a semicolon between the initializer statement and the switch expression. The following code sample shows how this is done, initializing two variables, name and curr, as part of the switch declaration:

```go
func assertEuro(c Curr) bool {
	switch name, curr := "Euro", "EUR"; {
	case c.Name == name:
		return true
	case c.Currency == curr:
		return true
	}
	return false
}
```

The previous code snippet uses an expressionless switch statement with an initializer. Notice the trailing semicolon, which marks the separation between the initialization statement and the expression area of the switch. In this example, however, the switch expression is empty.

Type Switches

Given Go's strong type support, it should come as little surprise that the language supports the ability to query type information. The type switch is a statement that uses the Go interface type to compare the underlying type information of values (or expressions). A full discussion of interface types and type assertions is beyond the scope of this section. For now, all you need to know is that Go offers the type interface{}, or empty interface, as a super type that is implemented by all other types in the type system.
When a value is assigned the type interface{}, it can be queried using a type switch, as shown in the function findAny() in the following code snippet, to query information about its underlying type:

```go
func find(name string) {
	for i := 0; i < 10; i++ {
		c := currencies[i]
		switch {
		case strings.Contains(c.Currency, name),
			strings.Contains(c.Name, name),
			strings.Contains(c.Country, name):
			fmt.Println("Found", c)
		}
	}
}

func findNumber(num int) {
	for _, curr := range currencies {
		if curr.Number == num {
			fmt.Println("Found", curr)
		}
	}
}

func findAny(val interface{}) {
	switch i := val.(type) {
	case int:
		findNumber(i)
	case string:
		find(i)
	default:
		fmt.Printf("Unable to search with type %T\n", val)
	}
}

func main() {
	findAny("Peso")
	findAny(404)
	findAny(978)
	findAny(false)
}
```

The function findAny() takes an interface{} as its parameter. The type switch is used to determine the underlying type and value of the variable val, using the type assertion expression:

```go
switch i := val.(type)
```

Notice the use of the keyword type in the type assertion expression. Each case clause is tested against the type information queried from val.(type). The variable i is assigned the actual value of the underlying type and is used to invoke a function with that value. The default block is invoked to guard against any unexpected type assigned to the parameter val. The function findAny() may then be invoked with values of diverse types, as shown in the following code snippet:

```go
findAny("Peso")
findAny(404)
findAny(978)
findAny(false)
```
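A type switch is not limited to the two concrete types shown above; a case can be added for any type the interface value might hold. The following sketch (an extension of findAny for illustration, not part of the original listing) also handles float64 and bool values:

```go
func findAny(val interface{}) {
	switch i := val.(type) {
	case int:
		findNumber(i)
	case string:
		find(i)
	case float64:
		findNumber(int(i)) // narrow to int before searching
	case bool:
		fmt.Println("cannot search with a boolean value:", i)
	default:
		fmt.Printf("Unable to search with type %T\n", val)
	}
}
```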
Summary

This article gave a walkthrough of the mechanisms of control flow in Go, including the if and switch statements. While Go's flow control constructs appear simple and easy to use, they are powerful and implement all the branching primitives expected of a modern language.


Learning RSLogix 5000 – Buffering I/O Module Input and Output Values

Packt
03 Sep 2015
10 min read
In the following article by Austin Scott, the author of Learning RSLogix 5000 Programming, you will be introduced to the high-performance, asynchronous nature of the Logix family of controllers and the requirement it creates for buffering I/O module data. You will learn various techniques for buffering I/O module values in RSLogix 5000 and Studio 5000 Logix Designer. You will also learn about the IEC languages that do not require input or output module buffering techniques to be applied to them. In order to understand the need for buffering, let's start by exploring the evolution of the modern line of Rockwell Automation controllers.

ControlLogix controllers

The ControlLogix controller was first launched in 1997 as a replacement for Allen-Bradley's previous large-scale control platform, the PLC-5. ControlLogix represented a significant technological step forward, which included a 32-bit ARM-6 RISC-core microprocessor and the ABrisc Boolean processor, combined with a bus interface on the same silicon chip. At launch, the Series 5 ControlLogix controllers (also referred to as L5 and ControlLogix 5550, now superseded by the L6 and L7 series controllers) were able to execute code three times faster than the PLC-5. The following is an illustration of the original ControlLogix L5 controller:

ControlLogix L5 Controller

The L5 controller is considered to be a PAC (Programmable Automation Controller) rather than a traditional PLC (Programmable Logic Controller), due to its modern design, power, and capabilities beyond those of a traditional PLC (such as motion control, advanced networking, batching, and sequential control). ControlLogix represented a significant technological step forward for Rockwell Automation, but this new technology also presented new challenges for automation professionals. ControlLogix was built on a modern asynchronous operating model, rather than the more traditional synchronous model used by all previous generations of controllers. The asynchronous operating model requires a different approach to real-time software development in RSLogix 5000 (known in version 20 and higher as Studio 5000 Logix Designer).

Logix operating cycle

The entire Logix family of controllers (ControlLogix and CompactLogix) has diverged from the traditional synchronous PLC scan architecture in favor of more efficient asynchronous operation. Like most modern computer systems, asynchronous operation allows the Logix controller to handle multiple tasks at the same time by slicing the processing time between each task. The continuous update of information in an asynchronous processor creates some programming challenges, which we will explore in this article. The following diagram illustrates the difference between synchronous and asynchronous operation:

Synchronous versus Asynchronous Processor Operation

Addressing module I/O data

Individual channels on a module can be referenced in your Logix Designer / RSLogix 5000 programs using their addresses. An address gives the controller directions to where it can find a particular piece of information about a channel on one of your modules. The following diagram illustrates the components of an address in RSLogix 5000 or Studio 5000 Logix Designer:

The Components of an I/O Module Address in Logix

Module I/O tags can be viewed using the Controller Tags window, as the following screenshot illustrates.
I/O Module Tags in the Studio 5000 Logix Designer Controller Tags Window

Using the module I/O tags, input and output module data can be directly accessed anywhere within a logic routine. However, it is recommended that we buffer module I/O data before evaluating it in logic. Otherwise, due to the asynchronous tag value updates of the I/O modules, the state of our process values could change partway through logic execution, creating unpredictable results. In the next section, we will introduce the concept of module I/O data buffering.

Buffering module I/O data

In the olden days of the PLC-5 and SLC 500, before we had access to high-performance asynchronous controllers like the ControlLogix, SoftLogix, and CompactLogix families, program execution was sequential (synchronous) and very predictable. In asynchronous controllers, there are many activities happening at the same time. Input and output values can change in the middle of a program scan and put the program in an unpredictable state. Imagine a program starting a pump in one line of code and then closing a valve directly in front of that pump in the next line of code, because it detected a change in process conditions.

In order to address this issue, we use a technique called buffering and, depending on the version of Logix you are developing on, there are a few different methods of achieving it. Buffering is a technique whereby the program code does not directly access the real input or output tags on the modules during the execution of a program. Instead, the input and output module tags are copied, at the beginning of a program's scan, to a set of base tags that will not change state during the program's execution. Think of buffering as taking a snapshot of the process conditions and making decisions on those static values, rather than on live values that are fluctuating every millisecond.

Today, most automation companies have a rule requiring programmers to write code that buffers I/O data to base tags that will not change during a program's execution. The two widely accepted methods of buffering are:

- Buffering to base tags
- Program parameter buffering (only available in Logix version 24 and higher)

Do not underestimate the importance of buffering a program's I/O. I worked on an expansion project for a process control system where the original programmers had failed to implement buffering. Once a month, the process would land in a strange state from which the program could not recover. The operators had attributed these problems to "gremlins" for years, until I identified and corrected the issue.

Buffering to base tags

Logic can be organized into manageable pieces and executed based on different intervals and conditions. The buffering to base tags practice takes advantage of Logix's ability to organize code into routines. The default ladder logic routine that is created in every new Logix project is called MainRoutine.
The recommended best practice for buffering tags in ladder logic is to create three routines:

- One for reading input values and buffering them
- One for executing logic
- One for writing the output values from the buffered values

The following ladder logic excerpt is from the MainRoutine of a program that implements input and output buffering:

MainRoutine Ladder Logic Routine with Input and Output Buffering Subroutine Calls

The following ladder logic is taken from the BufferInputs routine and demonstrates the buffering of digital input module tag values to local tags prior to executing our PumpControl routine:

Ladder Logic Routine with Input Module Buffering

After our input module values have been buffered to local tags, we can execute our process logic in the PumpControl routine without having to worry about values changing in the middle of the routine's execution. The following ladder logic code determines whether all the conditions are met to run a pump:

Pump Control Ladder Logic Routine

Finally, after all of our process logic has finished executing, we can write the resulting values to our digital output modules. The following ladder logic routine, BufferOutputs, copies the resulting RunPump value to the digital output module tag:

Ladder Logic Routine with Output Module Buffering

We have now buffered our module inputs and module outputs in order to ensure they do not change in the middle of a program's execution and potentially put our process into an undesired state.

Buffering Structured Text I/O module values

Just like in ladder logic, Structured Text I/O module values should be buffered at the beginning of a routine, or prior to executing a routine, in order to prevent the values from changing mid-execution and putting the process into a state you could not have predicted. The following is an example of the ladder logic buffering routines written in Structured Text (ST), using the non-retentive assignment operator:

```
(* I/O Buffering in Structured Text - Input Buffering *)
StartPump [:=] Local:2:I.Data[0].0;
HighPressure [:=] Local:2:I.Data[0].1;
PumpStartManualOverride [:=] Local:2:I.Data[0].2;

(* I/O Buffering in Structured Text - Output Buffering *)
Local:3:O.Data[0].0 [:=] RunPump;
```

Function Block Diagram (FBD) and Sequential Function Chart (SFC) I/O module buffering

Within Rockwell Automation's Logix platform, all of the supported IEC languages (ladder logic, Structured Text, Function Block, and Sequential Function Chart) compile down to the same controller bytecode language, although the available functions and development interfaces of the various Logix programming languages differ greatly. Function Block Diagrams (FBD) and Sequential Function Charts (SFC) always automatically buffer input values prior to executing logic. Once a Function Block Diagram or Sequential Function Chart has completed the execution of its logic, it writes all output module values at the same time. There is no need to perform buffering in FBD or SFC routines, as it is handled automatically.
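Putting the three-routine pattern together in Structured Text form, a minimal sketch might look like the following. The tag names match the earlier examples, but the pump interlock condition is a simplified assumption for illustration, not the book's actual PumpControl logic:

```
(* 1. Buffer the inputs *)
StartPump [:=] Local:2:I.Data[0].0;
HighPressure [:=] Local:2:I.Data[0].1;
PumpStartManualOverride [:=] Local:2:I.Data[0].2;

(* 2. Execute process logic against the buffered values only *)
RunPump := (StartPump OR PumpStartManualOverride) AND NOT HighPressure;

(* 3. Write the outputs from the buffered result *)
Local:3:O.Data[0].0 [:=] RunPump;
```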
Buffering using program parameters

A program parameter is a powerful new feature in Logix that allows dynamic values to be associated with tags and programs as parameters. The importance of program parameters is clear from the way they permeate the user interface in newer versions of Logix Designer (version 24 and higher). Program parameters are extremely powerful, but the key benefit for us is that they are automatically buffered. This means that we could have achieved the same result in one ladder logic rung, rather than the eight we created in the previous exercise.

There are four types of program parameters:

- Input: This program parameter is automatically buffered and passed into a program on each scan cycle.
- Output: This program parameter is automatically updated at the end of a program (as a result of executing that program) on each scan cycle. It is similar to the way we buffered our output module value in the previous exercise.
- InOut: This program parameter is updated at the start of the program scan and again at the end of the program scan. It is also important to note that, unlike the input and output parameters, the InOut parameter is passed as a pointer in memory. A pointer shares a piece of memory with other processes rather than creating a copy of it, which means that it is possible for an InOut parameter to change its value in the middle of a program scan. This makes InOut program parameters unsuitable for buffering when used on their own.
- Public: This program parameter behaves like a normal controller tag and can be connected to input, output, and InOut parameters. Similar to the InOut parameter, public parameters are updated globally as their values change, which also makes them unsuitable for buffering when used on their own. Public program parameters are primarily used for passing large data structures between programs on a controller.

In Logix Designer version 24 and higher, a program parameter can be associated with a local tag using the Parameters and Local Tags window in the Controller Organizer (formerly called Program Tags). The module input channel can be associated with a base tag within your program scope using the Parameter Connections; add the module input value as a parameter connection. The previous screenshot demonstrates how we would associate the input module channel with our StartPump base tag using the Parameter Connection value.

Summary

In this article, we explored the asynchronous nature of the Logix family of controllers. We learned the importance of buffering input module and output module values in ladder logic and Structured Text routines. We also learned that, due to the way Function Block Diagram (FBD) and Sequential Function Chart (SFC) routines execute, there is no need to buffer input module or output module tag values in them. Finally, we introduced the concept of buffering tags using program parameters in version 24 and higher of Studio 5000 Logix Designer.

Consuming Diagnostic Analyzers in .NET projects

Packt
20 Feb 2018
6 min read
We know how to write diagnostic analyzers to analyze and report issues in .NET source code and contribute them to the .NET developer community. In this article by Manish Vasani, author of the book Roslyn Cookbook, we will show you how to search for, install, view, and configure the analyzers that have already been published by various analyzer authors on NuGet and the VS Extension Gallery. We will cover the following recipes:

- Searching and installing analyzers through the NuGet Package Manager.
- Searching and installing VSIX analyzers through the VS Extension Gallery.
- Viewing and configuring analyzers in Solution Explorer in Visual Studio.
- Using a ruleset file and the ruleset editor to configure analyzers.

Diagnostic analyzers are extensions to the Roslyn C# compiler and the Visual Studio IDE that analyze user code and report diagnostics. Users see these diagnostics in the error list after building the project from Visual Studio, and even when building the project on the command line. They also see the diagnostics live while editing source code in the Visual Studio IDE. Analyzers can report diagnostics to enforce specific code styles, improve code quality and maintenance, recommend design guidelines, or even report very domain-specific issues that cannot be covered by the core compiler. Analyzers can be installed in a .NET project either as a NuGet package or as a VSIX; the recipes that follow will give you a better understanding of these packaging schemes and the differences in the analyzer experience when an analyzer is installed as a NuGet package versus a VSIX. Analyzers are supported in the various flavors of .NET Standard, .NET Core, and .NET Framework projects, for example, class libraries and console apps.

Searching and installing analyzers through the NuGet Package Manager

In this recipe, we will show you how to search for and install analyzer NuGet packages in the NuGet Package Manager in Visual Studio, and see how the analyzer diagnostics from an installed NuGet package light up in the project build and as live diagnostics during code editing in Visual Studio.

Getting ready

You will need Visual Studio 2017 installed on your machine for this recipe. You can install the free Community edition of Visual Studio 2017 from https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15.

How to do it...

1. Create a C# class library project, say ClassLibrary, in Visual Studio 2017.
2. In Solution Explorer, right-click on the solution or project node and execute the Manage NuGet Packages command. This brings up the NuGet Package Manager, which can be used to search for and install NuGet packages in the solution or project.
3. In the search bar, type the following text to find NuGet packages tagged as analyzers: Tags:"analyzers". Note that some of the well-known packages are tagged as analyzer instead, so you may also want to search: Tags:"analyzer".
4. Check or uncheck the Include prerelease checkbox to the right of the search bar to show or hide prerelease analyzer packages, respectively. The packages are listed by number of downloads, with the most downloaded package at the top.
5. Select a package to install, say System.Runtime.Analyzers, pick a specific version, say 1.1.0, and click Install.
6. Click the I Accept button in the License Acceptance dialog to install the NuGet package.
7. Verify that the installed analyzer(s) show up under the Analyzers node in Solution Explorer.
8. Verify that the project file has a new ItemGroup with the following analyzer references from the installed analyzer package:

```xml
<ItemGroup>
  <Analyzer Include="..\packages\System.Runtime.Analyzers.1.1.0\analyzers\dotnet\cs\System.Runtime.Analyzers.dll" />
  <Analyzer Include="..\packages\System.Runtime.Analyzers.1.1.0\analyzers\dotnet\cs\System.Runtime.CSharp.Analyzers.dll" />
</ItemGroup>
```

9. Add the following code to your C# project:

```csharp
namespace ClassLibrary
{
    public class MyAttribute : System.Attribute
    {
    }
}
```

10. Verify that the analyzer diagnostic from the installed analyzer is shown in the error list.
11. Open a Visual Studio 2017 Developer Command Prompt and build the project to verify that the analyzer is executed on the command-line build and that the analyzer diagnostic is reported.
12. Create a new C# project in VS 2017, add the same code to it as in step 9, and verify that no analyzer diagnostic shows up in the error list or on the command line, confirming that the analyzer package was only installed for the project selected in steps 1 to 6.

Note that CA1018 (Custom attribute should have AttributeUsage defined) has been moved to a separate analyzer assembly in later versions of the FxCop/System.Runtime.Analyzers package. It is recommended that you install the Microsoft.CodeAnalysis.FxCopAnalyzers NuGet package to get the latest group of Microsoft-recommended analyzers.
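To address the CA1018 diagnostic reported in step 10, the attribute class can be annotated with AttributeUsage. A minimal sketch of the fixed code looks like this:

```csharp
using System;

namespace ClassLibrary
{
    // declaring the valid attribute targets satisfies CA1018
    [AttributeUsage(AttributeTargets.All)]
    public class MyAttribute : Attribute
    {
    }
}
```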
Searching and installing VSIX analyzers through the VS Extension Gallery

In this recipe, we will show you how to search for and install analyzer VSIX packages in the Visual Studio Extension Manager, and see how the analyzer diagnostics from an installed VSIX light up as live diagnostics during code editing in Visual Studio.

Getting ready

You will need Visual Studio 2017 installed on your machine for this recipe. You can install the free Community edition of Visual Studio 2017 from https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15.

How to do it...

1. Create a C# class library project, say ClassLibrary, in Visual Studio 2017.
2. From the top-level menu, execute Tools | Extensions and Updates.
3. Navigate to Online | Visual Studio Marketplace in the left tab of the dialog to view the available VSIXes in the Visual Studio extension gallery/marketplace.
4. Search for analyzers in the search text box in the upper-right corner of the dialog and download an analyzer VSIX, say Refactoring Essentials for Visual Studio.
5. Once the download completes, you will get a message at the bottom of the dialog that the install will be scheduled to execute once Visual Studio and related windows are closed. Close the dialog and then close the Visual Studio instance to start the install.
6. In the VSIX Installer dialog, click Modify to start the installation. The subsequent message prompts you to kill all the active Visual Studio and satellite processes. Save all your relevant work in all the open Visual Studio instances, then click End Tasks to kill these processes and install the VSIX.
7. After installation, restart VS, click Tools | Extensions and Updates, and verify that the Refactoring Essentials VSIX is installed.
8. Create a new C# project with the following source code and verify that the analyzer diagnostic RECS0085 (Redundant array creation expression) appears in the error list:

```csharp
namespace ClassLibrary
{
    public class Class1
    {
        void Method()
        {
            int[] values = new int[] { 1, 2, 3 };
        }
    }
}
```

9. Build the project from Visual Studio 2017 or the command line and confirm that no analyzer diagnostic shows up in the Output window or on the command line, respectively, confirming that the VSIX analyzer did not execute as part of the build.
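The fix that RECS0085 suggests is to drop the redundant explicit array creation expression and rely on the array initializer alone; for example:

```csharp
int[] values = { 1, 2, 3 };
```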


Games and Exercises

Packt
09 Aug 2017
3 min read
In this article by Shishira Bhat and Ravi Wray, authors of the book Learn Java in 7 Days, we will study the following concepts:

- Making an object the return type for a method
- Making an object the parameter for a method

Let's start this article by revisiting reference variables and custom data types.

In the preceding program, p is a variable of the data type Pen. Yes! Pen is a class, but it is also a data type, a custom data type. The p variable stores the address of the Pen object, which is in heap memory; that is, p is a reference that refers to a Pen object. Now, let's get more comfortable by understanding and working through examples.

How to return an object from a method?

In this section, let's understand return types. In the following code, the methods return built-in data types (int and String), and the reasoning is explained after each method:

```java
int add() {
    int res = (20 + 50);
    return res;
}
```

The add method returns the res variable (70), which is of the int type. Hence, the return type must be int.

```java
String sendsms() {
    String msg = "hello";
    return msg;
}
```

The sendsms method returns a variable by the name of msg, which is of the String type. Hence, the return type is String. The data type of the returned value and the return type must be the same.

In the following code snippet, the return type of the givePen method is not a built-in data type. Rather, the return type is a class (Pen). The givePen() method returns a variable (reference variable) by the name of p, which is of the Pen type. Hence, the return type is Pen.

In the preceding program, tk is a variable of the Ticket type. The method returns tk; hence, the return type of the method is Ticket.

A method accepting an object (parameter)

After seeing how a method can return an object/reference, let's understand how a method can take an object/reference as input, that is, as a parameter. We already understood that if a method takes parameters, then we need to pass arguments.

Example

In the preceding program, the method takes two parameters, i and k. So, while calling/invoking the method, we need to pass two arguments, which are 20.5 and 15. The parameter type and the argument type must be the same. Remember that when a class is the data type, then an object of that class is the data. Consider the following example with respect to a non-primitive/class data type and an object as its data.

In the preceding code, the Kid class has the eat method, which takes ch as a parameter of the Chocolate type; that is, the data type of ch is Chocolate, which is a class. When a class is the data type, an object of that class is the actual data, or argument. Hence, new Chocolate() is passed as an argument to the eat method.

Let's see one more example: the drink method takes wtr as a parameter of the type Water, which is a class/non-primitive type; hence, the argument must be an object of the Water class.
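Several of the code listings in this article appear only as images in the original; the following sketch reconstructs the patterns the prose describes (the class and method names follow the text, but the method bodies are assumptions for illustration):

```java
class Pen { }

class Chocolate { }

class Shop {
    // the return type is Pen because the method returns a Pen reference
    Pen givePen() {
        Pen p = new Pen();
        return p;
    }
}

class Kid {
    // the parameter type is Chocolate, so the argument must be a Chocolate object
    void eat(Chocolate ch) {
        System.out.println("Eating " + ch);
    }
}

public class Demo {
    public static void main(String[] args) {
        Pen p = new Shop().givePen();   // p stores the address of a Pen object on the heap
        new Kid().eat(new Chocolate()); // a class is the data type; an object is the data
    }
}
```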
Summary

In this article, we learned what to return when a class is the return type for a method, and what to pass as an argument when a class is the parameter type for a method.


Hello, C#! Welcome, .NET Core!

Packt
11 Jan 2017
10 min read
In this article by Mark J. Price, author of the book C# 7 and .NET Core: Modern Cross-Platform Development, Second Edition, we will discuss setting up your development environment and understanding the similarities and differences between .NET Core, .NET Framework, the .NET Standard Library, and .NET Native. Most people learn complex topics by imitation and repetition rather than by reading a detailed explanation of theory, so I will not explain every keyword and step. This article covers the following topics:

- Setting up your development environment
- Understanding .NET

Setting up your development environment

Before you start programming, you will need to set up your Integrated Development Environment (IDE), which includes a code editor for C#. The best IDE to choose is Microsoft Visual Studio 2017, but it only runs on the Windows operating system. To develop on alternative operating systems such as macOS and Linux, a good choice of IDE is Microsoft Visual Studio Code.

Using alternative C# IDEs

There are alternative IDEs for C#, for example, MonoDevelop and JetBrains Project Rider. They each have versions available for Windows, Linux, and macOS, allowing you to write code on one operating system and deploy to the same or a different one.

- For the MonoDevelop IDE, visit http://www.monodevelop.com/
- For JetBrains Project Rider, visit https://www.jetbrains.com/rider/

Cloud9 is a web browser-based IDE, so it is even more cross-platform than the others. Here is the link: https://c9.io/web/sign-up/free

Linux and Docker are popular server host platforms because they are relatively lightweight and scale more cost-effectively when compared to operating system platforms that are aimed more at end users, such as Windows and macOS.

Using Visual Studio 2017 on Windows 10

You can use Windows 7 or later to run the code in this article, but you will have a better experience if you use Windows 10. If you don't have Windows 10, you can create a virtual machine (VM) to use for development. You can choose any cloud provider, but Microsoft Azure has preconfigured VMs that include properly licensed Windows 10 and Visual Studio 2017. You only pay for the minutes your VM is running, so it is a way for users of Linux, macOS, and older Windows versions to have all the benefits of using Visual Studio 2017.

Since October 2014, Microsoft has made a professional-quality edition of Visual Studio available to everyone for free. It is called the Community edition. Microsoft has combined all its free developer offerings in a program called Visual Studio Dev Essentials. This includes the Community edition, the free tier of Visual Studio Team Services, Azure credits for test and development, and free training from Pluralsight, Wintellect, and Xamarin.

Installing Microsoft Visual Studio 2017

Download and install Microsoft Visual Studio Community 2017 or later from https://www.visualstudio.com/vs/visual-studio-2017/.

Choosing workloads

On the Workloads tab, choose the following:

- Universal Windows Platform development
- .NET desktop development
- Web development
- Azure development
- .NET Core and Docker development

On the Individual components tab, choose the following:

- Git for Windows
- GitHub extension for Visual Studio

Click Install. You can choose to install everything if you want support for languages such as C++, Python, and R.

Completing the installation

Wait for the software to download and install. When the installation is complete, click Launch.
While you wait for Visual Studio 2017 to install, you can jump ahead to the Understanding .NET section of this article.

Signing in to Visual Studio

The first time you run Visual Studio 2017, you will be prompted to sign in. If you have a Microsoft account, for example, a Hotmail, MSN, Live, or Outlook e-mail address, you can use that account. If you don't, then register for a new one at the following link: https://signup.live.com/

You will see the Visual Studio user interface with the Start Page open in the central area. Like most Windows desktop applications, Visual Studio has a menu bar, a toolbar for common commands, and a status bar at the bottom. On the right is the Solution Explorer window, which will list your open projects.

To have quick access to Visual Studio in the future, right-click on its entry in the Windows taskbar and select Pin this program to taskbar.

Using older versions of Visual Studio

The free Community edition has been available since Visual Studio 2013 with Update 4. If you want to use a free version of Visual Studio older than 2013, then you can use one of the more limited Express editions. A lot of the code in this book will work with older versions if you bear in mind when the following features were introduced:

| Year | C# | Features |
| --- | --- | --- |
| 2005 | 2 | Generics with <T> |
| 2008 | 3 | Lambda expressions with => and manipulating sequences with LINQ (from, in, where, orderby, ascending, descending, select, group, into) |
| 2010 | 4 | Dynamic typing with dynamic and multithreading with Task |
| 2012 | 5 | Simplifying multithreading with async and await |
| 2015 | 6 | String interpolation with $"" and importing static types with using static |
| 2017 | 7 | Tuples (with deconstruction), patterns, out variables, literal improvements |
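As a quick taste of the newer language features in that table, the following sketch (not from the book) combines C# 6 string interpolation with a C# 7 tuple return type and deconstruction:

```csharp
using System;

class Program
{
    // a C# 7 tuple return type with named elements
    static (string Name, int Year) GetRelease() => ("C# 7", 2017);

    static void Main()
    {
        var (name, year) = GetRelease();                 // C# 7 deconstruction
        Console.WriteLine($"{name} shipped in {year}."); // C# 6 string interpolation
    }
}
```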
This new product is branded as .NET Core and includes a cross-platform implementation of the CLR known as CoreCLR and a streamlined library of classes known as CoreFX.

Streamlining .NET

.NET Core is much smaller than the current version of the .NET Framework because a lot has been removed. For example, Windows Forms and Windows Presentation Foundation (WPF) can be used to build graphical user interface (GUI) applications, but they are tightly bound to Windows, so they have been removed from .NET Core. The latest technology for building Windows apps is the Universal Windows Platform (UWP).

ASP.NET Web Forms and Windows Communication Foundation (WCF) are old web application and service technologies that fewer developers choose to use today, so they have also been removed from .NET Core. Instead, developers prefer to use ASP.NET MVC and ASP.NET Web API. These two technologies have been refactored and combined into a new product that runs on .NET Core, named ASP.NET Core.

Entity Framework (EF) 6.x is an object-relational mapping technology for working with data stored in relational databases such as Oracle and Microsoft SQL Server. It has gained baggage over the years, so the cross-platform version has been slimmed down and named Entity Framework Core.

Some data types in .NET that are included with both the .NET Framework and .NET Core have been simplified by removing some members. For example, in the .NET Framework, the File class has both a Close and a Dispose method, and either can be used to release the file resources. In .NET Core, there is only the Dispose method. This reduces the memory footprint of the assembly and simplifies the API you have to learn.

The .NET Framework 4.6 is about 200 MB; .NET Core is about 11 MB. Eventually, .NET Core may grow to a similar size. Microsoft's goal is not to make .NET Core smaller than the .NET Framework. The goal is to componentize .NET Core to support modern technologies and to have fewer dependencies, so that deployment requires only those components that your application really needs.

Understanding the .NET Standard

The situation with .NET today is that there are three forked .NET platforms, all controlled by Microsoft: .NET Framework, Xamarin, and .NET Core. Each has different strengths and weaknesses. This has led to the problem that a developer must learn three platforms, each with annoying quirks and limitations. So, Microsoft is working on defining .NET Standard 2.0: a set of APIs that all .NET platforms must implement. Today, in 2016, there is .NET Standard 1.6, but only .NET Core 1.0 supports it; .NET Framework and Xamarin do not! .NET Standard 2.0 will be implemented by .NET Framework, .NET Core, and Xamarin. For .NET Core, this will add many of the missing APIs that developers need to port old code written for .NET Framework to the new cross-platform .NET Core. .NET Standard 2.0 will probably be released towards the end of 2017, so I hope to write a third edition of this book for when it is finally released.

The future of .NET

.NET Standard 2.0 is the near future of .NET, and it will make it much easier for developers to share code between any flavor of .NET, but we are not there yet. For cross-platform development, .NET Core is a great start, but it will take another version or two to become as mature as the current version of the .NET Framework.
This book will focus on .NET Core, but will use the .NET Framework when important or useful features have not (yet) been implemented in .NET Core.

Understanding the .NET Native compiler

Another .NET initiative is the .NET Native compiler. This compiles C# code to native CPU instructions ahead of time (AoT) rather than using the CLR to compile IL just in time (JIT) to native code later. The .NET Native compiler improves execution speed and reduces the memory footprint of applications. It supports the following:

- UWP apps for Windows 10, Windows 10 Mobile, Xbox One, HoloLens, and Internet of Things (IoT) devices such as Raspberry Pi
- Server-side web development with ASP.NET Core
- Console applications for use on the command line

Comparing .NET technologies

The following table summarizes and compares the .NET technologies:

Platform | Feature set | C# compiles to | Host OSes
.NET Framework | Mature and extensive | IL executed by a runtime | Windows only
Xamarin | Mature and limited to mobile features | IL executed by a runtime | iOS, Android, Windows Mobile
.NET Core | Brand-new and somewhat limited | IL executed by a runtime | Windows, Linux, macOS, Docker
.NET Native | Brand-new and very limited | Native code |

Summary

In this article, we have learned how to set up the development environment and discussed the .NET technologies in detail.

Resources for Article:

Further resources on this subject:
- Introduction to C# and .NET [article]
- Reactive Programming with C# [article]
- Functional Programming in C# [article]
Unleashing Creativity with MIT App Inventor 2

Packt
14 Jan 2016
9 min read
This article by Felicia Kamriani and Dr. Krishnendu Roy, authors of the book App Inventor 2 Essentials, covers what MIT App Inventor 2 is and why you should learn to use it. There are apps for just about everything: entertainment, socializing, dining, traveling, philanthropy, shopping, education, navigation, and so on. And just about everyone with a smartphone or tablet is using them to make their lives easier or better. But you have decided to move from just using mobile apps to creating mobile apps. Congratulations! Thanks to MIT App Inventor 2, mobile app development is no longer exclusively the realm of experienced software programmers. It empowers anyone with an idea to create technology.

This book offers people of all ages a step-by-step guide to creating mobile apps with App Inventor. While this visual programming language is an ideal tool for people who have little or no coding experience, don't be fooled into thinking that the capabilities of MIT App Inventor 2 are basic! The simple drag-and-drop blocks format is actually a powerful software language capable of creating complex and sophisticated mobile apps.

The purpose of this article is to provide an overview of MIT App Inventor 2 and of your new role as a mobile app developer. You are in for more skill building than you ever imagined! Of course, you will learn to code mobile apps, but there are countless other valuable skills woven into the mobile app building process. Most significantly, you will learn to think differently, master the design-thinking process, become a problem solver, and be resourceful. This article also offers tips on brainstorming app ideas and design principles. Lastly, it reveals the potential of MIT App Inventor 2 and showcases an array of mobile apps so that you, a budding app designer, can begin thinking about the full spectrum of possibilities.

(For more resources related to this topic, see here.)

What is MIT App Inventor 2?

MIT App Inventor 2 is a free, drag-and-drop, blocks-based visual programming language that enables people, regardless of coding experience, to create mobile apps for Android devices. In 2008, iPhones and Android phones had just hit the market, and MIT professor Hal Abelson had the idea to create an easy-to-use programming language for building mobile apps that would harness the power of the emerging smartphone technology. Equipped with fast processors, large memory storage, and sensors, smartphones were enabling people to monitor and interact with their environment like never before. Abelson's goal was to democratize the mobile app development process by making it easy for anyone to create mobile apps that were meaningful and important to them.

While on sabbatical at Google in Mountain View, CA, Abelson worked with engineer Mark Friedman to create App Inventor (yes, it was originally called Google App Inventor!). In 2011, Abelson brought App Inventor to MIT and, together with the Media Lab and CSAIL (the Computer Science and Artificial Intelligence Laboratory), created the Center for Mobile Learning. In December 2013, Abelson and his team of developers launched MIT App Inventor 2 (from here on referred to as MIT App Inventor), an even easier-to-use web-based version with an Integrated Development Environment (IDE). This means that you can see your app come to life on your smartphone as you are building it. All you need is a computer (Mac or PC), an internet connection (or a USB connection), a Google Gmail account, and an Android device (phone or tablet).
But if you don't have an Android device, don't worry! You can still create apps with the on-screen Emulator. The MIT App Inventor (http://appinventor.mit.edu/) browser interface includes a Designer Screen, a graphical user interface (GUI) where you create the look and feel of the app (choosing which components you want it to include), and the Blocks Editor, where you add behavior to the app by coding with colorful blocks. Users build apps by dragging components and blocks from menu bars onto a workspace, and a connected Android device (or the Emulator) displays progress in real time. All the apps are saved on the MIT server and, once completed, they can be shared in the MIT App Inventor Gallery, submitted to app contests (such as MIT App of the Month), or uploaded to the Google Play store (or other app marketplaces) for sharing or selling. To date, MIT App Inventor has empowered millions of people to become creators of technology by learning to be mobile app developers. And now, you will become one of them!

Understanding your role as a mobile app developer

Since you are reading this book, it is safe to assume that not only do you regularly use mobile apps, but on occasion, you have also had the thought, "I wish there were an app for that!" Now, with the help of MIT App Inventor and this guidebook to mobile app development, you will soon be able to say, "I can create an app for that!" In embracing your new role as a mobile app developer, you will not just be learning how to code; you will also learn an array of other valuable skills.

You will learn to think differently. Every time you open an app, you will start looking at it from the developer's perspective rather than just as a user. You will start noticing which functions are logical and smooth and which are choppy and unintuitive.

You will learn to get inspiration from your environment. What type of app could make the attendance process at my club/class/meeting more streamlined or efficient? What app idea could help solve the problem of inaccurate inventory at the gym?

You will learn to become a data gatherer without even realizing it. When people make comments about apps, your ears will perk up and you will take note. You will start asking questions like, "Why do you prefer Waze to Google Maps?"

You will learn to think logically so that you can tell the computer, in a step-by-step manner, how to perform an operation.

You will learn to become a problem solver. Any coder will confirm that programming is an iterative process: a continual cycle of coding, troubleshooting, and debugging. Trial and error will become second nature, as will taking a step back to figure out why something that just worked a minute ago now seems broken.

And you will learn to assume the role of a designer. It is no longer accurate to depict programmers as holed up by themselves at a computer, creating white text-based code on black screens. Coders of mobile apps are also designers who think about and create attractive and intuitive user interfaces (UIs). Much of the design work happens away from the computer: it includes conversations with potential users, involves pens, paper, and post-it notes, and uses storyboards or sketches. Only once you have your app designed on paper do you sit down at the computer to begin coding. And then, you will not find the traditional black-and-white interface, as the MIT App Inventor platform is interactive and full of colorful blocks that snap together.

Brainstorming app ideas

Chances are you already have an idea for a mobile app.
But if not, how can you think of one? And sometimes you have so many ideas that it's hard to narrow them down to just one. The best way to start brainstorming app ideas is to start with what you know. What's an app you wish existed? What's an app you and your friends, co-workers, or family members would use, need, or like? What's a problem in your community, network, or circle of friends that could be solved with a digital solution?

Maybe you loan out books to friends but don't have a system to keep track of who borrowed what. Maybe you want to do a clothing swap with people who are your size, so you want to post pictures of the items that you have available for trade and view listed items in your size. Maybe you have a favorite app that you use all the time but wish it had just one other feature. Maybe, when you meet your friends in a public place, it's hard to know whether they're nearby without a lot of texting back and forth, so you want to create an app that shows everyone's location on one screen. The possibilities are endless!

The key to successful brainstorming is to write down all of your ideas, no matter how wild they are, and talk to people about them to get feedback. Input from others is an essential part of the research needed to ensure that your app idea becomes a successful app that people will want, use, and/or buy. On a recent business trip, we had an idea for a travel app because we always seemed to forget at least one essential item. Over breakfast at the hotel, we discussed the app idea with a couple of colleagues and received amazing insights that we hadn't thought of, like a reminder notification to fill any prescriptions well before the trip and a weather component so we could be sure to pack appropriate clothes for each destination. The more people you talk to, the more market research you will conduct and the more defined the overall app concept will be.

Summary

This article highlights the many other learning outcomes that come from engaging in the mobile app development process. Taking an app concept and building it out into an actual mobile app is both a concrete and a creative process. Attention to detail and iteration are vital for both code and design to work effectively and synergistically. Whether you're creating a game to play with your friend, an app to promote philanthropic involvement on campus, or an app to kickstart a recycling program in your neighborhood, the design-thinking process is as much a part of app development as coding. Skills such as brainstorming, research, interviewing, synthesizing, ideating, storyboarding, designing, troubleshooting, problem solving, and testing are not only integral to app building; they are also transferable to other disciplines, helping to unlock creativity and flow in any endeavor.

Resources for Article:

Further resources on this subject:
- Google Apps: Surfing the Web [article]
- Introduction to IT Inventory and Resource Management [article]
- How to Expand your Knowledge [article]

Expert Python Programming: Interfaces

Packt
17 May 2016
19 min read
This article by Michał Jaworski and Tarek Ziadé, the authors of the book Expert Python Programming - Second Edition, will mainly focus on interfaces.

(For more resources related to this topic, see here.)

An interface is a definition of an API. It describes a list of methods and attributes that a class should implement with the desired behavior. This description does not implement any code but just defines an explicit contract for any class that wishes to implement the interface. Any class can then implement one or several interfaces in whichever way it wants.

While Python prefers duck typing over explicit interface definitions, it may sometimes be better to use them. For instance, an explicit interface definition makes it easier for a framework to define functionalities over interfaces. The benefit is that classes are loosely coupled, which is considered good practice. For example, to perform a given process, a class A does not depend on a class B, but rather on an interface I. Class B implements I, but it could be any other class.

Support for such a technique is built into many statically typed languages such as Java or Go. Interfaces allow functions or methods to limit the range of acceptable parameters to objects that implement a given interface, no matter what kind of class they come from. This allows for more flexibility than restricting arguments to given types or their subclasses. It is like an explicit version of duck-typing behavior: Java uses interfaces to verify type safety at compile time rather than using duck typing to tie things together at run time.

Python has a completely different typing philosophy from Java, so it does not have native support for interfaces. Anyway, if you would like to have more explicit control over application interfaces, there are generally two solutions to choose from:

- Use a third-party framework that adds a notion of interfaces
- Use some of the advanced language features to build your own methodology for handling interfaces

Using zope.interface

There are a few frameworks that allow you to build explicit interfaces in Python. The most notable one is a part of the Zope project: the zope.interface package. Although Zope is not as popular nowadays as it used to be, the zope.interface package is still one of the main components of the Twisted framework.

The core class of the zope.interface package is the Interface class. It allows you to explicitly define a new interface by subclassing. Let's assume that we want to define the obligatory interface for every implementation of a rectangle:

from zope.interface import Interface, Attribute

class IRectangle(Interface):
    width = Attribute("The width of rectangle")
    height = Attribute("The height of rectangle")

    def area():
        """ Return area of rectangle """

    def perimeter():
        """ Return perimeter of rectangle """

Some important things to remember when defining interfaces with zope.interface are as follows:

- The common naming convention for interfaces is to use I as the name prefix.
- The methods of the interface must not take the self parameter.
- As the interface does not provide a concrete implementation, it should consist only of empty methods. You can use the pass statement, raise NotImplementedError, or provide a docstring (preferred).
- An interface can also specify the required attributes using the Attribute class.

When you have such a contract defined, you can then define new concrete classes that provide the implementation for our IRectangle interface.
In order to do that, you need to use the implementer() class decorator and implement all of the defined methods and attributes:

from zope.interface import implementer

@implementer(IRectangle)
class Square:
    """ Concrete implementation of square with rectangle interface """

    def __init__(self, size):
        self.size = size

    @property
    def width(self):
        return self.size

    @property
    def height(self):
        return self.size

    def area(self):
        return self.size ** 2

    def perimeter(self):
        return 4 * self.size


@implementer(IRectangle)
class Rectangle:
    """ Concrete implementation of rectangle """

    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

    def perimeter(self):
        return self.width * 2 + self.height * 2

It is common to say that the interface defines a contract that a concrete implementation needs to fulfill. The main benefit of this design pattern is being able to verify consistency between the contract and the implementation before the object is used. With the ordinary duck-typing approach, you only find inconsistencies when there is a missing attribute or method at runtime. With zope.interface, you can introspect the actual implementation using two functions from the zope.interface.verify module to find inconsistencies early on:

- verifyClass(interface, class_object): This verifies the class object for the existence of methods and the correctness of their signatures, without looking for attributes
- verifyObject(interface, instance): This verifies the methods, their signatures, and also the attributes of the actual object instance

Since we have defined our interface and two concrete implementations, let's verify their contracts in an interactive session:

>>> from zope.interface.verify import verifyClass, verifyObject
>>> verifyObject(IRectangle, Square(2))
True
>>> verifyClass(IRectangle, Square)
True
>>> verifyObject(IRectangle, Rectangle(2, 2))
True
>>> verifyClass(IRectangle, Rectangle)
True

Nothing impressive. The Rectangle and Square classes carefully follow the defined contract, so there is nothing more to see than a successful verification. But what happens when we make a mistake? Let's see an example of two classes that fail to provide a full IRectangle interface implementation:

import math

@implementer(IRectangle)
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y


@implementer(IRectangle)
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

    def perimeter(self):
        return 2 * math.pi * self.radius

The Point class does not provide any method or attribute of the IRectangle interface, so its verification will show inconsistencies already at the class level:

>>> verifyClass(IRectangle, Point)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "zope/interface/verify.py", line 102, in verifyClass
    return _verify(iface, candidate, tentative, vtype='c')
  File "zope/interface/verify.py", line 62, in _verify
    raise BrokenImplementation(iface, name)
zope.interface.exceptions.BrokenImplementation: An object has failed to implement interface <InterfaceClass __main__.IRectangle>

        The perimeter attribute was not provided.

The Circle class is a bit more problematic. It has all the interface methods defined but breaks the contract at the instance attribute level.
This is the reason why, in most cases, you need to use the verifyObject() function to completely verify the interface implementation:

>>> verifyObject(IRectangle, Circle(2))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "zope/interface/verify.py", line 105, in verifyObject
    return _verify(iface, candidate, tentative, vtype='o')
  File "zope/interface/verify.py", line 62, in _verify
    raise BrokenImplementation(iface, name)
zope.interface.exceptions.BrokenImplementation: An object has failed to implement interface <InterfaceClass __main__.IRectangle>

        The width attribute was not provided.

Using zope.interface is an interesting way to decouple your application. It allows you to enforce proper object interfaces without the overblown complexity of multiple inheritance, and it also allows you to catch inconsistencies early. However, the biggest downside of this approach is the requirement to explicitly declare that a given class follows some interface in order for it to be verified. This is especially troublesome if you need to verify instances coming from external classes of built-in libraries. zope.interface provides some solutions for that problem, and you can of course handle such issues on your own by using the adapter pattern, or even monkey patching. Anyway, the simplicity of such solutions is at least arguable.

Using function annotations and abstract base classes

Design patterns are meant to make problem solving easier, not to provide you with more layers of complexity. zope.interface is a great concept and may fit some projects very well, but it is not a silver bullet. By using it, you may soon find yourself spending more time fixing issues with incompatible interfaces for third-party classes and providing never-ending layers of adapters instead of writing the actual implementation. If you feel that way, then this is a sign that something went wrong.

Fortunately, Python supports building a lightweight alternative to interfaces. It's not a full-fledged solution like zope.interface or its alternatives, but it generally allows for more flexible applications. You may need to write a bit more code, but in the end you will have something that is more extensible, handles external types better, and may be more future-proof. Note that Python at its core does not have an explicit notion of interfaces, and probably never will, but it has some features that allow you to build something that resembles the functionality of interfaces:

- Abstract base classes (ABCs)
- Function annotations
- Type annotations

The core of our solution is abstract base classes, so we will feature them first. As you probably know, direct type comparison is considered harmful and not Pythonic. You should always avoid comparisons as follows:

assert type(instance) == list

Comparing types in functions or methods this way completely breaks the ability to pass a class subtype as an argument to the function. A slightly better approach is to use the isinstance() function, which will take inheritance into account:

assert isinstance(instance, list)

The additional advantage of isinstance() is that you can use a larger range of types to check type compatibility. For instance, if your function expects to receive some sort of sequence as the argument, you can compare against a list of basic types:

assert isinstance(instance, (list, tuple, range))

Such a way of type compatibility checking is OK in some situations, but it is still not perfect.
It will work with any subclass of list, tuple, or range, but will fail if the user passes something that behaves exactly the same as one of these sequence types but does not inherit from any of them. For instance, let's relax our requirements and say that you want to accept any kind of iterable as an argument. What would you do? The list of basic types that are iterable is actually pretty long. You need to cover list, tuple, range, str, bytes, dict, set, generators, and a lot more. The list of applicable built-in types is long, and even if you cover all of them, it will still not allow you to check against a custom class that defines the __iter__() method but inherits directly from object.

And this is the kind of situation where abstract base classes (ABCs) are the proper solution. An ABC is a class that does not need to provide a concrete implementation, but instead defines a blueprint of a class that may be used to check against type compatibility. This concept is very similar to the concept of abstract classes and virtual methods known in the C++ language. Abstract base classes are used for two purposes:

- Checking for implementation completeness
- Checking for implicit interface compatibility

So, let's assume we want to define an interface which ensures that a class has a push() method. We need to create a new abstract base class using the special ABCMeta metaclass and an abstractmethod() decorator from the standard abc module:

from abc import ABCMeta, abstractmethod

class Pushable(metaclass=ABCMeta):

    @abstractmethod
    def push(self, x):
        """ Push argument no matter what it means """

The abc module also provides an ABC base class that can be used instead of the metaclass syntax:

from abc import ABC, abstractmethod

class Pushable(ABC):

    @abstractmethod
    def push(self, x):
        """ Push argument no matter what it means """

Once it is done, we can use the Pushable class as a base class for a concrete implementation, and it will guard us from instantiating objects that have an incomplete implementation. Let's define DummyPushable, which implements all the interface methods, and IncompletePushable, which breaks the expected contract:

class DummyPushable(Pushable):
    def push(self, x):
        return


class IncompletePushable(Pushable):
    pass

If you want to obtain a DummyPushable instance, there is no problem, because it implements the only required push() method:

>>> DummyPushable()
<__main__.DummyPushable object at 0x10142bef0>

But if you try to instantiate IncompletePushable, you will get a TypeError because of the missing implementation of the push() method:

>>> IncompletePushable()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class IncompletePushable with abstract methods push

The preceding approach is a great way to ensure the implementation completeness of base classes, but it is as explicit as the zope.interface alternative. DummyPushable instances are of course also instances of Pushable, because DummyPushable is a subclass of Pushable. But how about other classes with the same methods that are not descendants of Pushable? Let's create one and see:

>>> class SomethingWithPush:
...     def push(self, x):
...         pass
...
>>> isinstance(SomethingWithPush(), Pushable)
False

Something is still missing. The SomethingWithPush class definitely has a compatible interface, but it is not considered an instance of Pushable yet. So, what is missing?
The answer is the __subclasshook__(subclass) method, which allows you to inject your own logic into the procedure that determines whether an object is an instance of a given class. Unfortunately, you need to provide it yourself, as the abc creators did not want to constrain developers from overriding the whole isinstance() mechanism. We get full power over it, but we are forced to write some boilerplate code.

Although you can do whatever you want to, usually the only reasonable thing to do in the __subclasshook__() method is to follow the common pattern. The standard procedure is to check whether the set of defined methods is available somewhere in the MRO of the given class:

from abc import ABCMeta, abstractmethod

class Pushable(metaclass=ABCMeta):

    @abstractmethod
    def push(self, x):
        """ Push argument no matter what it means """

    @classmethod
    def __subclasshook__(cls, C):
        if cls is Pushable:
            if any("push" in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

With the __subclasshook__() method defined that way, you can now confirm that instances that implement the interface implicitly are also considered instances of the interface:

>>> class SomethingWithPush:
...     def push(self, x):
...         pass
...
>>> isinstance(SomethingWithPush(), Pushable)
True

Unfortunately, this approach to the verification of type compatibility and implementation completeness does not take into account the signatures of class methods. So, if the number of expected arguments is different in the implementation, it will still be considered compatible. In most cases, this is not an issue, but if you need such fine-grained control over interfaces, the zope.interface package allows for that. As already said, the __subclasshook__() method does not constrain you from adding more complexity to the isinstance() function's logic to achieve a similar level of control.

The two other features that complement abstract base classes are function annotations and type hints. Function annotations are a syntax element that allows you to annotate functions and their arguments with arbitrary expressions. This is only a feature stub that does not provide any syntactic meaning, and there is no utility in the standard library that uses this feature to enforce any behavior. Anyway, you can use it as a convenient and lightweight way to inform the developer of the expected argument interface. For instance, consider this IRectangle interface rewritten from zope.interface to an abstract base class:

from abc import ABCMeta, abstractmethod, abstractproperty

class IRectangle(metaclass=ABCMeta):

    @abstractproperty
    def width(self):
        return

    @abstractproperty
    def height(self):
        return

    @abstractmethod
    def area(self):
        """ Return rectangle area """

    @abstractmethod
    def perimeter(self):
        """ Return rectangle perimeter """

    @classmethod
    def __subclasshook__(cls, C):
        if cls is IRectangle:
            if all([
                any("area" in B.__dict__ for B in C.__mro__),
                any("perimeter" in B.__dict__ for B in C.__mro__),
                any("width" in B.__dict__ for B in C.__mro__),
                any("height" in B.__dict__ for B in C.__mro__),
            ]):
                return True
        return NotImplemented

If you have a function that works only on rectangles, let's say draw_rectangle(), you could annotate the interface of the expected argument as follows:

def draw_rectangle(rectangle: IRectangle):
    ...

This adds nothing more than a hint for the developer about the expected argument interface. And even this is done through an informal contract because, as we know, bare annotations contain no syntactic meaning.
However, annotations are accessible at runtime, so we can do something more. Here is an example implementation of a generic decorator that is able to verify an interface from a function annotation if it is provided using abstract base classes:

import inspect
from abc import ABCMeta
from functools import wraps

def ensure_interface(function):
    signature = inspect.signature(function)
    parameters = signature.parameters

    @wraps(function)
    def wrapped(*args, **kwargs):
        bound = signature.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            annotation = parameters[name].annotation

            # only verify annotations that are abstract base classes
            if not isinstance(annotation, ABCMeta):
                continue

            if not isinstance(value, annotation):
                raise TypeError(
                    "{} does not implement {} interface"
                    "".format(value, annotation)
                )

        return function(*args, **kwargs)

    return wrapped

Once it is done, we can create a concrete class that implicitly implements the IRectangle interface (without inheriting from IRectangle) and update the implementation of the draw_rectangle() function to see how the whole solution works:

class ImplicitRectangle:
    def __init__(self, width, height):
        self._width = width
        self._height = height

    @property
    def width(self):
        return self._width

    @property
    def height(self):
        return self._height

    def area(self):
        return self.width * self.height

    def perimeter(self):
        return self.width * 2 + self.height * 2


@ensure_interface
def draw_rectangle(rectangle: IRectangle):
    print(
        "{} x {} rectangle drawing"
        "".format(rectangle.width, rectangle.height)
    )

If we feed the draw_rectangle() function with an incompatible object, it will now raise TypeError with a meaningful explanation:

>>> draw_rectangle('foo')
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "<input>", line 101, in wrapped
TypeError: foo does not implement <class 'IRectangle'> interface

But if we use ImplicitRectangle or anything else that resembles the IRectangle interface, the function executes as it should:

>>> draw_rectangle(ImplicitRectangle(2, 10))
2 x 10 rectangle drawing

Our example implementation of ensure_interface() is based on the typechecked() decorator from the typeannotations project, which tries to provide run-time checking capabilities (refer to https://github.com/ceronman/typeannotations). Its source code may give you some interesting ideas about how to process type annotations to ensure run-time interface checking.

The last feature that can be used to complement this interface pattern landscape is type hints. Type hints are described in detail by PEP 484 and were added to the language quite recently. They are exposed in the new typing module and are available from Python 3.5. Type hints are built on top of function annotations and reuse this slightly forgotten syntax feature of Python 3. They are intended to guide type hinting for the various yet-to-come Python type checkers. The typing module and the PEP 484 document aim to provide a standard hierarchy of types and classes that should be used for describing type annotations.

Still, type hints do not seem to be something revolutionary, because this feature does not come with any type checker built into the standard library. If you want to use type checking or enforce strict interface compatibility in your code, you need to create your own tool, because there is none worth recommending yet. This is why we won't dig into the details of PEP 484. Anyway, type hints and the documents describing them are worth mentioning because, if some extraordinary solution emerges in the field of type checking in Python, it is highly probable that it will be based on PEP 484.
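To make that last point concrete, here is a minimal sketch (not from the book; the repeat() function is invented purely for illustration) showing that PEP 484-style hints are plain metadata that the interpreter itself never checks:

from typing import get_type_hints

def repeat(text: str, times: int) -> str:
    """ Repeat the given text the given number of times """
    return text * times

# The hints are stored as ordinary metadata on the function object;
# a representative result of the following call would be:
# {'text': <class 'str'>, 'times': <class 'int'>, 'return': <class 'str'>}
print(get_type_hints(repeat))

# Nothing in the interpreter enforces the hints. This call swaps the
# argument types, yet it still succeeds because int * str is valid Python:
print(repeat(3, "ab"))  # prints 'ababab'

Any actual checking has to be layered on top, either by an external static type checker or by a runtime tool such as the ensure_interface() decorator shown earlier.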
Using collections.abc

Abstract base classes are like small building blocks for creating a higher level of abstraction. They allow you to implement really usable interfaces, but they are very generic and designed to handle a lot more than this single design pattern. You can unleash your creativity and do magical things, but building something generic and really usable may require a lot of work, and that work may never pay off. This is why custom abstract base classes are not used so often.

Despite that, the collections.abc module provides a lot of predefined ABCs that allow you to verify the interface compatibility of many basic Python types. With the base classes provided in this module, you can check, for example, whether a given object is callable, is a mapping, or supports iteration. Using them with the isinstance() function is way better than comparing against the base Python types. You should definitely know how to use these base classes even if you don't want to define your own custom interfaces with ABCMeta.

The most common abstract base classes from collections.abc that you will use from time to time are:

- Container: This interface means that the object supports the in operator and implements the __contains__() method
- Iterable: This interface means that the object supports iteration and implements the __iter__() method
- Callable: This interface means that the object can be called like a function and implements the __call__() method
- Hashable: This interface means that the object is hashable (it can be included in sets and used as a key in dictionaries) and implements the __hash__() method
- Sized: This interface means that the object has a size (it can be the subject of the len() function) and implements the __len__() method

A full list of the available abstract base classes from the collections.abc module is available in the official Python documentation (refer to https://docs.python.org/3/library/collections.abc.html).

Summary

Design patterns are reusable, somewhat language-specific solutions to common problems in software design. They are a part of the culture of all developers, no matter what language they use. In this article, we covered a small part of design patterns: what interfaces are and how they can be used in Python.

Resources for Article:

Further resources on this subject:
- Creating User Interfaces [article]
- Graphical User Interfaces for OpenSIPS 1.6 [article]