
How-To Tutorials - Languages


How is Python code organized

Packt
19 Feb 2016
8 min read
Python is an easy-to-learn yet powerful programming language. It has efficient high-level data structures and an effective approach to object-oriented programming. Let's talk a little bit about how Python code is organized. In this section, we'll start going down the rabbit hole a little bit more and introduce a few more technical names and concepts.

Starting with the basics: how is Python code organized? Of course, you write your code into files. When you save a file with the extension .py, that file is said to be a Python module. If you're on Windows or Mac, which typically hide file extensions from the user, please make sure you change the configuration so that you can see the complete names of your files. This is not strictly a requirement, but a hearty suggestion.

It would be impractical to save all the code required for software to work within one single file. That solution works for scripts, which are usually not longer than a few hundred lines (and often they are much shorter than that). A complete Python application can be made of hundreds of thousands of lines of code, so you will have to scatter it through different modules. Better, but not nearly good enough. It turns out that even like this it would still be impractical to work with the code. So Python gives you another structure, called a package, which allows you to group modules together. A package is nothing more than a folder that contains a special file, __init__.py. This file doesn't need to hold any code, but its presence is required to tell Python that the folder is not just some folder, it's actually a package (note that as of Python 3.3, __init__.py is not strictly required any more).

As always, an example will make all of this much clearer. I have created an example structure in my project, and when I type in my Linux console:

$ tree -v example

here's how the structure of a really simple application could look:

example/
├── core.py
├── run.py
└── util
    ├── __init__.py
    ├── db.py
    ├── math.py
    └── network.py

You can see that within the root of this example, we have two modules, core.py and run.py, and one package: util. Within core.py, there may be the core logic of our application. On the other hand, within the run.py module, we can probably find the logic to start the application. Within the util package, I expect to find various utility tools; in fact, we can guess that the modules there are named after the type of tools they hold: db.py would hold tools to work with databases, math.py would of course hold mathematical tools (maybe our application deals with financial data), and network.py would probably hold tools to send and receive data over networks. As explained before, the __init__.py file is there just to tell Python that util is a package and not just a mere folder.

Had this software been organized within modules only, it would have been much harder to infer its structure. I put a modules-only example under the ch1/files_only folder; see it for yourself:

$ tree -v files_only

This shows us a completely different picture:

files_only/
├── core.py
├── db.py
├── math.py
├── network.py
└── run.py

It is a little harder to guess what each module does, right? Now, consider that this is just a simple example, so you can imagine how much harder it would be to understand a real application if we couldn't organize the code into packages and modules.
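As a quick preview of how such a layout is consumed, here is a minimal, hypothetical sketch of what run.py could contain. It assumes the example/ structure above and pretends that util/db.py defines a connect() function; both the function and its argument are illustrative assumptions, not part of the book's example:

# filename: run.py (hypothetical sketch)
from util import network       # makes example/util/network.py available as a module
from util.db import connect    # pulls a single name out of example/util/db.py

def main():
    connection = connect('app.db')  # reuse the database tools from the package
    ...

if __name__ == '__main__':
    main()

The point is simply that the package structure maps directly onto the import statements, which is what makes a well-organized tree so easy to read.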
How do we use modules and packages?

When a developer is writing an application, it is very likely that they will need to apply the same piece of logic in different parts of it. For example, when writing a parser for the data that comes from a form that a user can fill in on a web page, the application will have to validate whether a certain field holds a number or not. Regardless of how the logic for this kind of validation is written, it's very likely that it will be needed in more than one place. For example, in a poll application where the user is asked many questions, it's likely that several of them will require a numeric answer. For example:

What is your age?
How many pets do you own?
How many children do you have?
How many times have you been married?

It would be very bad practice to copy and paste (or, more properly said, duplicate) the validation logic in every place where we expect a numeric answer. This would violate the DRY (Don't Repeat Yourself) principle, which states that you should never repeat the same piece of code more than once in your application. I feel the need to stress the importance of this principle: you should never repeat the same piece of code more than once in your application (got the irony?). There are several reasons why repeating the same piece of logic can be very bad, the most important ones being:

There could be a bug in the logic, and therefore you would have to correct it in every place that logic is applied.
You may want to amend the way you carry out the validation, and again you would have to change it in every place it is applied.
You may forget to fix or amend a piece of logic because you missed it when searching for all its occurrences. This would leave wrong or inconsistent behavior in your application.
Your code would be longer than needed, for no good reason.

Python is a wonderful language and provides you with all the tools you need to apply the coding best practices. For this particular example, we need to be able to reuse a piece of code. To be able to reuse a piece of code, we need a construct that will hold the code for us so that we can call that construct every time we need to repeat the logic inside it. That construct exists, and it's called a function. I'm not going too deep into the specifics here, so please just remember that a function is a block of organized, reusable code that is used to perform a task. Functions can assume many forms and names, according to what kind of environment they belong to, but for now this is not important. Functions are the building blocks of modularity in your application, and they are almost indispensable (unless you're writing a super simple script, you'll use functions all the time). A minimal sketch of such a reusable validation function follows at the end of this section.

Python comes with a very extensive standard library, as I already said a few pages ago. Now is maybe a good time to define what a library is: a library is a collection of functions and objects that provide functionality to enrich the abilities of a language. For example, within Python's math library we can find a plethora of functions, one of which is the factorial function, which of course calculates the factorial of a number. In mathematics, the factorial of a non-negative integer N, denoted as N!, is defined as the product of all positive integers less than or equal to N. For example, the factorial of 5 is calculated as:

5! = 5 * 4 * 3 * 2 * 1 = 120

The factorial of 0 is 0! = 1, to respect the convention for an empty product.
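Returning to the numeric-answer validation above, here is a minimal sketch of how that logic becomes a single reusable function (the function name and the exact validation rule are illustrative assumptions):

def is_valid_number(answer):
    """Return True if `answer` is a string holding a non-negative integer."""
    try:
        return int(answer) >= 0
    except ValueError:
        return False

# Every numeric question reuses the same logic instead of duplicating it:
for question in ('What is your age? ', 'How many pets do you own? '):
    answer = input(question)
    print('ok' if is_valid_number(answer) else 'please enter a number')

If the validation rule ever changes, there is now exactly one place to amend it.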
So, if you wanted to use the factorial function in your code, all you would have to do is import it and call it with the right input values. Don't worry too much if input values and the concept of calling are not very clear for now; please just concentrate on the import part. We use a library by importing what we need from it, and then we use it. In Python, to calculate the factorial of the number 5, we just need the following code:

>>> from math import factorial
>>> factorial(5)
120

Whatever we type in the shell, if it has a printable representation, will be printed to the console for us (in this case, the result of the function call: 120).

So, let's go back to our example, the one with core.py, run.py, util, and so on. In our example, the package util is our utility library: our custom utility belt that holds all those reusable tools (that is, functions) that we need in our application. Some of them will deal with databases (db.py), some with the network (network.py), and some will perform mathematical calculations (math.py) that are outside the scope of Python's standard math library and therefore had to be coded by ourselves.

Summary

In this article, we started to explore the world of programming and that of Python. We saw how Python code can be organized using modules and packages.

For more information on Python, refer to the following books recommended by Packt Publishing:

Learning Python (https://www.packtpub.com/application-development/learning-python)
Python 3 Object-oriented Programming - Second Edition (https://www.packtpub.com/application-development/python-3-object-oriented-programming-second-edition)
Python Essentials (https://www.packtpub.com/application-development/python-essentials)

Further resources on this subject:

Test all the things with Python [article]
Putting the Fun in Functional Python [article]
Scraping the Web with Python - Quick Start [article]

Using Transactions with Asynchronous Tasks in JavaEE [Tutorial]

Aaron Lazar
31 Jul 2018
5 min read
Threading is a common issue in most software projects, no matter which language or other technology is involved. When talking about enterprise applications, things become even more important and sometimes harder. Using asynchronous tasks can be a challenge: what if you need to add some spice and attach a transaction to one? Thankfully, the Java EE environment has some great features for dealing with this challenge, and this article will show you how. This article is an extract from the book Java EE 8 Cookbook, authored by Elder Moraes.

Usually, a transaction implies something like code blocking. Isn't it awkward to combine two opposing concepts? Well, it's not! They can work together nicely, as shown here.

Adding the Java EE 8 dependency

Let's first add our Java EE 8 dependency:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>

Next, let's create a User POJO:

public class User {

    private Long id;
    private String name;

    public User(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "User{" + "id=" + id + ", name=" + name + '}';
    }
}

And here is a slow bean that will return a User:

@Stateless
public class UserBean {

    public User getUser() {
        try {
            TimeUnit.SECONDS.sleep(5);
            long id = new Date().getTime();
            return new User(id, "User " + id);
        } catch (InterruptedException ex) {
            System.err.println(ex.getMessage());
            long id = new Date().getTime();
            return new User(id, "Error " + id);
        }
    }
}

Now we create a task to be executed that will return a User using some transaction machinery:

public class AsyncTask implements Callable<User> {

    private UserTransaction userTransaction;
    private UserBean userBean;

    @Override
    public User call() throws Exception {
        performLookups();
        try {
            userTransaction.begin();
            User user = userBean.getUser();
            userTransaction.commit();
            return user;
        } catch (IllegalStateException | SecurityException
                | HeuristicMixedException | HeuristicRollbackException
                | NotSupportedException | RollbackException
                | SystemException e) {
            userTransaction.rollback();
            return null;
        }
    }

    private void performLookups() throws NamingException {
        userBean = CDI.current().select(UserBean.class).get();
        userTransaction = CDI.current().select(UserTransaction.class).get();
    }
}

And finally, here is the service endpoint that will use the task to write the result to a response:

@Path("asyncService")
@RequestScoped
public class AsyncService {

    private AsyncTask asyncTask;

    @Resource(name = "LocalManagedExecutorService")
    private ManagedExecutorService executor;

    @PostConstruct
    public void init() {
        asyncTask = new AsyncTask();
    }

    @GET
    public void asyncService(@Suspended AsyncResponse response) {
        Future<User> result = executor.submit(asyncTask);

        while (!result.isDone()) {
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException ex) {
                System.err.println(ex.getMessage());
            }
        }

        try {
            response.resume(Response.ok(result.get()).build());
        } catch (InterruptedException | ExecutionException ex) {
            System.err.println(ex.getMessage());
            response.resume(Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                    .entity(ex.getMessage()).build());
        }
    }
}

To try this code, just deploy it to GlassFish 5 and open this URL:

http://localhost:8080/ch09-async-transaction/asyncService

How the asynchronous execution works
The magic happens in the AsyncTask class. We will first take a look at the performLookups method:

private void performLookups() throws NamingException {
    Context ctx = new InitialContext();
    userTransaction = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
    userBean = (UserBean) ctx.lookup("java:global/ch09-async-transaction/UserBean");
}

It will give you the instances of both UserTransaction and UserBean from the application server. Then you can relax and rely on the things already instantiated for you. As our task implements Callable<V>, it needs to implement the call() method:

@Override
public User call() throws Exception {
    performLookups();
    try {
        userTransaction.begin();
        User user = userBean.getUser();
        userTransaction.commit();
        return user;
    } catch (IllegalStateException | SecurityException
            | HeuristicMixedException | HeuristicRollbackException
            | NotSupportedException | RollbackException
            | SystemException e) {
        userTransaction.rollback();
        return null;
    }
}

You can think of Callable as a Runnable interface that returns a result. Our transaction code lives here:

userTransaction.begin();
User user = userBean.getUser();
userTransaction.commit();

And if anything goes wrong, we have the following:

} catch (IllegalStateException | SecurityException
        | HeuristicMixedException | HeuristicRollbackException
        | NotSupportedException | RollbackException
        | SystemException e) {
    userTransaction.rollback();
    return null;
}

Now we will look at AsyncService. First, we have some declarations:

private AsyncTask asyncTask;

@Resource(name = "LocalManagedExecutorService")
private ManagedExecutorService executor;

@PostConstruct
public void init() {
    asyncTask = new AsyncTask();
}

We are asking the container to give us an instance of ManagedExecutorService, which is responsible for executing the task in the enterprise context. Then we have an init() method that runs once the bean is constructed (@PostConstruct); it instantiates the task.

Now we have our task execution:

@GET
public void asyncService(@Suspended AsyncResponse response) {
    Future<User> result = executor.submit(asyncTask);

    while (!result.isDone()) {
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException ex) {
            System.err.println(ex.getMessage());
        }
    }

    try {
        response.resume(Response.ok(result.get()).build());
    } catch (InterruptedException | ExecutionException ex) {
        System.err.println(ex.getMessage());
        response.resume(Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                .entity(ex.getMessage()).build());
    }
}

Note that the executor returns a Future<User>:

Future<User> result = executor.submit(asyncTask);

This means the task will be executed asynchronously. We then check its execution status until it's done:

while (!result.isDone()) {
    try {
        TimeUnit.SECONDS.sleep(1);
    } catch (InterruptedException ex) {
        System.err.println(ex.getMessage());
    }
}

And once it's done, we write it to the asynchronous response:

response.resume(Response.ok(result.get()).build());

The full source code of this recipe is available on GitHub. So now, using transactions with asynchronous tasks in Java EE isn't such a daunting task, is it? If you found this tutorial helpful and would like to learn more, head on to the book Java EE 8 Cookbook.

Oracle announces a new pricing structure for Java
Design a RESTful web API with Java [Tutorial]
How to convert Java code into Kotlin

Elegant RESTful Client in Python for Exposing Remote Resources

Xavier Bruhiere
12 Aug 2015
6 min read
Product Hunt addicts like me might have noticed how often a "developer" tab is available on landing pages. More and more modern products offer a special entry point tailored for coders who want deeper interaction, beyond the standard end-user experience. Twitter, Myo, and Estimote are great examples of technologies an engineer could leverage for their own tool or product. Application Programming Interfaces (APIs) make it possible. Companies design them as a communication contract between the developer and their product.

We can distinguish Representational State Transfer (RESTful) APIs from programmatic ones. The latter usually offer deeper technical integration, while the former try to abstract most of the product's complexity behind intuitive remote resources (more on that later). The resulting simplicity owes a lot to the HTTP protocol and turns out to be trickier than one might think. Both RESTful servers and clients often underestimate the value of HTTP's historical rules or the challenges behind network failures.

In this article, I will share my latest experience building an HTTP+JSON API client. We are going to build a small framework in Python to interact with well-designed third-party services. One should get out of it a consistent starting point for new projects, like remotely controlling one's car!

Stack and context

Before diving in, let's state an important assumption: the APIs our client will call are well designed. They enforce RFC standards, conventions, and consistent resources. Sometimes, however, the real world throws ugly interfaces at us. Always read the documentation (if any) and deal with it.

The choice of Python should be seen as a minor implementation consideration. Nevertheless, it brings us the powerful requests package and a nice REPL to manually explore remote services. Its popularity also suggests we are likely to be able to integrate our future package into a future project.

To keep things practical, requests will hit Consul HTTP endpoints, providing us with a handy interface to our infrastructure. Consul, as a whole, is a tool for discovering and configuring services in your infrastructure. Just download the appropriate binary, move it into your $PATH, and start a new server:

consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node consul-server

We also need Python 3.4 or 2.7 with pip installed, and then to download the single dependency we mentioned earlier with pip install requests==2.7.0. Now let's have a conversation with an API!

Sending requests

APIs expose resources for manipulation through HTTP verbs. Say we need to retrieve the nodes in our cluster; the Consul documentation requires us to perform a GET /v1/catalog/nodes.

import requests

def http_get(resource, payload=None):
    """ Perform an HTTP GET request against the given endpoint. """
    # Avoid dangerous default function argument `{}`
    payload = payload or {}
    # versioning an API guarantees compatibility
    endpoint = '{}/{}/{}'.format('http://localhost:8500', 'v1', resource)
    return requests.get(
        endpoint,
        # attach parameters to the url, like `&foo=bar`
        params=payload,
        # tell the API we expect to parse JSON responses
        headers={'Accept': 'application/vnd.consul+json; version=1'})

Provided Consul is running on the same host, we get the following result:

In [4]: res = http_get('catalog/nodes')

In [5]: res.json()
Out[5]: [{'Address': '172.17.0.1', 'Node': 'consul-server'}]

Awesome: a few lines of code gave us really convenient access to Consul's information. Let's leverage OOP to abstract the nodes resource further.
Mapping resources

The idea is to consider a Catalog class whose attributes are Consul API resources. A little bit of Python magic offers an elegant way to achieve that.

class Catalog(object):

    # url specific path
    _path = 'catalog'

    def __getattr__(self, name):
        """
        Extend built-in method to add support for attributes
        related to endpoints.

        Example: agent.members runs GET /v1/agent/members
        """
        # Default behavior
        if name in self.__dict__:
            return self.__dict__[name]
        # Dynamic attribute based on the property name
        else:
            return http_get('/'.join([self._path, name]))

It might seem a little cryptic if you are not familiar with Python's built-in object methods, but the usage is crystal clear:

In [47]: catalog_ = Catalog()

In [48]: catalog_.nodes.json()
Out[48]: [{'Address': '172.17.0.1', 'Node': 'consul-server'}]

The really nice benefit of this approach is that we become very productive in supporting new resources. Just rename the previous class ClientFactory and profit:

class Status(ClientFactory):
    _path = 'status'

In [58]: status_ = Status()

In [59]: status_.peers.json()
Out[59]: ['172.17.0.1:8300']

But what if the resource we call does not exist? And, although we provide a header with Accept: application/json, what if we don't actually get back a JSON object, or we reach our rate limit?

Reading responses

Let's challenge our current implementation against those questions.

In [61]: status_.not_there
Out[61]: <Response [404]>

In [68]: # ok, that's a consistent response
In [69]: # 404 HTTP code means the resource wasn't found on server-side

In [69]: status_.not_there.json()
---------------------------------------------------------------------------
StopIteration                             Traceback (most recent call last)
...
ValueError: Expecting value: line 1 column 1 (char 0)

Well, that's not safe at all. We're going to wrap our HTTP calls with a decorator in charge of inspecting the API response.

def safe_request(fct):
    """
    Return Go-like data (i.e. actual response and possible
    error) instead of raising errors.
    """
    def inner(*args, **kwargs):
        data, error = {}, None
        try:
            res = fct(*args, **kwargs)
        except requests.exceptions.ConnectionError as error:
            return None, {'message': str(error), 'id': -1}

        if res.status_code == 200 and res.headers['content-type'] == 'application/json':
            # expected behavior
            data = res.json()
        elif res.status_code == 206 and res.headers['content-type'] == 'application/json':
            # partial response, return as-is
            data = res.json()
        else:
            # something went wrong
            error = {'id': res.status_code, 'message': res.reason}

        return data, error
    return inner

# update our old code
@safe_request
def http_get(resource):
    # ...

This implementation still requires us to check for errors instead of disposing of the data right away. But we are dealing with networks, and unexpected failures will happen. Being aware of them without crashing or wrapping every resource in try/except is a working compromise.

In [71]: res, err = status_.not_there

In [72]: print(err)
{'id': 404, 'message': 'Not Found'}

Conclusion

We just covered an opinionated Python abstraction for exposing remote resources programmatically. Subclassing the objects above allows one to quickly interact with new services, through command-line tools or an interactive prompt. Yet we only worked with the GET method. Most APIs also allow resource deletion (DELETE), update (PUT), or creation (POST), to name a few HTTP verbs.
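As a taste of what supporting those other verbs could look like, here is a hypothetical POST counterpart to http_get, following the same conventions (this helper is an illustration, not part of the original implementation, and some Consul endpoints expect PUT rather than POST for writes):

def http_post(resource, payload=None):
    """ Perform an HTTP POST request against the given endpoint. """
    endpoint = '{}/{}/{}'.format('http://localhost:8500', 'v1', resource)
    return requests.post(
        endpoint,
        # serialize the payload as a JSON request body
        json=payload or {},
        # keep the same content negotiation as http_get
        headers={'Accept': 'application/vnd.consul+json; version=1'})

Wrapped with the same safe_request decorator, it would return the familiar (data, error) pair.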
Other future work could involve:

authentication
a smarter HTTP code handler for dealing with forbidden, rate-limiting, and internal server error responses

Given the incredible services that have emerged lately (IBM Watson, Docker, ...), building API clients is an increasingly productive option for developing innovative projects.

About the author

Xavier Bruhiere is a Lead Developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.

Reactive Python – Asynchronous programming to the rescue, Part 1

Xavier Bruhiere
05 Oct 2016
7 min read
On the Confluent website, you can find this title:

Stream data changes everything

From the creators of Kafka, a real-time messaging system, this is not a surprising assertion. Data streaming infrastructures have gained in popularity, and many projects require data to be processed as soon as it shows up. This contributed to the development of famous technologies like Spark Streaming, Apache Storm, and, more broadly, websockets. The latter in particular brought real-time data feeds to web applications, trying to solve low-latency connections. Coupled with the asynchronous Node.js, you can build a powerful, event-based reactive system.

But what about Python? Given the popularity of the language in data science, would it be possible to bring the benefits of this kind of data ingestion? As this two-part post series will show, it turns out that modern Python (Python 3.4 or later) supports asynchronous data streaming apps.

Introducing asyncio

Python 3.4 introduced the asyncio module into the standard library to provision the language with:

Asynchronous I/O, event loop, coroutines and tasks

While Python treats functions as first-class objects (meaning you can assign them to variables and pass them as arguments), most developers follow an imperative programming style. It seems to be on purpose:

It requires super human discipline to write readable code in callbacks and if you don't believe me look at any piece of JavaScript code. - Guido van Rossum

So asyncio is the Pythonic answer to asynchronous programming. This paradigm makes a lot of sense for otherwise costly I/O operations, or when we need events to trigger code.

Scenario

For fun and profit, let's build such a project. We will simulate a dummy electrical circuit composed of three components:

A clock regularly ticking
A board I/O pin randomly choosing to toggle its binary state on clock events
A buzzer buzzing when the I/O pin flips to one

This sets us up with an interesting machine-to-machine communication problem to solve.

Note that the code snippets in this post make use of features like async and await introduced in Python 3.5. While it would be possible to backport them to Python 3.4, I highly recommend that you follow along with the same version or newer. Anaconda or pyenv can ease the installation process if necessary.

$ python --version
Python 3.5.1
$ pip --version
pip 8.1.2

Asynchronous websocket client/server

Our first step, the clock, will introduce both asyncio and websocket basics. We need a straightforward method that fires tick signals through a websocket and waits for acknowledgement.

# filename: sketch.py

async def clock(socket, port, tacks=3, delay=1):

The async keyword is syntactic sugar introduced in Python 3.5 to replace the previous @asyncio.coroutine decorator. The official PEP 492 explains it all, but the tl;dr is: API quality.

To simplify websocket connection plumbing, we can take advantage of the eponymous package: pip install websockets==3.5.1. It hides the protocol's complexity behind an elegant context manager.
# filename: sketch.py (continuing the body of clock)

    # the path "datafeed" in this uri will be a parameter available on the
    # other side, but we won't use it for this example
    uri = 'ws://{socket}:{port}/datafeed'.format(socket=socket, port=port)
    # manage the connection asynchronously
    async with websockets.connect(uri) as ws:
        for payload in range(tacks):
            print('[ clock ] > {}'.format(payload))
            # send payload and wait for acknowledgement
            await ws.send(str(payload))
            print('[ clock ] < {}'.format(await ws.recv()))
            time.sleep(delay)

The await keyword was introduced with async and replaces the old yield from to read values from asynchronous functions. Inside the context manager, the connection stays open and we can stream data to the server we contacted.

The server: IOPin

At the core of our application are entities capable of speaking to each other directly. To make things fun, we will expose the same API as Arduino sketches: a setup method that runs once at startup and a loop called when new data is available.

# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: factory.py

import abc
import asyncio

import websockets


class FactoryLoop(object):
    """ Glue components to manage the evented-loop model. """

    __metaclass__ = abc.ABCMeta

    def __init__(self, *args, **kwargs):
        # call user-defined initialization
        self.setup(*args, **kwargs)

    def out(self, text):
        print('[ {} ] {}'.format(type(self).__name__, text))

    @abc.abstractmethod
    def setup(self, *args, **kwargs):
        pass

    @abc.abstractmethod
    async def loop(self, channel, data):
        pass

    def run(self, host, port):
        try:
            server = websockets.serve(self.loop, host, port)
            self.out('serving on {}:{}'.format(host, port))
            asyncio.get_event_loop().run_until_complete(server)
            asyncio.get_event_loop().run_forever()
        except OSError:
            self.out('Cannot bind to this port! Is the server already running?')
        except KeyboardInterrupt:
            self.out('Keyboard interruption, aborting.')
            asyncio.get_event_loop().stop()
        finally:
            asyncio.get_event_loop().close()

The child objects will be required to implement setup and loop, while this class will take care of:

Initializing the sketch
Registering a websocket server based on an asynchronous callback (loop)
Telling the event loop to poll for... events

The websockets documentation states that the server callback is expected to have the signature on_connection(websocket, path). This is too low-level for our purpose. Instead, we can write a decorator to manage asyncio details, message passing, and error handling. We will only call self.loop with application-level-relevant information: the actual message and the websocket path.

# filename: factory.py

import functools

import websockets


def reactive(fn):

    @functools.wraps(fn)
    async def on_connection(klass, websocket, path):
        """Dispatch events and wrap execution."""
        klass.out('** new client connected, path={}'.format(path))
        # process messages as long as the connection is open or
        # an error is raised
        while True:
            try:
                message = await websocket.recv()
                acknowledgement = await fn(klass, path, message)
                await websocket.send(acknowledgement or 'n/a')
            except websockets.exceptions.ConnectionClosed as e:
                klass.out('done processing messages: {}\n'.format(e))
                break

    return on_connection

Now we can develop a readable IOPin object.
# filename: sketch.py

import random

import factory


class IOPin(factory.FactoryLoop):
    """Set an IO pin to 0 or 1 randomly."""

    def setup(self, chance=0.5, sequence=3):
        self.chance = chance
        self.sequence = sequence

    def state(self):
        """Toggle state, sometimes."""
        return 0 if random.random() < self.chance else 1

    @factory.reactive
    async def loop(self, channel, msg):
        """Callback on new data."""
        self.out('new tick triggered on {}: {}'.format(channel, msg))
        bits_stream = [self.state() for _ in range(self.sequence)]
        self.out('toggling pin state: {}'.format(bits_stream))
        # ...
        # ... toggle pin state here
        # ...
        return 'acknowledged'

We finally need some glue to run both the clock and the IOPin, and to test whether the latter toggles its state when the former fires new ticks. The following snippet uses a convenient library, click 6.6, to parse command-line arguments.

#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: arduino.py

import sys
import asyncio

import click

import sketchs


@click.command()
@click.argument('sketch')
@click.option('-s', '--socket', default='localhost', help='Websocket to bind to')
@click.option('-p', '--port', default=8765, help='Websocket port to bind to')
@click.option('-t', '--tacks', default=5, help='Number of clock ticks')
@click.option('-d', '--delay', default=1, help='Clock intervals')
def main(sketch, **flags):
    if sketch == 'clock':
        # delegate the asynchronous execution to the event loop
        asyncio.get_event_loop().run_until_complete(sketchs.clock(**flags))
    elif sketch == 'iopin':
        # arguments in the constructor go as-is to our `setup` method
        sketchs.IOPin(chance=0.6).run(flags['socket'], flags['port'])
    else:
        print('unknown sketch, please choose clock, iopin or buzzer')
        return 1
    return 0


if __name__ == '__main__':
    sys.exit(main())

Don't forget to chmod +x the script. Start the server in a first terminal with ./arduino.py iopin. When it is listening for connections, start the clock with ./arduino.py clock and watch them communicate! Note that we used common default host and port values here so that they can find each other.

We have a good start with our app. In Part 2, we will further explore peer-to-peer communication, service discovery, and the streaming machine-to-machine concept.

About the author

Xavier Bruhiere is a lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high intensity sports.

Testing Your Application with cljs.test

Packt
11 May 2016
13 min read
In this article written by David Jarvis, Rafik Naccache, and Allen Rohner, authors of the book Learning ClojureScript, we'll take a look at how to configure our ClojureScript application or library for testing. As usual, we'll start by creating a new project to play around with:

$ lein new figwheel testing

(For more resources related to this topic, see here.)

We'll be playing around in a test directory. Most JVM Clojure projects have one already, but since the default Figwheel template doesn't include a test directory, let's make one first (following the same convention used with source directories; that is, instead of src/$PROJECT_NAME, we'll create test/$PROJECT_NAME):

$ mkdir -p test/testing

We'll now want to make sure that Figwheel knows it has to watch the test directory for file modifications. To do that, we will edit the dev build in our project.clj project's :cljsbuild map so that its :source-paths vector includes both src and test. Your new dev build configuration should look like the following:

{:id "dev"
 :source-paths ["src" "test"]
 ;; If no code is to be run, set :figwheel true for continued automagical reloading
 :figwheel {:on-jsload "testing.core/on-js-reload"}
 :compiler {:main testing.core
            :asset-path "js/compiled/out"
            :output-to "resources/public/js/compiled/testing.js"
            :output-dir "resources/public/js/compiled/out"
            :source-map-timestamp true}}

Next, we'll get the old Figwheel REPL going so that we can have our ever-familiar hot reloading:

$ cd testing
$ rlwrap lein figwheel

Don't forget to navigate a browser window to http://localhost:3449/ to get the browser REPL to connect.

Now, let's create a new core_test.cljs file in the test/testing directory. By convention, most libraries and applications in Clojure and ClojureScript have test files that correspond to source files with the suffix _test. In this project, this means that test/testing/core_test.cljs is intended to contain the tests for src/testing/core.cljs. Let's get started by running tests on a single file. Inside core_test.cljs, add the following code:

(ns testing.core-test
  (:require [cljs.test :refer-macros [deftest is]]))

(deftest i-should-fail
  (is (= 1 0)))

(deftest i-should-succeed
  (is (= 1 1)))

This code first requires two of the most important cljs.test macros, and then gives us two simple examples of what a failing test and a successful test look like. At this point, we can run our tests from the Figwheel REPL:

cljs.user=> (require 'testing.core-test)
;; => nil
cljs.user=> (cljs.test/run-tests 'testing.core-test)

Testing testing.core-test

FAIL in (i-should-fail) (cljs/test.js?zx=icyx7aqatbda:430:14)
expected: (= 1 0)
  actual: (not (= 1 0))

Ran 2 tests containing 2 assertions.
1 failures, 0 errors.
;; => nil

At this point, what we've got is tolerable, but it's not really practical for testing a larger application. We don't want to have to test our application in the REPL and pass in our test namespaces one by one. The current idiomatic solution for this in ClojureScript is to write a separate test runner that is responsible for requiring all of your test namespaces and then running all of your tests. Let's take a look at what this looks like. We'll start by creating another test namespace.
Let's call this one app_test.cljs, and put the following in it:

(ns testing.app-test
  (:require [cljs.test :refer-macros [deftest is]]))

(deftest another-successful-test
  (is (= 4 (count "test"))))

Nothing remarkable here; it's just another test namespace with a single test that should pass by itself. Let's quickly make sure that's the case at the REPL:

cljs.user=> (require 'testing.app-test)
nil
cljs.user=> (cljs.test/run-tests 'testing.app-test)

Testing testing.app-test

Ran 1 tests containing 1 assertions.
0 failures, 0 errors.
;; => nil

Perfect. Now, let's write a test runner. Open a new file that we'll simply call test_runner.cljs, and include the following:

(ns testing.test-runner
  (:require [cljs.test :refer-macros [run-tests]]
            [testing.app-test]
            [testing.core-test]))

;; This isn't strictly necessary, but is a good idea depending
;; upon your application's ultimate runtime engine.
(enable-console-print!)

(defn run-all-tests []
  (run-tests 'testing.app-test
             'testing.core-test))

Again, nothing surprising: we're just making a single function that runs all of our tests. This is handy for us at the REPL:

cljs.user=> (testing.test-runner/run-all-tests)

Testing testing.app-test

Testing testing.core-test

FAIL in (i-should-fail) (cljs/test.js?zx=icyx7aqatbda:430:14)
expected: (= 1 0)
  actual: (not (= 1 0))

Ran 3 tests containing 3 assertions.
1 failures, 0 errors.
;; => nil

Ultimately, however, we want something we can run at the command line so that we can use it in a continuous integration environment. There are a number of ways to configure this directly, but if we're clever, we can let someone else do the heavy lifting for us. Enter doo, the handy ClojureScript testing plugin for Leiningen.

Using doo for easier testing configuration

doo is a library and Leiningen plugin for running cljs.test in many different JavaScript environments. It makes it easy to test your ClojureScript regardless of whether you're writing for the browser or for the server, and it also includes file-watching capabilities, such as Figwheel, so that you can automatically rerun tests on file changes. The doo project page can be found at https://github.com/bensu/doo.

To configure our project to use doo, we first need to add it to the list of plugins in our project.clj file. Modify the :plugins key so that it looks like the following:

:plugins [[lein-figwheel "0.5.2"]
          [lein-doo "0.1.6"]
          [lein-cljsbuild "1.1.3" :exclusions [[org.clojure/clojure]]]]

Next, we will add a new cljsbuild build configuration for our test runner. Add the following build map after the dev build map we've been working with until now:

{:id "test"
 :source-paths ["src" "test"]
 :compiler {:main testing.test-runner
            :output-to "resources/public/js/compiled/testing_test.js"
            :optimizations :none}}

This configuration tells cljsbuild to use both our src and test directories, just like our dev profile. It adds some different compiler options, however. First, we're no longer using testing.core as our main namespace; instead, we'll use our test runner's namespace, testing.test-runner. We also change the output JavaScript file to a different location from our compiled application code. Lastly, we make sure that we pass in :optimizations :none so that the compiler runs quickly and doesn't have to do any magic to look things up.
Note that our currently running Figwheel process won't know that we've added lein-doo to our list of plugins or that we've added a new build configuration. If you want to make Figwheel aware of doo in a way that'll allow them to play nicely together, you should also add doo as a dependency to your project. Once you've done that, exit the Figwheel process and restart it after you've saved the changes to project.clj.

Lastly, we need to modify our test runner namespace so that it's compatible with doo. To do this, open test_runner.cljs and change it to the following:

(ns testing.test-runner
  (:require [doo.runner :refer-macros [doo-tests]]
            [testing.app-test]
            [testing.core-test]))

;; This isn't strictly necessary, but is a good idea depending
;; upon your application's ultimate runtime engine.
(enable-console-print!)

(doo-tests 'testing.app-test
           'testing.core-test)

This shouldn't look too different from our original test runner: we're just importing from doo.runner rather than cljs.test and using doo-tests instead of a custom runner function. The doo-tests runner works very similarly to cljs.test/run-tests, but it places hooks around the tests to know when to start and finish them. We're also putting this at the top level of our namespace rather than wrapping it in a particular function.

The last thing we need to do is install a JavaScript runtime that we can use to execute our tests. Up until now, we've been using the browser via Figwheel, but ideally we want to be able to run our tests in a headless environment as well. For this purpose, we recommend installing PhantomJS (though other execution environments are also fine).

If you're on OS X and have Homebrew installed (http://www.brew.sh), installing PhantomJS is as simple as typing brew install phantomjs. If you're not on OS X or don't have Homebrew, you can find instructions on how to install PhantomJS on the project's website at http://phantomjs.org/. The key thing is that the following should work:

$ phantomjs -v
2.0.0

Once you've got PhantomJS installed, you can invoke your test runner from the command line with the following:

$ lein doo phantom test once

;; ======================================================================
;; Testing with Phantom:

Testing testing.app-test

Testing testing.core-test

FAIL in (i-should-fail) (:)
expected: (= 1 0)
  actual: (not (= 1 0))

Ran 3 tests containing 3 assertions.
1 failures, 0 errors.
Subprocess failed

Let's break down this command. The first part, lein doo, just tells Leiningen to invoke the doo plugin. Next, we have phantom, which tells doo to use PhantomJS as its running environment. The doo plugin supports a number of other environments, including Chrome, Firefox, Internet Explorer, Safari, Opera, SlimerJS, NodeJS, Rhino, and Nashorn. Be aware that if you're interested in running doo in one of these other environments, you may have to configure and install additional software. For instance, if you want to run tests on Chrome, you'll need to install Karma as well as the appropriate Karma npm modules to enable Chrome interaction. Next we have test, which refers to the cljsbuild build ID we set up earlier. Lastly, we have once, which tells doo to just run the tests and not set up a filesystem watcher. If, instead, we wanted doo to watch the filesystem and rerun tests on any changes, we would just use lein doo phantom test.

Testing fixtures

The cljs.test project has support for adding fixtures to your tests that can run before and after your tests.
Test fixtures are useful for establishing isolated state between tests; for instance, you can use fixtures to set up a specific database state before each test and to tear it down afterward. You can add them to your ClojureScript tests by declaring them with the use-fixtures macro within the testing namespace you want the fixtures applied to. Let's see what this looks like in practice by changing one of our existing tests and adding some fixtures to it. Modify app_test.cljs to the following:

(ns testing.app-test
  (:require [cljs.test :refer-macros [deftest is use-fixtures]]))

;; Run these fixtures for each test.
;; We could also use :once instead of :each in order to run
;; fixtures once for the entire namespace instead of once for
;; each individual test.
(use-fixtures :each
  {:before (fn [] (println "Setting up tests..."))
   :after  (fn [] (println "Tearing down tests..."))})

(deftest another-successful-test
  ;; Give us an idea of when this test actually executes.
  (println "Running a test...")
  (is (= 4 (count "test"))))

Here, we've added a call to use-fixtures that prints to the console before and after running the test, and we've added a println call to the test itself so that we know when it executes. Now when we run this test, we get the following:

$ lein doo phantom test once

;; ======================================================================
;; Testing with Phantom:

Testing testing.app-test
Setting up tests...
Running a test...
Tearing down tests...

Testing testing.core-test

FAIL in (i-should-fail) (:)
expected: (= 1 0)
  actual: (not (= 1 0))

Ran 3 tests containing 3 assertions.
1 failures, 0 errors.
Subprocess failed

Note that our fixtures get called in the order we expect them to.

Asynchronous testing

Because client-side code is frequently asynchronous and JavaScript is single-threaded, we need a way to support asynchronous tests. To do this, we can use the async macro from cljs.test. Let's take a look at an example using an asynchronous HTTP GET request. First, let's modify our project.clj file to add cljs-ajax to our dependencies. Our :dependencies project key should now look something like this:

:dependencies [[org.clojure/clojure "1.8.0"]
               [org.clojure/clojurescript "1.7.228"]
               [cljs-ajax "0.5.4"]
               [org.clojure/core.async "0.2.374"
                :exclusions [org.clojure/tools.reader]]]

Next, let's create a new async_test.cljs file in our test/testing directory. Inside it, we will add the following code:

(ns testing.async-test
  (:require [ajax.core :refer [GET]]
            [cljs.test :refer-macros [deftest is async]]))

(deftest test-async
  (GET "http://www.google.com"
       ;; will always fail from PhantomJS because
       ;; `Access-Control-Allow-Origin` won't allow
       ;; our headless browser to make requests to Google.
       {:error-handler
        (fn [res]
          (is (= (:status-text res) "Request failed."))
          (println "Test finished!"))}))

Note that we're not using async in our test at the moment. Let's try running this test with doo (don't forget that you have to add testing.async-test to test_runner.cljs!):

$ lein doo phantom test once

...

Testing testing.async-test

...

Ran 4 tests containing 3 assertions.
1 failures, 0 errors.
Subprocess failed

Now, our test here passes, but note that the println async code never fires, and our additional assertion doesn't get called (looking back at our previous examples, since we've added a new is assertion, we should expect to see four assertions in the final summary)!
If we actually want our test to properly validate the error-handler callback within the context of the test, we need to wrap it in an async block. Doing so gives us a test that looks like the following:

(deftest test-async
  (async done
    (GET "http://www.google.com"
         ;; will always fail from PhantomJS because
         ;; `Access-Control-Allow-Origin` won't allow
         ;; our headless browser to make requests to Google.
         {:error-handler
          (fn [res]
            (is (= (:status-text res) "Request failed."))
            (println "Test finished!")
            (done))})))

Now, let's try to run our tests again:

$ lein doo phantom test once

...

Testing testing.async-test
Test finished!

...

Ran 4 tests containing 4 assertions.
1 failures, 0 errors.
Subprocess failed

Awesome! Note that this time we see the printed statement from our callback, and we can see that cljs.test properly ran all four of our assertions.

Asynchronous fixtures

One final "gotcha" on testing: the fixtures we talked about earlier in this article do not handle asynchronous code automatically. This means that if you have a :before fixture that executes asynchronous logic, your test can begin running before your fixture has completed! To get around this, all you need to do is wrap your :before fixture in an async block, just like with asynchronous tests. Consider the following, for instance:

(use-fixtures :once
  {:before #(async done ... (done))
   :after  #(do ...)})

Summary

This concludes our section on cljs.test. Testing, whether in ClojureScript or any other language, is a critical software engineering best practice to ensure that your application behaves the way you expect it to and to protect you and your fellow developers from accidentally introducing bugs to your application. With cljs.test and doo, you have the power and flexibility to test your ClojureScript application with multiple browsers and JavaScript environments and to integrate your tests into a larger continuous testing framework.

Further resources on this subject:

Clojure for Domain-specific Languages - Design Concepts with Clojure [article]
Visualizing my Social Graph with d3.js [article]
Improving Performance with Parallel Programming [article]

Kotlin Basics

Packt
16 Nov 2016
7 min read
In this article by Stephen Samuel and Stefan Bocutiu, the authors of the book Programming Kotlin, it's time to discover the fundamental building blocks of Kotlin. This article will cover the basic constructs of the language, such as defining variables, control flow syntax, type inference, and smart casting, as well as its basic types and their hierarchy.

(For more resources related to this topic, see here.)

For those coming from a Java background, this article will also highlight some of the key differences between Kotlin and Java and how Kotlin's language features are able to exist on the JVM. For those who are not existing Java programmers, those differences can be safely skipped.

vals and vars

Kotlin has two keywords for declaring variables: val and var. A var is a mutable variable, that is, a variable that can be changed to another value by reassigning it. This is equivalent to declaring a variable in Java:

var name = "kotlin"

Alternatively, the var can be initialized later:

var name: String
name = "kotlin"

Variables defined with var can be reassigned, since they are mutable:

var name = "kotlin"
name = "more kotlin"

The val keyword is used to declare a read-only variable. This is equivalent to declaring a final variable in Java. A val must be initialized when created, since it cannot be changed later:

val name = "kotlin"

A read-only variable does not mean the instance itself is automatically immutable. The instance may still allow its member variables to be changed via functions or properties, but the variable itself cannot change its value or be reassigned to another value.

Type inference

Did you notice in the previous section that the type of the variable was not included when it was initialized? This is different from Java, where the type of the variable must always accompany its declaration. Even though Kotlin is a strongly typed language, we don't always need to declare types explicitly. The compiler can attempt to figure out the type of an expression from the information included in the expression. A simple val is an easy case for the compiler because the type is clear from the right-hand side. This mechanism is called type inference. It reduces boilerplate while keeping the type safety we expect of a modern language.

Values and variables are not the only places where type inference can be used. It can also be used in closures, where the type of the parameter(s) can be inferred from the function signature. It can also be used in single-line functions, where the return type can be inferred from the expression in the function, as this example demonstrates:

fun plusOne(x: Int) = x + 1

Sometimes, it is helpful to add an explicit type annotation if the type inferred by the compiler is not exactly what you want:

val explicitType: Number = 12.3

Basic types

One of the big differences between Kotlin and Java is that in Kotlin, everything is an object. If you come from a Java background, then you will already be aware that in Java there are special primitive types, which are treated differently from objects. They cannot be used as generic types, do not support method/function calls, and cannot be assigned null. An example is the boolean primitive type. Java introduced wrapper objects to offer a workaround in which primitive types are wrapped in objects, so that java.lang.Boolean wraps a boolean in order to smooth over the distinction. Kotlin removes this necessity entirely from the language by promoting the primitives to full objects.
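To make the "everything is an object" point concrete, here is a small runnable sketch (the values are arbitrary) showing member calls on types that Java would treat as primitives:

fun main(args: Array<String>) {
    val count: Int = 42          // compiled to a JVM primitive int where possible
    println(count.toDouble())    // yet it still supports member function calls: 42.0
    val flag: Boolean? = null    // a nullable type forces the boxed representation
    println(flag ?: false)       // prints: false
}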
Whenever possible, the Kotlin compiler will map basic types back to JVM primitives for performance reasons. However, the values must sometimes be boxed, such as when the type is nullable or when it is used in generics. Two different values that are boxed might not use the same instance, so referential equality is not guaranteed on boxed values.

Numbers

The built-in number types are as follows:

Type     Width (bits)
long     64
int      32
short    16
byte     8
double   64
float    32

To create a number literal, use one of the following forms:

val int = 123
val long = 123456L
val double = 12.34
val float = 12.34F
val hexadecimal = 0xAB
val binary = 0b01010101

You will notice that a long value requires the suffix L and a float the suffix F. The double type is used as the default for floating-point numbers, and int for integral numbers. The hexadecimal and binary literals use the prefixes 0x and 0b respectively.

Kotlin does not support the automatic widening of numbers, so conversion must be invoked explicitly. Each number type has functions that convert the value to one of the other number types:

val int = 123
val long = int.toLong()

val float = 12.34F
val double = float.toDouble()

The full set of methods for conversion between types is as follows:

toByte()
toShort()
toInt()
toLong()
toFloat()
toDouble()
toChar()

Unlike Java, there are no built-in bitwise operators; there are named functions instead. These can be invoked like operators (except inv, which is called as a regular function):

val leftShift = 1 shl 2
val rightShift = 1 shr 2
val unsignedRightShift = 1 ushr 2
val and = 1 and 0x00001111
val or = 1 or 0x00001111
val xor = 1 xor 0x00001111
val inv = 1.inv()

Booleans

Booleans are rather standard and support the usual negation, conjunction, and disjunction operations. Conjunction and disjunction are lazily evaluated, so if the left-hand side already determines the result, the right-hand side will not be evaluated:

val x = 1
val y = 2
val z = 2
val isTrue = x < y && x < z
val alsoTrue = x == y || y == z

Chars

Chars represent a single character. Character literals use single quotes, such as 'a' or 'Z'. Chars also support escaping for the following characters: \t, \b, \n, \r, ', ", \\, and \$. All Unicode characters can be represented using the respective Unicode number, like so: '\u1234'. Note that the char type is not treated as a number, unlike in Java.

Strings

Just as in Java, strings are immutable. String literals can be created using double or triple quotes. Double quotes create an escaped string; in an escaped string, special characters such as newline must be escaped:

val string = "string with \n new line"

Triple quotes create a raw string. In a raw string, no escaping is necessary, and all characters can be included:

val rawString = """
raw string is super useful for
strings that span many lines
"""

Strings also provide an iterator function, so they can be used in a for loop.

Arrays

In Kotlin, we can create an array using the arrayOf() library function:

val array = arrayOf(1, 2, 3)

Alternatively, we can create an array from an initial size and a function that is used to generate each element:

val perfectSquares = Array(10, { k -> k * k })

Unlike Java, arrays are not treated specially by the language and are regular collection classes. Instances of Array provide an iterator function and a size function, as well as get and set functions.
The get and set functions are also available through bracket syntax, as in many C-style languages:

val element1 = array[0]
val element2 = array[1]
array[2] = 5

To avoid boxing types that will ultimately be represented as primitives in the JVM, Kotlin provides alternative array classes that are specialized for each of the primitive types. This allows performance-critical code to use arrays as efficiently as it would in plain Java. The provided classes are ByteArray, CharArray, ShortArray, IntArray, LongArray, BooleanArray, FloatArray, and DoubleArray.

Comments

Comments in Kotlin will come as no surprise to most programmers, as they are the same as in Java, JavaScript, and C, among other languages. Both block comments and line comments are supported:

// line comment

/*
A block comment
can span many lines
*/

Packages

Packages allow us to split code into namespaces. Any file may begin with a package declaration:

package com.packt.myproject

class Foo

fun bar(): String = "bar"

The package name is used to give us the fully qualified name (FQN) of a class, object, interface, or function. In the previous example, the Foo class has the FQN com.packt.myproject.Foo, and the top-level function bar has the FQN com.packt.myproject.bar.

Summary

In Kotlin, everything is an object in the sense that we can call member functions and properties on any variable. Some types are built in because their implementation is optimized, but to the user they look like ordinary classes. In this article, we described most of these types: numbers, characters, booleans, and arrays.

Further resources on this subject:

Responsive Applications with Asynchronous Programming [Article]
Asynchronous Programming in F# [Article]
Go Programming Control Flow [Article]
Working with a Neo4j Embedded Database

Packt
09 May 2014
6 min read
Neo4j is a graph database, which means that it does not use tables and rows to represent data logically; instead, it uses nodes and relationships. Both nodes and relationships can have a number of properties. While relationships must have one direction and one type, nodes can have a number of labels. For example, the following diagram shows three nodes and their relationships, where every node has a label (language or graph database), while relationships have a type (QUERY_LANGUAGE_OF and WRITTEN_IN). The properties used in the graph shown in the diagram are: name, type, and from. Note that every relationship must have exactly one type and one direction, whereas labels for nodes are optional and can be multiple.

Neo4j running modes

Neo4j can be used in two modes:

An embedded database in a Java application
A standalone server via REST

In either case, this choice does not affect the way you query and work with the database. It's only an architectural choice driven by the nature of the application (standalone versus client-server), performance, monitoring, and safety of data.

An embedded database

An embedded Neo4j database is the best choice for performance. It runs in the same process as the client application that hosts it and stores data in the given path. Thus, an embedded database must be created programmatically. We choose an embedded database for the following reasons:

When we use Java as the programming language for our project
When our application is standalone

Preparing the development environment

The fastest way to prepare the IDE for Neo4j is by using Maven. Maven is a dependency management and build automation tool. In the following procedure, we will use NetBeans 7.4, but it works in a very similar way in other IDEs (for Eclipse, you would need the m2eclipse plugin). The procedure is described as follows:

1. Create a new Maven project as shown in the following screenshot.
2. In the next page of the wizard, name the project, set a valid project location, and then click on Finish.
3. After NetBeans has created the project, expand Project Files in the project tree and open the pom.xml file.
4. In the <dependencies> tag, insert the following XML code:

<dependencies>
  <dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j</artifactId>
    <version>2.0.1</version>
  </dependency>
</dependencies>
<repositories>
  <repository>
    <id>neo4j</id>
    <url>http://m2.neo4j.org/content/repositories/releases/</url>
    <releases>
      <enabled>true</enabled>
    </releases>
  </repository>
</repositories>

This code tells Maven which dependency our project uses, namely Neo4j. The version we have used here is 2.0.1; of course, you can specify the latest available version. Once the file is saved, Maven resolves the dependency, downloads the required JAR files, and updates the Java build path. Now, the project is ready to use Neo4j and Cypher.

Creating an embedded database

Creating an embedded database is straightforward. First of all, to create a database, we need a GraphDatabaseFactory class, which can be done with the following code:

GraphDatabaseFactory graphDbFactory = new GraphDatabaseFactory();

Then, we can invoke the newEmbeddedDatabase method with the following code:

GraphDatabaseService graphDb = graphDbFactory
    .newEmbeddedDatabase("data/dbName");

Now, with the GraphDatabaseService class, we can fully interact with the database: create nodes, create relationships, and set properties and indexes.
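As a quick illustration of that last point, here is a minimal sketch that creates a labeled node with a property inside a transaction. It assumes the Neo4j 2.x embedded API used above, where DynamicLabel is the label factory; treat it as a sketch rather than production code:

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class CreateNodeExample {
    public static void main(String[] args) {
        GraphDatabaseService graphDb = new GraphDatabaseFactory()
            .newEmbeddedDatabase("data/dbName");

        // every write operation must happen inside a transaction
        try (Transaction tx = graphDb.beginTx()) {
            Node cypher = graphDb.createNode(DynamicLabel.label("Language"));
            cypher.setProperty("name", "Cypher");
            tx.success(); // mark the transaction as successful so it commits
        }

        graphDb.shutdown();
    }
}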
Invoking Cypher from Java

To execute Cypher queries on a Neo4j database, you need an instance of ExecutionEngine; this class is responsible for parsing and running Cypher queries, returning results in an ExecutionResult instance:

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
// ...
ExecutionEngine engine = new ExecutionEngine(graphDb);
ExecutionResult result = engine.execute("MATCH (e:Employee) RETURN e");

Note that we use the org.neo4j.cypher.javacompat package and not the org.neo4j.cypher package, even though they are almost the same. The reason is that Cypher is written in Scala, and the Cypher authors provide us with the former package for better Java compatibility.

With the results, we can do one of the following:

Dump them to a string value
Convert them to a single-column iterator
Iterate over the full rows

Dumping to a string is useful for testing purposes:

String dumped = result.dumpToString();

If we print the dumped string to the standard output stream, we get a table with a single column (e) that contains the nodes. Each node is dumped with all its properties. The numbers between the square brackets are the node IDs, which are the long and unique values assigned by Neo4j on the creation of the node.

When the result is a single column, or we need only one column of our result, we can get an iterator over one column with the following code:

import org.neo4j.graphdb.ResourceIterator;
// ...
ResourceIterator<Node> nodes = result.columnAs("e");

Then, we can iterate over that column in the usual way, as shown in the following code:

while(nodes.hasNext()) {
    Node node = nodes.next();
    // do something with node
}

However, Neo4j provides a syntactic-sugar utility to shorten the iteration code:

import org.neo4j.helpers.collection.IteratorUtil;
// ...
for (Node node : IteratorUtil.asIterable(nodes)) {
    // do something with node
}

If we need to iterate over a multiple-column result, we would write this code in the following way:

ResourceIterator<Map<String, Object>> rows = result.iterator();
for(Map<String,Object> row : IteratorUtil.asIterable(rows)) {
    Node n = (Node) row.get("e");
    try(Transaction t = n.getGraphDatabase().beginTx()) {
        // do something with node
    }
}

The iterator function returns an iterator of maps, where the keys are the names of the columns. Note that when we have to work with nodes, even if they are returned by a Cypher query, we have to work in a transaction. In fact, Neo4j requires that every time we work with the database, either reading from or writing to it, we must be in a transaction. The only exception is when we launch a Cypher query. If we launch the query within an existing transaction, Cypher will work as any other operation; no change will be persisted to the database until we commit the transaction. However, if we run the query outside any transaction, Cypher will open a transaction for us and will commit the changes at the end of the query.

Summary

We have now completed the setting up of a Neo4j database. We also learned about Cypher pattern matching.
Using R6 classes in R to retrieve live data for markets and wallets

Pravin Dhandre
23 Apr 2018
11 min read
In this tutorial, you will learn to create a simple requester to request external information from an API over the internet. You will also learn to develop exchange and wallet infrastructure using R programming.

Creating a simple requester to isolate API calls

Now, we will focus on how we actually retrieve live data. This functionality will also be implemented using R6 classes, as the interactions can be complex. First of all, we create a simple Requester class that contains the logic to retrieve data from JSON APIs found elsewhere on the internet and that will be used to get our live cryptocurrency data for wallets and markets. We don't want logic that interacts with external APIs spread all over our classes, so we centralize it here to manage it as more specialized needs come into play later. As you can see, all this object does is offer the public request() method, and all that method does is use the fromJSON() function from the jsonlite package to call the URL that is passed to it and send back the data it received. Specifically, it sends it back as a dataframe when the data received from the external API can be coerced into dataframe form:

library(jsonlite)
library(R6)  # R6Class() comes from the R6 package

Requester <- R6Class(
  "Requester",
  public = list(
    request = function(URL) {
      return(fromJSON(URL))
    }
  )
)

Developing our exchanges infrastructure

Our exchanges have multiple markets inside, and that's the abstraction we will define now. A Market has various private attributes, as we saw before when we defined what data is expected from each file, and that's the same data we see in our constructor. It also offers a data() method to send back a list with the data that should be saved to a database. Finally, it provides setters and getters as required. Note that the getter for the price depends on what units are requested, which can be either usd or btc, to get a market's asset price in terms of US Dollars or Bitcoin, respectively:

Market <- R6Class(
  "Market",
  public = list(
    initialize = function(timestamp, name, symbol, rank, price_btc, price_usd) {
      private$timestamp <- timestamp
      private$name <- name
      private$symbol <- symbol
      private$rank <- rank
      private$price_btc <- price_btc
      private$price_usd <- price_usd
    },
    data = function() {
      return(list(
        timestamp = private$timestamp,
        name = private$name,
        symbol = private$symbol,
        rank = private$rank,
        price_btc = private$price_btc,
        price_usd = private$price_usd
      ))
    },
    set_timestamp = function(timestamp) {
      private$timestamp <- timestamp
    },
    get_symbol = function() {
      return(private$symbol)
    },
    get_rank = function() {
      return(private$rank)
    },
    get_price = function(base) {
      if (base == 'btc') {
        return(private$price_btc)
      } else if (base == 'usd') {
        return(private$price_usd)
      }
    }
  ),
  private = list(
    timestamp = NULL,
    name = "",
    symbol = "",
    rank = NA,
    price_btc = NA,
    price_usd = NA
  )
)

Now that we have our Market definition, we proceed to create our Exchange definition. This class will receive an exchange name as name and will use the exchange_requester_factory() function to get an instance of the corresponding ExchangeRequester. It also offers an update_markets() method that will be used to retrieve market data with the private markets() method and store it to disk using the timestamp and storage objects passed to it. Note that instead of passing the timestamp through the arguments of the private markets() method, it's saved as a class attribute and used within the private insert_metadata() method.
This technique provides cleaner code, since the timestamp does not need to be passed through each function and can be retrieved when necessary. The private markets() method calls the public markets() method of the ExchangeRequester instance saved in the private requester attribute (which was assigned by the factory) and applies the private insert_metadata() method to update the timestamp of those objects with the one sent to the public update_markets() method call, before sending them to be written to the database:

source("./requesters/exchange-requester-factory.R", chdir = TRUE)

Exchange <- R6Class(
  "Exchange",
  public = list(
    initialize = function(name) {
      private$requester <- exchange_requester_factory(name)
    },
    update_markets = function(timestamp, storage) {
      private$timestamp <- unclass(timestamp)
      storage$write_markets(private$markets())
    }
  ),
  private = list(
    requester = NULL,
    timestamp = NULL,
    markets = function() {
      return(lapply(private$requester$markets(), private$insert_metadata))
    },
    insert_metadata = function(market) {
      market$set_timestamp(private$timestamp)
      return(market)
    }
  )
)

Now, we need to provide a definition for our ExchangeRequester implementations. As in the case of the Database, this ExchangeRequester will act as an interface definition that will be implemented by the CoinMarketCapRequester. We see that the ExchangeRequester specifies that all exchange requester instances should provide a public markets() method, and that a list is expected from such a method. From context, we know that this list should contain Market instances. Also, each ExchangeRequester implementation will contain a Requester object by default, since it's created and assigned to the requester private attribute upon class instantiation. Finally, each implementation will also have to provide a create_market() private method, and will be able to use the request() private method to communicate with the Requester method request() we defined previously:

source("../../../utilities/requester.R")

KNOWN_ASSETS = list(
  "BTC" = "Bitcoin",
  "LTC" = "Litecoin"
)

ExchangeRequester <- R6Class(
  "ExchangeRequester",
  public = list(
    markets = function() list()
  ),
  private = list(
    requester = Requester$new(),
    create_market = function(resp) NULL,
    request = function(URL) {
      return(private$requester$request(URL))
    }
  )
)

Now we proceed to provide an implementation for CoinMarketCapRequester. As you can see, it inherits from ExchangeRequester, and it provides the required method implementations. Specifically, the markets() public method calls the private request() method from ExchangeRequester, which in turn calls the request() method from Requester, as we have seen, to retrieve data from the private URL specified. If you request data from CoinMarketCap's API by opening a web browser and navigating to the URL shown (https://api.coinmarketcap.com/v1/ticker), you will get a list of market data. That is the data that will be received in our CoinMarketCapRequester instance in the form of a dataframe, thanks to the Requester object, and will be transformed into numeric data where appropriate using the private clean() method, so that it's later used to create Market instances with the apply() function call, which in turn calls the create_market() private method. Note that the timestamp is set to NULL for all markets created this way because, as you may remember from our Exchange class, it's set before writing it to the database.
There's no need to send the timestamp information all the way down to the CoinMarketCapRequester, since we can simply write it at the Exchange level, right before we send the data to the database:

source("./exchange-requester.R")
source("../market.R")

CoinMarketCapRequester <- R6Class(
  "CoinMarketCapRequester",
  inherit = ExchangeRequester,
  public = list(
    markets = function() {
      data <- private$clean(private$request(private$URL))
      return(apply(data, 1, private$create_market))
    }
  ),
  private = list(
    URL = "https://api.coinmarketcap.com/v1/ticker",
    create_market = function(row) {
      timestamp <- NULL
      return(Market$new(
        timestamp,
        row[["name"]],
        row[["symbol"]],
        row[["rank"]],
        row[["price_btc"]],
        row[["price_usd"]]
      ))
    },
    clean = function(data) {
      data$price_usd <- as.numeric(data$price_usd)
      data$price_btc <- as.numeric(data$price_btc)
      data$rank <- as.numeric(data$rank)
      return(data)
    }
  )
)

Finally, here's the code for our exchange_requester_factory(). As you can see, it's basically the same idea we have used for our other factories, and its purpose is to easily let us add more implementations for our ExchangeRequester by simply adding else-if statements to it:

source("./coinmarketcap-requester.R")

exchange_requester_factory <- function(name) {
  if (name == "CoinMarketCap") {
    return(CoinMarketCapRequester$new())
  } else {
    stop("Unknown exchange name")
  }
}

Developing our wallets infrastructure

Now that we are able to retrieve live price data from exchanges, we turn to our Wallet definition. As you can see, it specifies the type of private attributes we expect for the data that it needs to handle, as well as the public data() method to create the list of data that needs to be saved to a database at some point. It also provides getters for email, symbol, and address, and the public update_assets() method, which will be used to get and save assets into the database, just as we did in the case of Exchange. As a matter of fact, the techniques followed are exactly the same, so we won't explain them again:

source("./requesters/wallet-requester-factory.R", chdir = TRUE)

Wallet <- R6Class(
  "Wallet",
  public = list(
    initialize = function(email, symbol, address, note) {
      private$requester <- wallet_requester_factory(symbol, address)
      private$email <- email
      private$symbol <- symbol
      private$address <- address
      private$note <- note
    },
    data = function() {
      return(list(
        email = private$email,
        symbol = private$symbol,
        address = private$address,
        note = private$note
      ))
    },
    get_email = function() {
      return(as.character(private$email))
    },
    get_symbol = function() {
      return(as.character(private$symbol))
    },
    get_address = function() {
      return(as.character(private$address))
    },
    update_assets = function(timestamp, storage) {
      private$timestamp <- timestamp
      storage$write_assets(private$assets())
    }
  ),
  private = list(
    timestamp = NULL,
    requester = NULL,
    email = NULL,
    symbol = NULL,
    address = NULL,
    note = NULL,
    assets = function() {
      return(lapply(private$requester$assets(), private$insert_metadata))
    },
    insert_metadata = function(asset) {
      timestamp(asset) <- unclass(private$timestamp)
      email(asset) <- private$email
      return(asset)
    }
  )
)

Implementing our wallet requesters

The WalletRequester will be conceptually similar to the ExchangeRequester. It will be an interface, and it will be implemented by our BTCRequester and LTCRequester classes. As you can see, it requires a public method called assets() to be implemented, returning a list of Asset instances.
It also requires a private create_asset() method to be implemented, which should return individual Asset instances, and a private url method that will build the URL required for the API call. It offers a request() private method that will be used by implementations to retrieve data from external APIs:

source("../../../utilities/requester.R")

WalletRequester <- R6Class(
  "WalletRequester",
  public = list(
    assets = function() list()
  ),
  private = list(
    requester = Requester$new(),
    create_asset = function() NULL,
    url = function(address) "",
    request = function(URL) {
      return(private$requester$request(URL))
    }
  )
)

The BTCRequester and LTCRequester implementations are shown below for completeness, but will not be explained. If you have followed everything so far, they should be easy to understand:

source("./wallet-requester.R")
source("../../asset.R")

BTCRequester <- R6Class(
  "BTCRequester",
  inherit = WalletRequester,
  public = list(
    initialize = function(address) {
      private$address <- address
    },
    assets = function() {
      total <- as.numeric(private$request(private$url()))
      if (total > 0) {
        return(list(private$create_asset(total)))
      }
      return(list())
    }
  ),
  private = list(
    address = "",
    url = function(address) {
      return(paste(
        "https://chainz.cryptoid.info/btc/api.dws",
        "?q=getbalance",
        "&a=",
        private$address,
        sep = ""
      ))
    },
    create_asset = function(total) {
      return(new(
        "Asset",
        email = "",
        timestamp = "",
        name = "Bitcoin",
        symbol = "BTC",
        total = total,
        address = private$address
      ))
    }
  )
)

source("./wallet-requester.R")
source("../../asset.R")

LTCRequester <- R6Class(
  "LTCRequester",
  inherit = WalletRequester,
  public = list(
    initialize = function(address) {
      private$address <- address
    },
    assets = function() {
      total <- as.numeric(private$request(private$url()))
      if (total > 0) {
        return(list(private$create_asset(total)))
      }
      return(list())
    }
  ),
  private = list(
    address = "",
    url = function(address) {
      return(paste(
        "https://chainz.cryptoid.info/ltc/api.dws",
        "?q=getbalance",
        "&a=",
        private$address,
        sep = ""
      ))
    },
    create_asset = function(total) {
      return(new(
        "Asset",
        email = "",
        timestamp = "",
        name = "Litecoin",
        symbol = "LTC",
        total = total,
        address = private$address
      ))
    }
  )
)

The wallet_requester_factory() works just like the other factories; the only difference is that in this case, we have two possible implementations that can be returned, as can be seen in the if statement. If we decided to add a WalletRequester for another cryptocurrency, such as Ether, we could simply add the corresponding branch here, and it should work fine:

source("./btc-requester.R")
source("./ltc-requester.R")

wallet_requester_factory <- function(symbol, address) {
  if (symbol == "BTC") {
    return(BTCRequester$new(address))
  } else if (symbol == "LTC") {
    return(LTCRequester$new(address))
  } else {
    stop("Unknown symbol")
  }
}
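To make that extension concrete, here is a hedged sketch of the factory with a hypothetical EtherRequester branch added. EtherRequester is an assumed class name, not something defined in this tutorial; it would follow the same WalletRequester contract as the two classes above:

source("./btc-requester.R")
source("./ltc-requester.R")
# source("./eth-requester.R")  # hypothetical file defining EtherRequester

wallet_requester_factory <- function(symbol, address) {
  if (symbol == "BTC") {
    return(BTCRequester$new(address))
  } else if (symbol == "LTC") {
    return(LTCRequester$new(address))
  } else if (symbol == "ETH") {
    # assumed implementation following the same interface
    return(EtherRequester$new(address))
  } else {
    stop("Unknown symbol")
  }
}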
Practical Big Data Exploration with Spark and Python

Anant Asthana
06 Jun 2016
6 min read
The reader of this post should be familiar with basic concepts of Spark, such as the shell and RDDs.

Data sizes have increased, but our exploration tools and techniques have not evolved as fast. Traditional Hadoop MapReduce jobs are cumbersome and time consuming to develop. Pig, too, isn't quite as fully featured and easy to work with. Exploration can mean parsing/analyzing raw text documents, analyzing log files, processing tabular data in various formats, and exploring data that may or may not be correctly formatted.

This is where a tool like Spark excels. It provides an interactive shell for quick processing, prototyping, exploring, and slicing and dicing data. Spark works with R, Scala, and Python. In conjunction with Jupyter notebooks, we get a clean web interface to write Python, R, or Scala code backed by a Spark cluster. Jupyter notebook is also a great tool for presenting our findings, since we can do inline visualizations and easily share them as a PDF on GitHub or through a web viewer. The power of this setup is that we make Spark do the heavy lifting while still having the flexibility to test code on a small subset of data via the interactive notebooks.

Another powerful capability of Spark is its Data Frames API. After we have cleaned our data (dealt with badly formatted rows that can't be loaded correctly), we can load it as a Data Frame. Once the data is loaded as a Data Frame, we can use Spark SQL to explore it. Since notebooks can be shared, this is also a great way to let the developers do the work of cleaning the data and loading it as a Data Frame. Analysts, data scientists, and the like can then use this data for their tasks. Data Frames can also be exported as Hive tables, which are commonly used in Hadoop-based warehouses.

Examples: For this section, we will be using examples that I have uploaded on GitHub. These examples can be found here. In addition to the examples, there is also a Docker container for running them. The container runs Spark in a pseudo-distributed mode and has Jupyter notebook configured to run Python/PySpark.

The basics: To set this up in your environment, you need a running Spark cluster with Jupyter notebook installed. Jupyter notebook, by default, only has the Python kernel configured. You can download additional kernels for Jupyter notebook to run R and Scala. To run Jupyter notebook with PySpark, use the following command on your cluster:

IPYTHON_OPTS="notebook --pylab inline --notebook-dir=<directory to store notebooks>" MASTER=local[6] ./bin/pyspark

When you start Jupyter notebook in the way mentioned earlier, it initializes a few critical variables. One of them is the Spark Context (sc), which is used to interact with all Spark-related tasks. The other is sqlContext, which is the Spark SQL context. This is used to interact with Spark SQL (create Data Frames, run queries, and so on).

Log Analysis

In this example, we use a log file from an Apache server. The code for this example can be found here. We load our log file in question using:

log_file = sc.textFile("../data/log_file.txt")

Spark can load files from HDFS, the local filesystem, and S3 natively. Libraries for other storage formats can be found freely on the Internet, or you could write your own formats (a blog post for another time). The previous command loads the log file.
We then use Python's native shlex library to split the file into different fields and use Spark's map command to load them as a Row. An RDD consisting of rows can easily be registered as a DataFrame.

How we arrived at this solution is where data exploration comes in. We use Spark's takeSample method to sample the file and get five rows:

log_file.takeSample(True, 5)

These sample rows are helpful in determining how to parse and load the file. Once we have written our code to load the file, we can apply it to the dataset using map to create a new RDD consisting of Rows, and test the code on a subset of data in a similar manner using the take or takeSample methods. The take method sequentially reads rows from the file, so although it is faster, it may not be a good representation of the dataset. The takeSample method, on the other hand, randomly picks sample rows from the file, which gives a better representation.

To create the new RDD and register it as a DataFrame, we use the following code:

schema_DF = splits.map(create_schema).toDF()

Once we have created the DataFrame and tested it using take/takeSample to make sure that our loading code is working, we can register it as a table using the following:

sqlContext.registerDataFrameAsTable(schema_DF, 'logs')

Once it is registered as a table, we can run SQL queries on the log file:

sample = sqlContext.sql('SELECT * FROM logs LIMIT 10').collect()

Note that the collect() method collects the result into the driver's memory, so this may not be feasible for large datasets. Use take/takeSample instead to sample data if your dataset is large.

The beauty of using Spark with Jupyter is that all this exploration work takes only a few lines of code. It can be written interactively with all the trial and error we needed, the processed data can be easily shared, and running interactive queries on this data is easy. Last but not least, this can easily scale to massive (GB, TB) datasets.

k-means on the Iris dataset

In this example, we use data from the Iris dataset, which contains measurements of sepal and petal length and width. This is a popular open source dataset used to showcase classification algorithms. In this case, we use the k-means algorithm from MLlib, Spark's machine learning library. The code and the output can be found here. We are not going to get into too much detail, since some of the concepts are outside the scope of this blog post. The example showcases how we load the Iris dataset and create a DataFrame with it. We then train a k-means classifier on this dataset, and then we visualize our classification results.

The power of this is that we did a somewhat complex task of parsing a dataset, creating a DataFrame, training a machine learning classifier, and visualizing the data in an interactive and scalable manner.

The repository contains several more examples. Feel free to reach out to me if you have any questions. If you would like to see more posts with practical examples, please let us know.

About the Author

Anant Asthana is a data scientist and principal architect at Pythian, and he can be found on Github at anantasty.
Loops, Conditions, and Recursion

Packt
14 Oct 2016
14 min read
In this article from Paul Johnson, author of the book Learning Rust, we will take a look at loops and conditions, which are a fundamental aspect of any programming language. You may be looping around a list attempting to find when something matches, and when a match occurs, branching out to perform some other task; or you may just want to check a value to see if it meets a condition. In any case, Rust allows you to do this.

In this article, we will cover the following topics:

Types of loop available
Different types of branching within loops
Recursive methods
When the semicolon (;) can be omitted and what it means

Loops

Rust has essentially three types of loop: for, loop, and while.

The for loop

This type of loop is very simple to understand, yet rather powerful in operation. It is simple in that we have a start value, an end condition, and some form of value change; the power comes from those last two points. Let's take a simple example to start with: a loop that goes from 0 to 10 and outputs the value:

for x in 0..10 {
    println!("{},", x);
}

We create a variable x that takes the expression (0..10) and does something with it. In Rust terminology, x is not only a variable but also an iterator, as it gives back a value from a series of elements.

This is obviously a very simple example. We can also count downwards, but the syntax is slightly different. In C, you would expect something akin to for (i = 10; i > 0; --i). This is not available in Rust, at least not in the stable branches. Instead, we will use the rev() method, which is as follows:

for x in (0..10).rev() {
    println!("{},", x);
}

It is worth noting that, as with the C family, the last number is excluded. So the first example outputs the values 0 to 9, while the rev() version generates the same values and outputs them in reverse, from 9 down to 0. Notice also that the range is wrapped in parentheses; this is because rev() is called on the range as a whole.

In C#, this would be the equivalent of a foreach. In Rust, it looks as follows:

for var in condition {
    // do something
}

The C# equivalent for the preceding code is:

foreach(var t in condition)
    // do something

Using enumerate

A loop condition can also be more complex, using multiple conditions and variables. For example, the for loop can be tracked using enumerate. This will keep track of how many times the loop has executed, as shown here:

for (i, j) in (10..20).enumerate() {
    println!("loop has executed {} times. j = {}", i, j);
}

The enumeration is given in the first variable, with the value from the range in the second. This example is not of much use on its own, but where enumerate comes into its own is when looping over an iterator. Say we have an array that we need to iterate over to obtain the values. Here, enumerate can be used to obtain the value of the array members.
However, the value returned in the condition will be a reference, so code such as the one shown in the following example will fail to execute (line is a & reference, whereas an i32 is expected):

fn main() {
    let my_array: [i32; 7] = [1i32, 3, 5, 7, 9, 11, 13];
    let mut value = 0i32;
    for (_, line) in my_array.iter().enumerate() {
        value += line;
    }
    println!("{}", value);
}

The reference can simply be dereferenced, as follows:

for (_, line) in my_array.iter().enumerate() {
    value += *line;
}

The iter().enumerate() method can equally be used with the Vec type, as shown in the following code:

fn main() {
    let my_array = vec![1i32, 3, 5, 7, 9, 11, 13];
    let mut value = 0i32;
    for (_, line) in my_array.iter().enumerate() {
        value += *line;
    }
    println!("{}", value);
}

In both cases, the value given at the end will be 49.

The _ parameter

You may be wondering what the _ parameter is. It tells Rust that there is an argument, but that we'll never do anything with it; it's a throw-away that exists only so the code compiles. The _ parameter cannot be referred to either: whereas we can do something with linenumber in for(linenumber, line), we can't do anything with _ in for(_, line).

The simple loop

The simplest form of loop is called loop:

loop {
    println!("Hello");
}

The preceding code will output Hello either until the application is terminated or until the loop reaches a terminating statement.

While…

The while loop is of slightly more use, as you will see in the following code snippet:

while condition {
    // do something
}

Let's take a look at the following example:

fn main() {
    let mut done = 0u32;
    while done != 32 {
        println!("done = {}", done);
        done += 1;
    }
}

The preceding code will output done = 0 to done = 31. The loop terminates when done equals 32.

Prematurely terminating a loop

Depending on the size of the data being iterated over within a loop, the loop can be costly on processor time. For example, say the server is receiving data from a data-logging application, such as measuring values from a gas chromatograph; over the entire scan, it may record roughly half a million data points with an associated time position.

For our purposes, we want to add all of the recorded values until the value goes over 1.5, and once that is reached, we can stop the loop. Sound easy? There is one thing not mentioned: there is no guarantee that the recorded value will ever go above 1.5, so how can we terminate the loop if that value is never reached? We can do this in one of two ways. The first is to use a while loop and introduce a Boolean to act as the test condition. In the following example, my_array represents a very small subsection of the data sent to the server:

fn main() {
    let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9];
    let mut counter: usize = 0;
    let mut result = 0f32;
    let mut test = false;
    while test != true {
        if my_array[counter] > 1.5 {
            test = true;
        } else {
            result += my_array[counter];
            counter += 1;
        }
    }
    println!("{}", result);
}

The result here is 4.4. This code is perfectly acceptable, if slightly long-winded. Rust also allows the use of the break and continue keywords (if you're familiar with C, they work in the same way).
Our code using break will be as follows:

fn main() {
    let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9];
    let mut result = 0f32;
    for (_, value) in my_array.iter().enumerate() {
        if *value > 1.5 {
            break;
        } else {
            result += *value;
        }
    }
    println!("{}", result);
}

Again, this will give an answer of 4.4, indicating that the two methods are equivalent. If we replace break with continue in the preceding code example, we will get the same result (4.4). The difference between break and continue is that continue jumps to the next value in the iteration rather than jumping out, so if we had the final value of my_array as 1.3, the output at the end would be 5.7. When using break and continue, always keep this difference in mind. While it may not crash the code, confusing break and continue may lead to results that you do not expect or want.

Using loop labels

Rust allows us to label our loops. This can be very useful, for example, with nested loops. These labels act as symbolic names for the loops, and as a loop has a name, we can instruct the application to perform a task on that name. Consider the following simple example:

fn main() {
    'outer_loop: for x in 0..10 {
        'inner_loop: for y in 0..10 {
            if x % 2 == 0 {
                continue 'outer_loop;
            }
            if y % 2 == 0 {
                continue 'inner_loop;
            }
            println!("x: {}, y: {}", x, y);
        }
    }
}

What will this code do? Here, x % 2 == 0 (or y % 2 == 0) means that if the variable divided by two leaves no remainder, then the condition is met and the code in the braces is executed. When x % 2 == 0, that is, when the value of the outer loop is an even number, we tell the application to skip to the next iteration of outer_loop, so only odd values of x reach the inner loop. Likewise, when y % 2 == 0, we tell the application to skip to the next iteration of inner_loop. In this case, the application outputs only the combinations where both x and y are odd.

While this example may seem very simple, it does allow for a great deal of speed when checking data. Let's go back to our previous example of data being sent to the web service. Recall that we have two values: the recorded data and its position in the series; for ease, we will call the latter a data point. Each data point is recorded 0.2 seconds apart; therefore, every 5th data point is 1 second. This time, we want all of the values where the data is greater than 1.5, together with the associated time of that data point, but only at times that fall exactly on a whole second. As we want the code to be understandable and human readable, we can use a loop label on each loop. The following code is not quite correct. Can you spot why? The code compiles, as follows:

fn main() {
    let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9, 1.3, 0.1,
                        1.6, 0.6, 0.9, 1.1, 1.31, 1.49, 1.5, 0.7];
    let my_time = vec![0.2f32, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0,
                       2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8];
    'time_loop: for (_, time_value) in my_time.iter().enumerate() {
        'data_loop: for (_, value) in my_array.iter().enumerate() {
            if *value < 1.5 {
                continue 'data_loop;
            }
            if *time_value % 5f32 == 0f32 {
                continue 'time_loop;
            }
            println!("Data point = {} at time {}s", *value, *time_value);
        }
    }
}

This example is a very good one to demonstrate the correct operator in use. The issue is the if *time_value % 5f32 == 0f32 line. We are taking a float value and using the modulus of another float to see if we end up with 0 as a float.
Comparing any value that is not a string, int, long, or bool type to another is never a good plan, especially if the value is returned by some form of calculation. We also cannot simply use continue on the time loop, so how can we solve this problem?

If you recall, we're using _ instead of a named parameter for the enumeration of the loop. These values are always integers, so if we replace _ with a variable name, we can use % 5 to perform the calculation, and the code becomes:

'time_loop: for (time_enum, time_value) in my_time.iter().enumerate() {
    'data_loop: for (_, value) in my_array.iter().enumerate() {
        if *value < 1.5 {
            continue 'data_loop;
        }
        if time_enum % 5 == 0 {
            continue 'time_loop;
        }
        println!("Data point = {} at time {}s", *value, *time_value);
    }
}

The next problem is that the output isn't correct. The code gives the following:

Data point = 1.7 at time 0.4s
Data point = 1.9 at time 0.4s
Data point = 1.6 at time 0.4s
Data point = 1.5 at time 0.4s
Data point = 1.7 at time 0.6s
Data point = 1.9 at time 0.6s
Data point = 1.6 at time 0.6s
Data point = 1.5 at time 0.6s

The data points are correct, but the time is way out and continually repeats. We still need the continue statement for the data point step, but the time step is incorrect. There are a couple of solutions, but possibly the simplest is to store the data and the time in new vectors and then display the data at the end.

The following code gets closer to what is required:

fn main() {
    let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9, 1.3, 0.1,
                        1.6, 0.6, 0.9, 1.1, 1.31, 1.49, 1.5, 0.7];
    let my_time = vec![0.2f32, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0,
                       2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8];
    let mut my_new_array = vec![];
    let mut my_new_time = vec![];
    'time_loop: for (t, _) in my_time.iter().enumerate() {
        'data_loop: for (v, value) in my_array.iter().enumerate() {
            if *value < 1.5 {
                continue 'data_loop;
            } else {
                if t % 5 != 0 {
                    my_new_array.push(*value);
                    my_new_time.push(my_time[v]);
                }
            }
            if v == my_array.len() {
                break;
            }
        }
    }
    for (m, my_data) in my_new_array.iter().enumerate() {
        println!("Data = {} at time {}", *my_data, my_new_time[m]);
    }
}

We will now get the following output:

Data = 1.7 at time 1.4
Data = 1.9 at time 1.6
Data = 1.6 at time 2.2
Data = 1.5 at time 3.4
Data = 1.7 at time 1.4

Yes, we now have the correct data, but the time starts again. We're close, but it's not right yet. We aren't continuing the time_loop loop, and we also need to introduce a break statement. To trigger the break, we will create a new variable called done. When v, the enumerator for my_array, reaches the length of the vector minus one (that is, the index of the last element), we will change done from false to true. This is then tested outside of data_loop: if done == true, we break out of the loop.
The final version of the code is as follows:

fn main() {
    let my_array = vec![0.6f32, 0.4, 0.2, 0.8, 1.3, 1.1, 1.7, 1.9, 1.3, 0.1,
                        1.6, 0.6, 0.9, 1.1, 1.31, 1.49, 1.5, 0.7];
    let my_time = vec![0.2f32, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0,
                       2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6];
    let mut my_new_array = vec![];
    let mut my_new_time = vec![];
    let mut done = false;
    'time_loop: for (t, _) in my_time.iter().enumerate() {
        'data_loop: for (v, value) in my_array.iter().enumerate() {
            if v == my_array.len() - 1 {
                done = true;
            }
            if *value < 1.5 {
                continue 'data_loop;
            } else {
                if t % 5 != 0 {
                    my_new_array.push(*value);
                    my_new_time.push(my_time[v]);
                } else {
                    continue 'time_loop;
                }
            }
        }
        if done { break; }
    }
    for (m, my_data) in my_new_array.iter().enumerate() {
        println!("Data = {} at time {}", *my_data, my_new_time[m]);
    }
}

Our final output from the code is the expected list of data points, each paired with the correct time.

Recursive functions

The final form of loop to consider is known as a recursive function. This is a function that calls itself until a condition is met. In pseudocode, the function looks like this:

float my_function(i32:a) {
    // do something with a
    if (a != 32) {
        my_function(a);
    } else {
        return a;
    }
}

An actual implementation of a recursive function would look like this:

fn recurse(n: i32) {
    let v = match n % 2 {
        0 => n / 2,
        _ => 3 * n + 1
    };
    println!("{}", v);
    if v != 1 {
        recurse(v)
    }
}

fn main() {
    recurse(25)
}

The idea of a recursive function is very simple, but we need to consider two parts of this code. The first is the let line in the recurse function and what it means:

let v = match n % 2 {
    0 => n / 2,
    _ => 3 * n + 1
};

Another way of writing this is as follows:

let mut v = 0i32;
if n % 2 == 0 {
    v = n / 2;
} else {
    v = 3 * n + 1;
}

In C#, this will equate to the following:

var v = n % 2 == 0 ? n / 2 : 3 * n + 1;

The second part is that the semicolon is not used everywhere. Consider the following example:

fn main() {
    recurse(25)
}

What is the difference between having and not having a semicolon? In Rust, braces delimit blocks, and blocks are themselves expressions; a semicolon closes a statement within a block. Let's see what that means. Consider the following code as an example:

fn main() {
    let x = 5u32;
    let y = {
        let x_squared = x * x;
        let x_cube = x_squared * x;
        x_cube + x_squared + x
    };
    let z = {
        2 * x;
    };
    println!("x is {:?}", x);
    println!("y is {:?}", y);
    println!("z is {:?}", z);
}

We have two different uses of the semicolon. If we look at the let y line first:

let y = {
    let x_squared = x * x;
    let x_cube = x_squared * x;
    x_cube + x_squared + x // no semicolon
};

This code does the following: the code within the braces is processed, and the final line, without the semicolon, is assigned to y. Essentially, this is treated as an inline function that returns the value of the line without the semicolon into the variable.

The second line to consider is for z:

let z = {
    2 * x;
};

Again, the code within the braces is evaluated. In this case, the line ends with a semicolon, so the result is suppressed and the empty value () is assigned to z. When executed, we get the following results:

x is 5
y is 155
z is ()

In the code example, the line within fn main calling recurse gives the same result with or without the semicolon.

Summary

In this article, we've covered the different types of loops that are available within Rust, as well as gained an understanding of when to use a semicolon and what it means to omit it. We have also considered enumeration and iteration over a vector and an array, and how to handle the data held within them.
Reactive Python - Real-time events processing

Xavier Bruhiere
04 Oct 2016
8 min read
A recent trend in programming literature promotes functional programming as a sensible alternative to object-oriented programs for many use cases. This subject feeds many discussions and highlights how important program design is as our applications become more and more complex. Although there might be some seductive intellectual challenge here (because yeah, we love to juggle elegant abstractions), there is also real business value:

Building sustainable, maintainable programs
Decoupling architecture components for proper team work
Limiting bug exposure
Better product iteration

When developers spot an interesting approach to solving a recurrent issue in our industry, they formalize it as a design pattern. Today, we will discuss a powerful member of this family: the observer pattern. We won't dive into the strict rhetorical details (sorry, not sorry). Instead, we will see how reactive programming can level up the quality of our work.

The scene

That was a bold statement; let's illustrate it with a real-world scenario. Say we were tasked to build a monitoring system. We need some way to collect data, analyze it, and take action when things go unexpectedly. Anomaly detection is an exciting yet challenging problem. We don't want our data scientists to be bothered by infrastructure failures. And in the same spirit, we need other engineers to focus only on how to react to specific disaster scenarios.

The core of our approach consists of two components: a monitoring module firing and forgetting its discoveries on channels, and another processing brick intercepting those events with an appropriate response. The UNIX philosophy at its best: do one thing and do it well. We split the infrastructure by concerns and the workers by event types. Assuming that our team defines well-documented interfaces, this is a promising design. The rest of the article will discuss the technical implementation, but keep in mind that I/O documentation and proper load estimation are also fundamental.

The strategy

Our local lab is composed of three elements:

The alert module, which we will emulate with a simple CLI tool that publishes alert messages.
The actual processing unit, subscribing to events it knows how to react to.
A message broker supporting the Publish/Subscribe (or PUBSUB) pattern. For this purpose, Redis offers a popular, efficient, and rock-solid solution; it is highly recommended, but the database isn't designed specifically for this case. NATS, however, presents itself as follows:

NATS acts as a central nervous system for distributed systems such as mobile devices, IoT networks, enterprise microservices and cloud native infrastructure. Unlike traditional enterprise messaging systems, NATS provides an always on 'dial-tone'.

Sounds promising! Client libraries are available for major languages, and Apcera, the company sponsoring the technology, has a solid reputation for building reliable distributed systems. Again, we won't delve into how processing actually happens, only the orchestration of these three moving parts.

The setup

Since NATS is a message broker, we need to run a server locally (version 0.8.0 as of today). Gnatsd is the official and scalable first choice. It is written in Go, so we get performance and a drop-in binary out of the box. For fans of microservices (as I am), an official Docker image is available for pulling.
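For example, something like the following should bring up a local broker. This is a sketch that assumes the image is published as nats on Docker Hub, with 4222 as the client port (the same port the demo server uses below):

$ docker pull nats
$ docker run -d -p 4222:4222 nats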
Also, for lazy ones (as I am), a demo server is already running at nats://demo.nats.io:4222.

Services will use Python 3.5.1, but 2.7.10 should do the job with minimal changes. Our scenario is mostly about data analysis and system administration on the backend, and Python has a wide range of tools for both areas. So let's install the requirements:

$ pip --version
pip 8.1.1
$ pip install -e git+https://github.com/mcuadros/pynats@6851e84eb4b244d22ffae65e9fbf79bd9872a5b3#egg=pynats click==6.6  # for cli integration

That's all. We are now ready to write services.

Publishing events

Let's warm up by sending some alerts to the cloud. First, we need to connect to the NATS server:

# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: broker.py

import pynats

def nats_conn(conf):
    """Connect to nats server from environment variables.

    The point is to allow easy switching without changing the code.
    You can read more on this approach, stolen from 12-factor apps.
    """
    # the default value comes from docker-compose (https://docs.docker.com/compose/) services link behavior
    host = conf.get('__BROKER_HOST__', 'nats')
    port = conf.get('__BROKER_PORT__', 4222)
    opts = {
        'url': conf.get('url', 'nats://{host}:{port}'.format(host=host, port=port)),
        'verbose': conf.get('verbose', False)
    }
    print('connecting to broker ({opts})'.format(opts=opts))
    conn = pynats.Connection(**opts)
    conn.connect()
    return conn

This should be enough to start our client:

# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: observer.py

import os

import broker

def send(channel, msg):
    # use environment variables for configuration
    nats = broker.nats_conn(os.environ)
    nats.publish(channel, msg)
    nats.close()

And right after that, a few lines of code to shape a CLI tool:

#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: __main__.py

import click

import observer

@click.command()
@click.argument('command')
@click.option('--on', default='some_event', help='messages topic name')
def main(command, on):
    if command == 'send':
        click.echo('publishing message')
        observer.send(on, 'Terminator just dropped in our space-time')

if __name__ == '__main__':
    main()

Running chmod +x ./__main__.py gives it execution permission, so we can test how our first bytes are doing:

$ # `click` package gives us a productive cli interface
$ ./__main__.py --help
Usage: __main__.py [OPTIONS] COMMAND

Options:
  --on TEXT  messages topic name
  --help     Show this message and exit.

$ __BROKER_HOST__="demo.nats.io" ./__main__.py send --on=click
connecting to broker ({'verbose': False, 'url': 'nats://demo.nats.io:4222'})
publishing message
...

This is indeed quite poor in feedback, but no exception means that we did connect to the server and published a message.

Reacting to events

We're done with the heavy lifting! Now that interesting events are flying through the Internet, we can catch them and actually provide business value. Don't forget the point: let the team write reactive programs without worrying about how they will be triggered. I found the following snippet to be a readable syntax for such a goal:

# filename: __main__.py

import observer

@observer.On('terminator_detected')
def alert_sarah_connor(msg):
    print(msg.data)

As the capitalized letter of On suggests, this is a Python class wrapping a NATS connection. It aims to call the decorated function whenever a new message goes through the given channel.
Here is a naive implementation, shamefully ignoring any reasonable error handling and safe connection termination (broker.nats_conn would be much more production-ready as a context manager, but hey, we do things that don't scale, move fast, and break things):

# filename: observer.py

class On(object):

    def __init__(self, event_name, **kwargs):
        self._count = kwargs.pop('count', None)
        self._event = event_name
        self._opts = kwargs or os.environ

    def __call__(self, fn):
        nats = broker.nats_conn(self._opts)
        subscription = nats.subscribe(self._event, fn)
        def inner():
            print('waiting for incoming messages')
            nats.wait(self._count)
            # we are done
            nats.unsubscribe(subscription)
            return nats.close()
        return inner

Instill some life into this file from __main__.py:

# filename: __main__.py

@click.command()
@click.argument('command')
@click.option('--on', default='some_event', help='messages topic name')
def main(command, on):
    if command == 'send':
        click.echo('publishing message')
        observer.send(on, 'bad robot detected')
    elif command == 'listen':
        try:
            alert_sarah_connor()
        except KeyboardInterrupt:
            click.echo('caught CTRL-C, cleaning after ourselves...')

Your linter might complain about the injection of the msg argument in alert_sarah_connor, but no offense, it should just work (tm):

$ # In a first terminal, listen to messages
$ __BROKER_HOST__="demo.nats.io" ./__main__.py listen
connecting to broker ({'url': 'nats://demo.nats.io:4222', 'verbose': False})
waiting for incoming messages

$ # And fire up alerts in a second terminal
$ __BROKER_HOST__="demo.nats.io" ./__main__.py send --on='terminator_detected'

The data appears in the first terminal; celebrate!

Conclusion

Reactive programming implemented with the Publish/Subscribe pattern brings a lot of benefits for event-oriented products: modular development, decoupled components, scalable distributed infrastructure, and the single-responsibility principle. One should think about how data flows into the system before diving into the technical details. This kind of approach also gains traction from real-time data processing pipelines (Riemann, Spark, and Kafka). NATS's performance, indeed, allows the development of ultra-low-latency architectures without much deployment overhead.

We covered, in a few lines of Python, the basics of a reactive programming design, with a lot of improvement opportunities: event filtering, built-in instrumentation, and infrastructure-wide error tracing. I hope you found in this article the building blocks to develop upon!

About the author

Xavier Bruhiere is the lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high intensity sports.
The Business Layer (Java EE 7 First Look)

Packt
13 Nov 2013
7 min read
Enterprise JavaBeans 3.2

The Enterprise JavaBeans 3.2 Specification was developed under JSR 345. This section just gives you an overview of the improvements in the API. The complete specification document (for more information) can be downloaded from http://jcp.org/aboutJava/communityprocess/final/jsr345/index.html.

The business layer of an application is the part of the application that is located between the presentation layer and the data access layer. The following diagram presents a simplified Java EE architecture. As you can see, the business layer acts as a bridge between the data access and presentation layers. It implements the business logic of the application. To do so, it can use specifications such as Bean Validation for data validation, CDI for context and dependency injection, interceptors to intercept processing, and so on. As this layer can be located anywhere in the network and is expected to serve more than one user, it needs a minimum of non-functional services such as security, transaction, concurrency, and remote access management. With EJBs, the Java EE platform gives developers the ability to implement this layer without worrying about the different non-functional services that are necessarily required.

In general, this specification does not introduce any major new feature. It continues the work started by the last version, making the implementation of certain obsolete features optional and adding slight modifications to others.

Pruning some features

After the pruning process introduced by Java EE 6 with a view to removing obsolete features, support for some features has been made optional in the Java EE 7 platform, and their description was moved to another document called EJB 3.2 Optional Features for Evaluation. The features involved in this move are:

EJB 2.1 and earlier Entity Bean Component Contract for Container-Managed Persistence
EJB 2.1 and earlier Entity Bean Component Contract for Bean-Managed Persistence
Client View of EJB 2.1 and earlier Entity Bean
EJB QL: Query Language for Container-Managed Persistence Query Methods
JAX-RPC-based Web Service Endpoints
JAX-RPC Web Service Client View

The latest improvements in EJB 3.2

For those who have had to use EJB 3.0 and EJB 3.1, you will notice that EJB 3.2 has brought, in fact, only minor changes to the specification. However, some improvements cannot be overlooked, since they improve the testability of applications, simplify the development of session beans or message-driven beans, and improve control over the management of transactions and the passivation of stateful beans.

Session bean enhancement

A session bean is a type of EJB that allows us to implement business logic accessible to local, remote, or Web Service Client Views. There are three types of session beans: stateless for processing without state, stateful for processes that require the preservation of state between different method calls, and singleton for sharing a single instance of an object between different clients. The following code shows an example of a stateless session bean used to save an entity in the database:

@Stateless
public class ExampleOfSessionBean {

    @PersistenceContext
    EntityManager em;

    public void persistEntity(Object entity) {
        em.persist(entity);
    }
}

Talking about the improvements to session beans, we first note two changes to stateful session beans: the ability to execute life-cycle callback interceptor methods in a user-defined transaction context, and the ability to manually disable the passivation of stateful session beans.
Talking about improvements to session beans, we first note two changes to stateful session beans: the ability to execute life-cycle callback interceptor methods in a user-defined transaction context, and the ability to manually disable passivation of stateful session beans.

It is possible to define a process that must be executed according to the life cycle of an EJB bean (post-construct, pre-destroy). Thanks to the @TransactionAttribute annotation, you can perform processing related to the database during these phases and control how it impacts your system. The following code retrieves an entity after the bean is initialized and ensures that all changes made to the persistence context are sent to the database at the time of destruction of the bean. As you can see in the following code, the TransactionAttributeType of the init() method is NOT_SUPPORTED; this means that the retrieved entity will not be included in the persistence context and any changes made to it will not be saved to the database:

@Stateful
public class StatefulBeanNewFeatures {

    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    EntityManager em;

    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    @PostConstruct
    public void init() {
        entity = em.find(...);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @PreDestroy
    public void destroy() {
        em.flush();
    }
}

The following code demonstrates how to control passivation of a stateful bean. Usually, session beans are removed from memory and stored on disk after a certain period of inactivity. This process requires the data to be serialized, but during serialization all transient variables are skipped and restored to the default value of their data type, which is null for objects, zero for int, and so on. To prevent the loss of this kind of data, you can simply disable passivation of a stateful session bean by passing the value false to the passivationCapable attribute of the @Stateful annotation:

@Stateful(passivationCapable = false)
public class StatefulBeanNewFeatures {
    //...
}

For the sake of simplicity, EJB 3.2 has relaxed the rules for defining the default local or remote business interface of a session bean. The following code shows how the same simple interfaces can be considered local or remote depending on the case:

//In this example, yellow and green are local interfaces
public interface yellow { ... }
public interface green { ... }
@Stateless
public class Color implements yellow, green { ... }

//In this example, yellow and green are local interfaces
public interface yellow { ... }
public interface green { ... }
@Local
@Stateless
public class Color implements yellow, green { ... }

//In this example, yellow and green are remote interfaces
public interface yellow { ... }
public interface green { ... }
@Remote
@Stateless
public class Color implements yellow, green { ... }

//In this example, only the yellow interface is exposed as a remote interface
@Remote
public interface yellow { ... }
public interface green { ... }
@Stateless
public class Color implements yellow, green { ... }

//In this example, only the yellow interface is exposed as a remote interface
public interface yellow { ... }
public interface green { ... }
@Remote(yellow.class)
@Stateless
public class Color implements yellow, green { ... }

EJB Lite improvements

Before EJB 3.1, implementing a Java EE application required a full Java EE server with more than twenty specifications. This could be heavyweight for applications that only need a few of those specifications (as if you were asked to take a hammer to kill a fly). To adapt Java EE to this situation, the JCP (Java Community Process) introduced the concept of profiles and EJB Lite. Specifically, EJB Lite is a subset of EJB, grouping the essential capabilities for local, transactional, and secured processing.
With this concept, it has become possible to unit test an EJB application without using a Java EE server, and it is also possible to use EJBs effectively in web applications or Java SE. In addition to the features already present in EJB 3.1, the EJB 3.2 specification adds support for local asynchronous session bean invocations and a non-persistent EJB Timer Service. This enriches the embeddable EJBContainer and web profiles, and increases the number of features that can be tested in an embeddable EJBContainer. The following code shows an EJB packaged in a WAR archive that contains two methods. The asynchronousMethod() is an asynchronous method that allows you to compare the time gap between the end of a method call on the client side and the end of the method's execution on the server side. The nonPersistentEJBTimerService() method demonstrates how to define a non-persistent EJB Timer Service that will be executed every minute while the hour is one o'clock:

@Stateless
public class EjbLiteSessionBean {

    @Asynchronous
    public void asynchronousMethod() {
        try {
            System.out.println("EjbLiteSessionBean - start : " + new Date());
            Thread.sleep(1000 * 10);
            System.out.println("EjbLiteSessionBean - end : " + new Date());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    @Schedule(persistent = false, minute = "*", hour = "1")
    public void nonPersistentEJBTimerService() {
        System.out.println("nonPersistentEJBTimerService method executed");
    }
}

Changes made to the TimerService API

The EJB 3.2 specification enhanced the TimerService API with a new method called getAllTimers(). This method gives you the ability to access all active timers in an EJB module. The following code demonstrates how to create different types of timers, access their information, and cancel them; it makes use of the getAllTimers() method:

@Stateless
public class ChangesInTimerAPI implements ChangesInTimerAPILocal {

    @Resource
    TimerService timerService;

    public void createTimer() {
        // create a programmatic timer
        long initialDuration = 1000 * 5;
        long intervalDuration = 1000 * 60;
        String timerInfo = "PROGRAMMATIC TIMER";
        timerService.createTimer(initialDuration, intervalDuration, timerInfo);
    }

    @Timeout
    public void timerMethodForProgrammaticTimer() {
        System.out.println("ChangesInTimerAPI - programmatic timer : " + new Date());
    }

    @Schedule(info = "AUTOMATIC TIMER", hour = "*", minute = "*")
    public void automaticTimer() {
        System.out.println("ChangesInTimerAPI - automatic timer : " + new Date());
    }

    public void getListOfAllTimers() {
        Collection<Timer> allTimers = timerService.getAllTimers();
        for (Timer timer : allTimers) {
            System.out.println("The next time out : " + timer.getNextTimeout() + ", "
                    + " timer info : " + timer.getInfo());
            timer.cancel();
        }
    }
}

In addition to this method, the specification has removed the restriction that required references to javax.ejb.Timer and javax.ejb.TimerHandle to be used only inside a bean.
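As an illustration of this relaxed restriction, the following sketch (a hypothetical helper class of our own, not from the specification document) stores a TimerHandle and re-obtains the live Timer from outside the bean:

import java.io.Serializable;
import javax.ejb.Timer;
import javax.ejb.TimerHandle;

public class TimerInspector implements Serializable {

    private final TimerHandle handle;

    public TimerInspector(TimerHandle handle) {
        this.handle = handle;
    }

    public void report() {
        // getTimer() re-obtains the live Timer from its handle;
        // under EJB 3.2 this call no longer has to happen inside a bean
        Timer timer = handle.getTimer();
        System.out.println("Next timeout: " + timer.getNextTimeout()
                + ", info: " + timer.getInfo());
    }
}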

Reactive Python - Asynchronous programming to the rescue, Part 2

Xavier Bruhiere
10 Oct 2016
5 min read
This two-part series explores asynchronous programming with Python using asyncio. In Part 1 of this series, we started by building a project that shows how you can use reactive Python in asynchronous programming. Let's pick it back up here by exploring peer-to-peer communication, touching on service discovery, and then examining the streaming machine-to-machine concept.

Peer-to-peer communication

So far we've established a websocket connection to process clock events asynchronously. Now that one pin swings between 1's and 0's, let's wire up a buzzer and pretend it buzzes on high states (1) and remains silent on low ones (0). We can rephrase that in Python, like so:

# filename: sketches.py
import factory

class Buzzer(factory.FactoryLoop):
    """Buzz on light changes."""

    def setup(self, sound):
        # customize buzz sound
        self.sound = sound

    @factory.reactive
    async def loop(self, channel, signal):
        """Buzzing."""
        behavior = self.sound if signal == '1' else '...'
        self.out('signal {} received -> {}'.format(signal, behavior))
        return behavior

So how do we make them communicate? Since they share a common parent class, we implement a stream method to send arbitrary data and acknowledge reception with, also, arbitrary data. To sum up, we want IOPin to use this API:

class IOPin(factory.FactoryLoop):
    # [ ... ]

    @protocol.reactive
    async def loop(self, channel, msg):
        # [ ... ]
        await self.stream('buzzer', bits_stream)
        return 'acknowledged'

Service discovery

The first challenge to solve is service discovery. We need to target specific nodes within a fleet of reactive workers. This topic, however, goes beyond the scope of this post series. The shortcut below will do the job (that is, hardcode the nodes we will start), while keeping us focused on reactive messaging.

# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: mesh.py

"""Provide nodes network knowledge."""

import websockets

class Node(object):
    def __init__(self, name, socket, port):
        print('[ mesh ] registering new node: {}'.format(name))
        self.name = name
        self._socket = socket
        self._port = port

    def uri(self, path):
        return 'ws://{socket}:{port}/{path}'.format(socket=self._socket,
                                                    port=self._port,
                                                    path=path)

    def connection(self, path=''):
        # instantiate the same connection as the `clock` method
        return websockets.connect(self.uri(path))

# TODO service discovery
def grid():
    """Discover and build nodes network."""
    # of course a proper service discovery should be used here
    # see consul or zookeeper for example
    # note: clock is not a server so it doesn't need a port
    return [
        Node('clock', 'localhost', None),
        Node('blink', 'localhost', 8765),
        Node('buzzer', 'localhost', 8765 + 1)
    ]

Streaming machine-to-machine chat

Let's provide FactoryLoop with the knowledge of the grid and implement an asynchronous communication channel:

# filename: factory.py (continued)
import mesh

class FactoryLoop(object):
    def __init__(self, *args, **kwargs):
        # now every instance will know about the other ones
        self.grid = mesh.grid()
        # ...
    def node(self, name):
        """Search for the given node in the grid."""
        return next(filter(lambda x: x.name == name, self.grid))

    async def stream(self, target, data, channel):
        self.out('starting to stream message to {}'.format(target))
        # use the node websocket connection defined in mesh.py
        # the method is exactly the same as the clock
        async with self.node(target).connection(channel) as ws:
            for partial in data:
                self.out('> sending payload: {}'.format(partial))
                # websockets requires bytes or strings
                await ws.send(str(partial))
                self.out('< {}'.format(await ws.recv()))

We added a few debugging lines to better understand how data flows through the network. Every implementation of FactoryLoop can both react to events and communicate with the other nodes it is aware of.

Wrapping up

Time to update arduino.py and run our cluster of three reactive workers:

@click.command()
# [ ... ]
def main(sketch, **flags):
    # [ ... ]
    elif sketch == 'buzzer':
        sketches.Buzzer(sound='buzz buzz buzz').run(flags['socket'], flags['port'])

Launch three terminals or use a tool such as foreman to spawn multiple processes. Either way, keep in mind that you will need to track the scripts' output.

$ # start IOPin and Buzzer on the same ports we hardcoded in mesh.py
$ ./arduino.py buzzer --port 8766
$ ./arduino.py iopin --port 8765
$ # now that they listen, trigger actions with the clock (targeting IOPin's port)
$ ./arduino.py clock --port 8765
[ ... ]
$ # Profit !

We just saw one worker reacting to a clock and another reacting to randomly generated events. The websocket protocol allowed us to exchange streaming data and receive arbitrary responses, unlocking sophisticated fleet orchestration. While we limited this example to two nodes, a powerful service discovery mechanism could bring to life a distributed network of microservices. By completing this post series, you should now have a better understanding of how to use Python with asyncio for asynchronous programming.

About the author

Xavier Bruhiere is a lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.

Building Ladder Diagram programs (Simple)

Packt
31 Oct 2013
7 min read
(For more resources related to this topic, see here.)

There are several editions of RSLogix 5000 available today, which are similar to Microsoft Windows' home and professional versions. The more "basic" (less expensive) editions of RSLogix 5000 have many features disabled. For example, only the full and professional editions, which are more expensive, support the editing of Function Block Diagrams, Graphical Structured Text, and Sequential Function Charts. In my experience, Ladder Logic is the most commonly used language. Refer to http://www.rockwellautomation.com/rockwellsoftware/design/rslogix5000/orderinginfo.html for more on this.

Getting ready

You will need to have added the cards and tags from the previous recipes to complete this exercise.

How to do it...

1. Open the Controller Organizer and expand the leaf Tasks | Main Tasks | Main Program.
2. Right-click on Main Program and select New Routine as shown in the following screenshot:
3. Configure a new Ladder Logic routine by setting the following values:
Name: VALVES
Description: Valve Control Program
Type: Ladder Diagram
4. For our newly created routine to be executed with each scan of the PLC, we will need to add a reference to it in MainRoutine, which is executed with each scan of the MainTask task. Double-click on our MainRoutine program to display the Ladder Logic contained within it.
5. Next, we will add a Jump To Subroutine (JSR) element that will add our newly created Ladder Diagram routine to the main task and ensure that it is executed with each scan. Above the Ladder Diagram, there are tab buttons that organize Ladder Elements into Element Groups. Click on the left and right arrows on the left side of the Element Groups and find the group labeled Program Control.
6. After clicking on the Program Control element group, you will see the JSR element. Click on the JSR element to add it to the current Ladder Logic Rung in MainRoutine.
7. Next, we will make some modifications to the JSR element so that it calls our newly added Ladder Diagram. Click on the Routine Name parameter of the JSR element and select the VALVES routine from the list as shown in the following screenshot:
8. There are additional parameters that we are not using as part of the JSR element, which can be removed. Select the Input Par parameter and then click on the Remove Parameter icon in the toolbar above the Ladder Diagram. This icon looks as shown in the following screenshot:
9. Repeat this process for the other optional parameter: Return Par.
10. Now that we have ensured that our newly added Ladder Logic routine will be scanned, we can add the elements to it. Double-click on our VALVES routine in the Controller Organizer tab under the MainTask task.
11. Find the Timer/Counter element group and click on the TON (Timer On Delay) element to add it to our Ladder Diagram.
12. Now we will create the Timer object. Enter the name FC1001_TON in the Timer field. Right-click on the TIMER object tag name we just entered and select New "FC1001_TON" (or press Ctrl + W).
13. In the New Tag form that appears, enter the description FAULT TIMER FOR FLOW CONTROL VALVE 1001 and click on OK to create the new TIMER tag.
14. Next, we will configure our TON element to count to five seconds (5,000 milliseconds). Double-click on the Preset parameter and enter the value 5000, which is in milliseconds.
15. Now we will need to add the condition that will start the TIMER object. We will be adding a Less Than (LES) element from the Compare element group.
16. Be sure to add the element to the same Ladder Logic Rung as the Timer On Delay element. The LES element will compare the valve position with the valve set point and return true if the values do not match. Set the two parameters of the LES element to the following:
FC1001_PV
FC1001_SP
17. Now we will add a second Ladder Logic Rung where a latched fault alarm is triggered after the TIMER reaches five seconds. Right-click under the first Ladder Logic Rung and select Add Rung (or press Ctrl + R).
18. Find the Favorites element group and select the Examine On icon as shown in the following screenshot:
19. Click on ? above the Examine On element and select the TIMER object's Done property, FC1001_TON.DN, as shown in the following screenshot. Now, once the valve values are not equal and the TIMER has completed its count to five seconds, this Ladder Logic Rung will be activated as shown in the following screenshot:
20. Next, we will add an Output Latched element to this Ladder Logic Rung. With our new rung selected, click on the Output Latched element from the Favorites element group.
21. Click on ? above the Output Latched element and type in the name of a new base tag we are going to add: FC1001_FLT. Press Enter or click on the element to complete the text entry.
22. Right-click on FC1001_FLT and select New "FC1001_FLT" (or press Ctrl + W). Set the following values in the New Tag form that appears:
Description: FLOW CONTROL VALVE 1001 POSITION FAULT
Type: Base
Scope: FirstController
Data Type: BOOL
23. Click on OK to add the new tag. Our new tag will look like the following screenshot:
24. It is considered bad practice to latch a bit without having the code to unlatch it directly below. Create a new BOOL type tag called ALARM_RESET with the following properties:
Name: ALARM_RESET
Description: RESET ALARMS
Type: Base
Scope: FirstController
Data Type: BOOL
25. Click on OK to add the new tag. Then add the following coil and OTU to unlatch the fault when the master alarm reset is triggered.
26. Finally, we will add a comment so that we can see what our Ladder Diagram is doing at a glance. Right-click in the far-right area of the first Ladder Logic Rung (where the 0 is) and select Edit Rung Comment (Ctrl + D). Enter the following helpful comment:
TRIGGER FAULT IF THE SETPOINT OF THE FLOW CONTROL VALVE 1001 IS NOT EQUAL TO THE VALVE POSITION

How it works...

We have created our first Ladder Logic Diagram and linked it to the MainTask task. Now, each time the task is scanned (executed), our Ladder Logic routine will be run from left to right and top to bottom.
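To visualize the result, here is a rough text sketch of the three rungs we built, using the standard Logix mnemonics (XIC for Examine On, L for Output Latch, U for Output Unlatch); this is only an illustration, not the editor's actual rendering:

Rung 0: |--[LES FC1001_PV FC1001_SP]----[TON FC1001_TON, Preset 5000]--|
Rung 1: |--[XIC FC1001_TON.DN]----------(L FC1001_FLT)-----------------|
Rung 2: |--[XIC ALARM_RESET]------------(U FC1001_FLT)-----------------|

Reading it the way the controller does: while the valve position and set point differ, the LES instruction is true and the timer counts; once the timer's Done bit is set, the fault bit is latched, and it stays latched until the alarm reset rung unlatches it.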
There's more...

More information on Ladder Logic can be found in the Rockwell publication Logix5000 Controllers Ladder Diagram, available at http://literature.rockwellautomation.com/idc/groups/literature/documents/pm/1756-pm008_-en-p.pdf. Ladder Logic is the most commonly used programming language in RSLogix 5000. This recipe describes a few more helpful hints to get you started.

Understanding Ladder Rung statuses

Did you notice the vertical run of e characters (eeeeeee) on the left-hand side of your Ladder Logic Rung? This indicates that an error is present in your Ladder Logic code. After making changes to your controller project, it is good practice to verify your project using the drop-down menu item Logic | Verify | Controller. Once Verify has been run, you will see the error pane appear with any errors that it has detected.

Element help

You can easily get detailed documentation on Ladder Logic elements, Function Block Diagram elements, Structured Text code, and other element types by selecting the element and pressing F1.

Copying and pasting Ladder Logic

Ladder Logic Rungs and elements can be copied and pasted within your ladder routine. Simply select the rung or element you wish to copy and press Ctrl + C. Then, to paste the rung or element, select the location where you would like to paste it and press Ctrl + V.

Summary

This article took a first look at creating new routines using Ladder Diagrams. The reader was introduced to the concept of Tasks and also learned how to link routines. In this article, we learned how to navigate the ladder elements that are available, how to find help on each element, and how to create a simple alarm timer using Ladder Logic.

Resources for Article:

Further resources on this subject:
DirectX graphics diagnostic [Article]
Flash 10 Multiplayer Game: Game Interface Design [Article]
HTML5 Games Development: Using Local Storage to Store Game Data [Article]


Using Sprites for Animation

Packt
03 Oct 2013
6 min read
(For more resources related to this topic, see here.)

Sprites

Let's briefly discuss sprites. In gaming, sprites are usually used for animation sequences; a sprite is a single image in which the individual frames of a character animation are stored. We are going to use sprites in our animations. If you already have knowledge of graphics design, that is an advantage: it lets you define how you want your game to look and how to lay out animation sequences in sprites. You can try out tools such as Sprite Maker to make your own sprites with ease; you can get a copy of Sprite Maker at http://www.spriteland.com/sprites/sprite-maker.zip. The following is a sample animation sprite by Marc Russell, which is available for free at http://opengameart.org/content/gfxlib-fuzed; you can find other open source sprites at http://opengameart.org/content/platformersidescroller-tiles:

The preceding sprite will play the animation of the character moving to the right. The character sequence is well organized using an invisible grid, as shown in the following screenshot:

The grid is 32 x 32; the size of our grid is very important in setting up the quads for our game. A quad in LÖVE is a specific part of an image. Because our sprite is a single image file, quads will be used to specify each of the frames we want to draw per unit time, and they will form the largest part of our animation algorithm.

Animation

The animation algorithm will simply play the sprite like a reel of film; we'll be using a basic technique here, as LÖVE doesn't have an official module for animation. Some members of the LÖVE forum have come up with different libraries to ease the way we play animations.

First of all, let us load our file:

function love.load()
  sprite = love.graphics.newImage "sprite.png"
end

Then we create quads for each part of the sprite by using love.graphics.newQuad(x, y, width, height, sw, sh), where x is the top-left position of the quad along the x axis, y is the top-left position of the quad along the y axis, width is the width of the quad, height is the height of the quad, sw is the sprite's width, and sh is the sprite's height:

love.graphics.newQuad(0, 0, 32, 32, 256, 32)   --- first quad
love.graphics.newQuad(32, 0, 32, 32, 256, 32)  --- second quad
love.graphics.newQuad(64, 0, 32, 32, 256, 32)  --- third quad
love.graphics.newQuad(96, 0, 32, 32, 256, 32)  --- fourth quad
love.graphics.newQuad(128, 0, 32, 32, 256, 32) --- fifth quad
love.graphics.newQuad(160, 0, 32, 32, 256, 32) --- sixth quad
love.graphics.newQuad(192, 0, 32, 32, 256, 32) --- seventh quad
love.graphics.newQuad(224, 0, 32, 32, 256, 32) --- eighth quad

The preceding code can be rewritten in a more concise loop, as shown in the following code snippet:

for i=1,8 do
  love.graphics.newQuad((i-1)*32, 0, 32, 32, 256, 32)
end

As advised by LÖVE, we shouldn't create our quads in the draw() or update() functions, because that would cause the quad data to be repeatedly loaded into memory with every frame, which is bad practice. So what we'll do is pretty simple: we'll load our quad parameters in a table, while love.graphics.newQuad is referenced locally outside the functions.
So the new code will look like the following for the animation in the right direction:

local Quad = love.graphics.newQuad

function love.load()
  sprite = love.graphics.newImage "sprite.png"
  quads = {}
  quads['right'] = {}
  quads['left'] = {}
  for j=1,8 do
    quads['right'][j] = Quad((j-1)*32, 0, 32, 32, 256, 32)
    quads['left'][j] = Quad((j-1)*32, 0, 32, 32, 256, 32)
    -- for the character to face the opposite direction, the quad needs to be
    -- flipped by using the Quad:flip(x, y) method, where x and y are Booleans
    quads.left[j]:flip(true, false) -- flip horizontally: x = true, y = false
  end
end

Now that our animation table is set, it is important that we set a Boolean value for the state of our character. At the start of the game our character is idle, so we set idle to true. Also, there is a fixed number of quads the algorithm should read in order to play our animation; in our case, we have eight quads, so we need a maximum of eight iterations, as shown in the following code snippet:

local Quad = love.graphics.newQuad

function love.load()
  sprite = {}
  sprite.player = love.graphics.newImage("sprite.png")
  sprite.x = 50
  sprite.y = 50
  direction = "right"
  iteration = 1
  max = 8
  idle = true
  timer = 0.1
  quads = {}
  quads['right'] = {}
  quads['left'] = {}
  for j=1,8 do
    quads['right'][j] = Quad((j-1)*32, 0, 32, 32, 256, 32)
    quads['left'][j] = Quad((j-1)*32, 0, 32, 32, 256, 32)
    -- flip the quad horizontally so the character faces the opposite direction
    quads.left[j]:flip(true, false)
  end
end

Now let us update our motion: if a certain key is pressed, the animation should play, and if the key is released, the animation should stop. Also, while the key is pressed, the character should change position. We'll be using the love.keypressed callback function here, as shown in the following code snippet:

function love.update(dt)
  if idle == false then
    timer = timer + dt
    if timer > 0.2 then
      timer = 0.1
      -- the animation plays as the iteration increases, so we increment it here;
      -- we also reset the iteration at the maximum of 8 to keep the animation smooth
      iteration = iteration + 1
      if love.keyboard.isDown('right') then
        sprite.x = sprite.x + 5
      end
      if love.keyboard.isDown('left') then
        sprite.x = sprite.x - 5
      end
      if iteration > max then
        iteration = 1
      end
    end
  end
end

function love.keypressed(key)
  if quads[key] then
    direction = key
    idle = false
  end
end

function love.keyreleased(key)
  if quads[key] and direction == key then
    idle = true
    iteration = 1
    direction = "right"
  end
end

Finally, we can draw our character on the screen. Here we'll be using love.graphics.drawq(image, quad, x, y), where image is the image data, quad will load our quads table, x is the position on the x axis, and y is the position on the y axis:

function love.draw()
  love.graphics.drawq(sprite.player, quads[direction][iteration], sprite.x, sprite.y)
end

So let's package our game and run it to see the magic in action by pressing the left or right navigation key:

Summary

That is all for this article. We have learned how to draw 2D objects on the screen and move the objects in four directions. We have delved into the usage of sprites for animations and how to play these animations with code.
Resources for Article:

Further resources on this subject:
Panda3D Game Development: Scene Effects and Shaders [Article]
Microsoft XNA 4.0 Game Development: Receiving Player Input [Article]
Introduction to Game Development Using Unity 3D [Article]