
Tech Guides - Programming

81 Articles

Modern Go Development

Xavier Bruhiere
06 Nov 2015
8 min read
The Go language indisputably generates a lot of discussion. Bjarne Stroustrup famously said: "There are only two kinds of languages: the ones people complain about and the ones nobody uses." Many developers indeed share their usage retrospectives and the flaws they came to hate: no generics, no official tool for vendoring, built-in methods that break the rules Go's creators want us to endorse. The language ships with a set of principles and a strong philosophy. Yet the Go Gopher is making its way through companies. AWS is releasing its Go SDK, HashiCorp's tools are written in Go, and so are serious databases like InfluxDB or CockroachDB. The language doesn't fit everywhere, but its concurrency model, its cross-platform binary format, and its lightning speed are powerful features. For the curious reader, Texlution digs deeper on why Golang is doomed to succeed.

Go is also intended to be simple. However, one should gain a clear understanding of the language's conventions and data structures before producing efficient code. In this post, we will carefully set up a Go project to introduce a robust starting point for further development.

Tooling

Let's kick off the work with a standard Go project layout. New toys in town try to rethink the way projects are organized, but I like to keep things simple as long as it just works. Assuming familiarity with the Go installation and the GOPATH mess, we can focus on the code's root directory.

```
➜ code tree -L 2
.
├── CONTRIBUTING.md
├── CHANGELOG.md
├── Gomfile
├── LICENCE
├── main.go
├── main_test.go
├── Makefile
├── shippable.yml
├── README.md
├── _bin
│   ├── gocov
│   ├── golint
│   ├── gom
│   └── gopm
└── _vendor
    ├── bin
    ├── pkg
    └── src
```

To begin with, README.md, LICENCE, and CONTRIBUTING.md are the usual important documents for any code expected to be shared or used. Especially with open source, we should care about, and clearly state, what the project does, how it works, and how one can (and cannot) use it.
Writing a changelog is also a smart step in that direction.

Package manager

The package manager is certainly a huge matter of discussion among developers. The community was left to build upon the go get tool, and many solutions have arisen to bring deterministic builds to Go code. While most of them are good enough tools, Godep is the most widely used, but Gom is my personal favorite:

- Simplicity, with explicit declarations and tags:

```
# Gomfile
gom 'github.com/gin-gonic/gin', :commit => '1a7ab6e4d5fdc72d6df30ef562102ae6e0d18518'
gom 'github.com/ogier/pflag', :commit => '2e6f5f3f0c40ab9cb459742296f6a2aaab1fd5dc'
```

- Dependency groups:

```
# Gomfile (continuation)
group :test do
  # testing libraries
  gom 'github.com/franela/goblin', :commit => 'd65fe1fe6c54572d261d9a4758b6a18d054c0a2b'
  gom 'github.com/onsi/gomega', :commit => 'd6c945f9fdbf6cad99e85b0feff591caa268e0db'
  gom 'github.com/drewolson/testflight', :commit => '20e3ff4aa0f667e16847af315343faa39194274a'
  # testing tools
  gom 'golang.org/x/tools/cmd/cover'
  gom 'github.com/axw/gocov', :commit => '3b045e0eb61013ff134e6752184febc47d119f3a'
  gom 'github.com/mattn/goveralls', :commit => '263d30e59af990c5f3316aa3befde265d0d43070'
  gom 'github.com/golang/lint/golint', :commit => '22a5e1f457a119ccb8fdca5bf521fe41529ed005'
  gom 'golang.org/x/tools/cmd/vet'
end
```

- A self-contained project:

```
# install the gom binary
go get github.com/mattn/gom

# ... write the Gomfile ...

# install production and development dependencies in `./_vendor`
gom -test install
```

We just declared and bundled the full requirements under the project's root directory. This approach plays nicely with trendy containers.
```
# we don't even need Go to be installed
# install tooling in ./_bin
mkdir _bin && export PATH=$PATH:$PWD/_bin
docker run --rm -it --volume $PWD/_bin:/go/bin golang go get -u -t github.com/mattn/gom

# assuming the same Gomfile as above
docker run --rm -it --volume $PWD/_bin:/go/bin --volume $PWD:/app -w /app golang gom -test install
```

An application can quickly come to rely on a significant number of external resources. Dependency managers like Gom offer a simple workflow to avoid breaking-change pitfalls - a widespread curse in our fast-paced industry.

Helpers

The ambitious developer in love with productivity can complete this toolbox with powerful editor settings, an automatic fixer, a Go REPL, a debugger, and so on. Despite being young, the language comes with a growing set of tools that help developers produce healthy codebases.

Code

With basic foundations in place, let's develop a micro server powered by Gin, an impressive web framework I have had a great experience with. The code below highlights common best practices one can use as a starter.

```go
// {{ Licence informations }}
// {{ build tags }}

// Package {{ pkg }} does ...
//
// More specifically it ...
package main

import (
	// built-in packages
	"log"
	"net/http"

	// third-party packages
	"github.com/gin-gonic/gin"
	flag "github.com/ogier/pflag"
	// project packages placeholder
)

// Options stores cli flags
type Options struct {
	// Addr is the server's binding address
	Addr string
}

// Hello greets incoming requests.
// Because exported identifiers appear in godoc, they should be documented correctly.
func Hello(c *gin.Context) {
	// follow HTTP REST good practices with an adequate http code and a json-formatted response
	c.JSON(http.StatusOK, gin.H{"hello": "world"})
}

// Handler maps endpoints with callbacks
func Handler() *gin.Engine {
	// gin's default instance provides logging and crash recovery middlewares
	router := gin.Default()
	router.GET("/greeting", Hello)
	return router
}

func main() {
	// parse command line flags
	opts := Options{}
	flag.StringVar(&opts.Addr, "addr", ":8000", "server address")
	flag.Parse()

	if err := Handler().Run(opts.Addr); err != nil {
		// exit with a message and a status code of 1 on errors
		log.Fatalf("error running server: %v\n", err)
	}
}
```

We're going to take a closer look at two important parts this snippet is missing: error handling and the benefits of interfaces.

Errors

One tool we could have mentioned above is errcheck, which checks that you checked errors. While it sometimes produces cluttered code, Go's error handling strategy enforces rigorous development:

- When justified, use errors.New("message") to provide a helpful output.
- If one needs custom arguments to produce a sophisticated message, use fmt.Errorf("math: square root of negative number %g", f).
- For even more specific errors, let's create new ones:

```go
type CustomError struct {
	arg  int
	prob string
}

// Usage: return -1, &CustomError{arg, "can't work with it"}
func (e *CustomError) Error() string {
	return fmt.Sprintf("%d - %s", e.arg, e.prob)
}
```

Interfaces

Interfaces in Go unlock many patterns. In the golden age of components, we can leverage them for API composition and proper testing.
The following example defines a Project structure with a Database attribute.

```go
type Database interface {
	Write(string, string) error
	Read(string) (string, error)
}

type Project struct {
	db Database
}

func main() {
	db := backend.MySQL()
	project := &Project{db: db}
}
```

Project doesn't care about the underlying implementation of the db object it receives, as long as that object implements the Database interface (i.e. implements the Read and Write signatures). Meaning, given a clear contract between components, one can switch the MySQL and Postgres backends without modifying the parent object. Apart from this separation of concerns, we can mock a Database and inject it to avoid heavy integration tests.

Hopefully this tiny, carefully written snippet does not hide too many horrors, and we're going to build it with confidence.

Build

We didn't follow a Test-Driven Development style, but let's catch up with some unit tests. Go provides a full-featured testing package, but we are going to level up the game with a complementary combo. Goblin is a thin framework featuring behavior-driven development, close to the awesome Mocha for Node.js. It also features an integration with Gomega, which brings us fluent assertions. Finally, testflight takes care of managing the HTTP server for pseudo-integration tests.

```go
// main_test.go
package main

import (
	"testing"

	"github.com/drewolson/testflight"
	. "github.com/franela/goblin"
	. "github.com/onsi/gomega"
)

func TestServer(t *testing.T) {
	g := Goblin(t)

	// special hook for gomega
	RegisterFailHandler(func(m string, _ ...int) { g.Fail(m) })

	g.Describe("ping handler", func() {
		g.It("should return ok status", func() {
			testflight.WithServer(Handler(), func(r *testflight.Requester) {
				res := r.Get("/greeting")
				Expect(res.StatusCode).To(Equal(200))
			})
		})
	})
}
```

This combination allows readable tests to produce readable output. Given the crowd of developers who scan tests to understand new code, we added interesting value to the project.
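The Database mocking idea mentioned above can be sketched with a minimal in-memory fake. This is an illustrative sketch, not code from the article: MockDB, NewMockDB, and the map-backed store are hypothetical names introduced here.

```go
package main

import (
	"errors"
	"fmt"
)

// Database is the contract from the article: anything that can Read and Write.
type Database interface {
	Write(string, string) error
	Read(string) (string, error)
}

// MockDB is a map-backed fake satisfying Database, handy for unit tests.
type MockDB struct {
	store map[string]string
}

// NewMockDB returns an empty in-memory store.
func NewMockDB() *MockDB {
	return &MockDB{store: map[string]string{}}
}

// Write records a key/value pair in memory.
func (m *MockDB) Write(key, value string) error {
	m.store[key] = value
	return nil
}

// Read returns a stored value, or an error for a missing key.
func (m *MockDB) Read(key string) (string, error) {
	v, ok := m.store[key]
	if !ok {
		return "", errors.New("key not found: " + key)
	}
	return v, nil
}

// Project only depends on the interface, so the mock injects cleanly.
type Project struct {
	db Database
}

func main() {
	project := &Project{db: NewMockDB()}
	project.db.Write("hello", "world")
	v, err := project.db.Read("hello")
	fmt.Println(v, err)
}
```

Because Project only sees the Database interface, swapping the mock for backend.MySQL() at construction time requires no change to Project itself.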
It would certainly attract even more kudos with a green test suite. The following pipeline of commands tries to validate clean, bug-free, code-smell-free, future-proof, and coffee-maker code.

```
# lint the whole project package
golint ./...

# run tests and produce a cover report
gom test -covermode=count -coverprofile=c.out

# make this report human-readable
gocov convert c.out | gocov report

# push the result to https://coveralls.io/
goveralls -coverprofile=c.out -repotoken=$TOKEN
```

Conclusion

Countless posts conclude this way, but I'm excited to state that we merely scratched the surface of proper Go coding. The language exposes flexible primitives and unique characteristics one will learn the hard way, one experiment after another. Being able to trade a single binary against a package repository address is one such example, as is JavaScript support. This article introduced methods to kick-start Go projects, manage dependencies, and organize code, and offered guidelines and a testing suite. Tweak this opinionated guide to your personal taste, and remember to write simple, testable code.

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker, and Leap Motion. In his spare time he enjoys playing tennis, the violin, and the guitar. You can reach him at @XavierBruhiere.


Application Flow With Generators

Wesley Cho
07 Oct 2015
5 min read
Oftentimes, developers like to fall back to using events to enforce the concept of a workflow: a rigid diagram of business logic that branches according to the application state and/or user choices (or, in pseudo-formal terms, a tree-like uni-directional flow graph). This graph may contain circular flows until the user meets the criteria to continue. One example of this is user authentication for access to an application, where a natural circular logic arises of returning to the login form until the user submits a correct user/password combination - upon login, the application may decide to display a bunch of welcome messages pointing to various pieces of functionality and giving quick explanations.

However, eventing has a problem - it doesn't centralize the high-level business logic with obvious branching, so in an application of moderate complexity, developers may scramble to figure out which callbacks are supposed to trigger when, and in what order. Even worse, the logic can be split across multiple files, creating a multi-file spaghetti that makes it hard to find the meatball (or other preferred food of reader interest). It can be a taxing operation for developers, which in turn hampers productivity.

Enter Generators

Generators are an exciting new feature in ES6 that is mysterious to more inexperienced developers. They allow one to create a natural loop that blocks at each yielded expression, which has some natural applications, such as pagination for infinite scrolling lists like a Facebook newsfeed or Twitter feed. They are currently available in Chrome (and thus io.js, as well as Node.js via --harmony flags) and Firefox, but can be used in other browsers and io.js/Node.js through transpilation of JavaScript from ES6 to ES5 with excellent transpilers such as Traceur or Babel. If you want generator functionality in environments without native support, you can use regenerator.
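The blocking behavior described above can be sketched in a few lines - the pagination flavor here is illustrative, with names invented for this sketch:

```javascript
// A generator pauses at each yield and resumes when next() is called.
function* pages() {
  let page = 0;
  while (true) {
    // execution stops here until the consumer asks for the next page
    yield ++page;
  }
}

const feed = pages();
console.log(feed.next().value); // 1
console.log(feed.next().value); // 2
console.log(feed.next().value); // 3
```

Nothing runs inside the loop between calls to next(), which is what makes this shape a natural fit for "fetch more when the user scrolls" logic.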
One nice feature of generators is that they are a central source of logic, and thus naturally fit the control structure we would like for encapsulating high-level application logic. Here is one example of how one can use generators along with promises:

```javascript
var User = function () { ... };

User.authenticate = function* authenticate() {
  var self = this;
  while (!this.authenticated) {
    yield function login(user, password) {
      return self.api.login(user, password)
        .then(onSuccess, onFailure);
    };
  }
};

function* InitializeApp() {
  yield* User.authenticate();
  yield Router.go('home');
}

var App = InitializeApp();

var Initialize = {
  next: function (step, data) {
    switch (step) {
      case 'login':
        return App.next();
      ...
    }
    ...
  }
};
```

Here we have application logic that first tries to authenticate the user; then we can implement a login method elsewhere in the application:

```javascript
function login(user, password) {
  return App.next().value(user, password)
    .then(function success(data) {
      return Initialize.next('login', data);
    }, function failure(data) {
      return handleLoginError(data);
    });
}
```

Note the role of the Initialize object - it has a next key whose value is a function that determines what to do next in tandem with App, the "instantiation" of a new generator. In addition, it makes use of what we choose to yield: the first generator yields a function, which lets us pass data in to attempt to log in a user. On success, it will set the authenticated flag to true, which in turn will break the user out of the User.authenticate part of the InitializeApp generator and into the next step, which in this case is to route the user to the homepage. In this case, we are blocking the user from navigating to the homepage upon application boot until they complete the authentication step.

Explanation of differences

The important piece in this code is the InitializeApp generator.
Here we have a centralized control structure that clearly displays the high-level flow of the application logic. If one knows that a particular piece needs to be modified due to a bug, such as the authentication piece, it becomes obvious that one must start looking at User.authenticate and any piece of code directly concerned with executing it. This allows the methods to be split off into the appropriate sectors of the codebase - similarly to event listeners and the callbacks fired on reception of an event, except with the additional benefit of seeing what the high-level application state is. If you aren't interested in using generators, this can be replicated with promises as well.

There is a caveat to this approach, though - using generators or promises hardcodes the high-level application flow. It can be modified, but these control structures are not as flexible as events. This does not negate the benefits of using events, but it gives us another powerful tool that helps make long-term maintenance easier when designing an application that may be maintained by multiple developers with no prior knowledge of it.

Conclusion

Generators have many use cases that most developers working with JavaScript are not currently familiar with, and it is worth taking the time to understand how they work. Combining them with existing concepts allows some unique patterns to arise that can be of immense use when architecting a codebase, especially with possibilities such as encapsulating branching logic in a more readable format. This should help reduce mental burden and allow developers to focus on building rich web applications without the overhead of worrying about potentially missing business logic.

About the author

Wesley Cho is a senior frontend engineer at Jiff (http://www.jiff.com/). He has contributed features and bug fixes to numerous libraries in the Angular ecosystem, including AngularJS, Ionic, UI Bootstrap, and UI Router, and reported many issues, as well as authoring several libraries.


5 Go Libraries, Frameworks, and Tools You Need to Know

Julian Ursell
24 Jul 2014
4 min read
Golang is an exciting new language seeing rapid adoption in an increasing number of high-profile domains. Its flexibility, simplicity, and performance make it an attractive option for fields as diverse as web development, networking, cloud computing, and DevOps. Here are five great tools in the thriving ecosystem of Go libraries and frameworks.

Martini

Martini is a web framework that touts itself as "classy web development," offering neat, simplified web application development. It serves static files out of the box, injects existing services in the Go ecosystem smoothly, and is tightly compatible with the HTTP package in the native Go library. Its modular structure and support for dependency injection allow developers to add and remove functionality with ease, and make for extremely lightweight development. Out of all the web frameworks to appear in the community, Martini has made the biggest splash, and it has already amassed a huge following of enthusiastic developers.

Gorilla

Gorilla is a toolkit for web development with Golang and offers several packages to implement all kinds of web functionality, including URL routing, options for cookie and filesystem sessions, and even an implementation of the WebSocket protocol, integrating it tightly with important web development standards.

groupcache

groupcache is a caching library developed as an alternative (or replacement) to memcached, unique to the Go language, which offers lightning-fast data access. It allows developers managing data access requests to vastly improve retrieval time by designating a group of its own peers to distribute cached data. Whereas memcached is prone to producing an overload of database loads from clients, groupcache enables a single load from a huge queue of replicated processes to be multiplexed out to all waiting clients.
Libraries such as groupcache have great value in the Big Data space, as they contribute greatly to the capacity to deliver data in real time anywhere in the world, while minimizing the potential access pitfalls associated with managing huge volumes of stored data.

Doozer

Doozer is another excellent tool in the sphere of system and network administration, which provides a highly available data store used for the coordination of distributed servers. It performs a similar function to coordination technologies such as ZooKeeper, and allows critical data and configurations to be shared seamlessly and in real time across multiple machines in distributed systems. Doozer allows the maintenance of consistent updates about the status of a system across clusters of physical machines, creating visibility into the role each machine plays and coordinating strategies for failover situations. Technologies like Doozer emphasize how effective the Go language is for developing valuable tools which alleviate complex problems within the realm of distributed systems programming and Big Data, where enterprise infrastructures are modeled around the ability to store, harness, and protect mission-critical information.

GoLearn

GoLearn is a new library that enables basic machine learning methods. It currently features several fundamental methods and algorithms, including neural networks, K-means clustering, naïve Bayesian classification, and linear, multivariate, and logistic regressions. The library is still in development, as are a number of standard packages being written to give Go programmers the ability to develop machine learning applications in the language, such as mlgo, bayesian, probab, and neural-go.
Go's continual expansion into new technological spaces such as machine learning demonstrates how powerful the language is for a variety of different use cases, and that the community of Go programmers is starting to generate the kind of development drive seen in other popular general-purpose languages like Python. While libraries and packages are predominantly appearing for web development, we can see support growing for data-intensive tasks and in the Big Data space. Adoption is already skyrocketing, and the next three years will be fascinating to observe as Golang is poised to conquer more and more key territories in the world of technology.


Python’s new asynchronous statements and expressions

Daniel Arbuckle
12 Oct 2015
5 min read
As part of Packt's Python Week, Daniel Arbuckle, author of our Mastering Python video, explains Python's journey from generators, to schedulers, to cooperative multithreading and beyond.

My romance with cooperative multithreading in Python began in December of 2001, with the release of Python version 2.2. That version of Python contained a fascinating new feature: generators. Fourteen years later, generators are old hat to Python programmers, but at the time, they represented a big conceptual improvement. While I was playing with generators, I noticed that they were in essence first-class objects that represented both the code and state of a function call. On top of that, the code could pause an execution and then later resume it. That meant that generators were practically coroutines!

I immediately set out to write a scheduler to execute generators as lightweight threads. I wasn't the only one! While the schedulers that I and others wrote worked, there were some significant limitations imposed on them by the language. For example, back then generators didn't have a send() method, so it was necessary to come up with some other way of getting data from one generator to another. My scheduler got set aside in favor of more productive projects.

Fortunately, that's not where the story ends. With Python 2.5, Guido van Rossum and Phillip J. Eby added the send() method to generators, turned yield into an expression (it had been a statement before), and made several other changes that made it easier and more practical to treat generators as coroutines and combine them into cooperatively scheduled threads. Python 3.3 was changed to include yield from expressions, which didn't make much of a difference to end users of cooperative coroutine scheduling, but made the internals of the schedulers dramatically simpler.

The next step in the story is Python 3.4, which included the asyncio coroutine scheduler and asynchronous I/O package in the Python standard library.
Cooperative multithreading wasn't just a clever trick anymore. It was a tool in everyone's box. All of which brings us to the present, and the recent release of Python 3.5, which includes an explicit coroutine type, distinct from generators; new asynchronous async def, async for, and async with statements; and an await expression that takes the place of yield from for coroutines.

So, why does Python need explicit coroutines and new syntax, if generator-based coroutines had gotten good enough for inclusion in the standard library? The short answer is that generators are primarily for iteration, so using them for something else - no matter how well it works conceptually - introduces ambiguities. For example, if you hand a generator to Python's for loop, it's not going to treat it as a coroutine; it's going to treat it as an iterable.

There's another problem, related to Python's special protocol methods, such as __enter__ and __exit__, which are called by code in the Python interpreter, leaving the programmer with no opportunity to yield from them. That meant that generator-based coroutines were not compatible with various important bits of Python syntax, such as the with statement. A coroutine couldn't be called from anything that was called by a special method, whether directly or indirectly, nor was it possible to wait on a future value. The new changes to Python are meant to address these problems.

So, what exactly are these changes?

async def is used to define a coroutine. Apart from the async keyword, the syntax is almost identical to a normal def statement. The big differences are, first, that coroutines can contain await, async for, and async with syntaxes, and, second, that they are not generators, so they're not allowed to contain yield or yield from expressions. It's impossible for a single function to be both a generator and a coroutine.

await is used to pause the current coroutine until the requested coroutine returns.
In other words, an await expression is just like a function call, except that the called function can pause, allowing the scheduler to run some other thread for a while. If you try to use an await expression outside of an async def, Python will raise a SyntaxError.

async with is used to interface with an asynchronous context manager, which is just like a normal context manager except that instead of __enter__ and __exit__ methods, it has __aenter__ and __aexit__ coroutine methods. Because they're coroutines, these methods can do things like wait for data to come in over the network without blocking the whole program.

async for is used to get data from an asynchronous iterable. Asynchronous iterables have an __aiter__ method, which functions like the normal __iter__ method. __aiter__ should return an object with an __anext__ coroutine method, which can participate in coroutine scheduling and asynchronous I/O before returning the next iterated value, or raising StopAsyncIteration.

This being Python, all of these new features, and the convenience they represent, are 100% compatible with the existing asyncio scheduler. Further, as long as you use the @asyncio.coroutine decorator, your existing asyncio code is also forward compatible with these features without any overhead.
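A minimal sketch can tie the pieces above together - async def, await, and async for driven by the asynchronous-iteration protocol. It runs on Python 3.5.2 and later; Ticker and collect_ticks are illustrative names invented for this sketch:

```python
import asyncio


class Ticker:
    """An asynchronous iterable: __aiter__/__anext__ drive an async for loop."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def __aiter__(self):
        # return the asynchronous iterator (here, the object itself)
        return self

    async def __anext__(self):
        if self.count >= self.limit:
            # signals the end of async iteration, like StopIteration for for loops
            raise StopAsyncIteration
        self.count += 1
        # a real iterator might await network I/O here; sleep(0) just yields control
        await asyncio.sleep(0)
        return self.count


async def collect_ticks(limit):
    # async for pulls values from the asynchronous iterable defined above
    ticks = []
    async for n in Ticker(limit):
        ticks.append(n)
    return ticks


loop = asyncio.new_event_loop()
print(loop.run_until_complete(collect_ticks(3)))  # [1, 2, 3]
loop.close()
```

The await inside __anext__ is the point where the scheduler is free to run other coroutines, which is exactly what the generator-based protocol methods could not offer.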


How do web development frameworks drive changes in programming languages?

Antonio Cucciniello
11 Apr 2017
5 min read
If you have been keeping up with technology lately through news sites like Hacker News and the open source community on GitHub, you've probably noticed that web development frameworks seem to be popping up left and right. It can be overwhelming to follow them all and understand how they are changing the technology world. As developers, we want to be able to catch a web development trend early and improve our skills with the technology that will be used in the future. Let's discuss the effects of web development frameworks and how they drive change in languages.

What is a web development framework?

Before we continue, let's make sure that everyone is on the same page regarding what exactly a web development framework is. It is defined as software made to aid in the creation and development of web applications. These frameworks are usually made to help speed up the common tasks that a web developer runs into. There are client-side frameworks, which help speed up front-end work and help us create dynamic web pages. Some examples would be React.js and Angular.js. There are also server-side web development frameworks that help with creating a server, handling routes, and much more. A few examples of this kind of framework are Express.js, Django, and Rails. Now we know what exactly web development frameworks are, but how do they affect languages?

Effects on languages

Quantity

Stack Overflow recently came out with its Developer Survey for 2017, in which it interviewed 64,000 developers and asked them a wide variety of questions. One question was about their usage of various languages. [Graph: percentage of developers using each language, 2013 vs. 2017]

The graph compares the percentage of developers using each language in 2013 with 2017. Four years may not be a long period of time for other fields, but for software engineers, that is a century.
We can see that the usage of languages such as C#, C, and C++ has been decreasing over these years, while Node.js and Python have increased in usage. After doing some research on the number of prominent web development frameworks that have come out for these languages in the past few years, I noticed a couple of things: C had one framework come out, in 2015, and C++ had four web development frameworks over those years. On the other hand, Python had 17 frameworks aiding its development, and Node.js had nine frameworks in the past three years alone.

What does this all mean? It seems as if the languages that received more web development frameworks - making development easier for engineers - saw an increase in the number of users, while the languages that did not receive as many web development frameworks ended up being used less and less.

Verdict: there must be at least a correlation between the number of web development frameworks and the corresponding language's usage.

Ease of use and quality

Not all of these frameworks are created equal. Some web development frameworks are extremely easy to use when starting out; with others, it can take longer to create your first web application. Time is a factor when learning a new framework (or migrating existing software to it), and if a framework is not easy to pick up, that can be a barrier to its usage. Another factor is previous experience with the language. One reason Node.js has an increasing user base is that it is written in JavaScript: if you have done any basic front-end development, you must have used some JavaScript to make your application. When learning a new web framework, having to learn a new, obscure language that does not have many other uses slows down the transition.
Lastly, if the framework itself does not actually make web development tasks much easier for the engineer, then its language will end up being used less.

Verdict: if a framework is easy to use and speeds up development time, more people will use it and move toward it.

Conclusion

Overall, there are two clear factors of web development frameworks that drive changes in languages. If there are plenty of frameworks, people will be more likely to use the language those frameworks were created for. If the frameworks are simple and really speed up development time, that also increases the language's usage. If you enjoyed this post, share it on Twitter! Leave a comment down below and let me know your thoughts on the subject.

About the author

Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js) from New Jersey. His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using voice. He loves building cool things with software and reading books on self-help and improvement, finance, and entrepreneurship. You can follow him on Twitter (@antocucciniello) and on GitHub.
