
How-To Tutorials - Programming


Eric Evans at Domain-Driven Design Europe 2019 explains the different bounded context types and their relation with microservices

Bhagyashree R
17 Dec 2019
9 min read
The fourth edition of the Domain-Driven Design Europe conference was held early this year, from Jan 31 to Feb 1, in Amsterdam. Eric Evans, known for his book Domain-Driven Design: Tackling Complexity in the Heart of Software, kick-started the conference with a great talk titled "Language in Context". In his keynote, Evans explained some key domain-driven design concepts, including subdomains, context maps, and bounded context. He also introduced some new concepts, including bubble context, quaint context, patch on patch context, and more. He further talked about the relationship between bounded contexts and microservices.

Want to learn domain-driven design concepts in a practical way? Check out our book, Hands-On Domain-Driven Design with .NET Core by Alexey Zimarev. This book will guide you in involving business stakeholders when choosing the software you are planning to build for them. By figuring out the temporal nature of behavior-driven domain models, you will be able to build leaner, more agile, and modular systems.

What is a bounded context?

Domain-driven design is a software development approach that focuses on the business domain or subject area. To solve problems related to that domain, we create domain models, which are abstractions describing selected aspects of the domain. The terminology and concepts related to these models only make sense within a context; in domain-driven design, this is called the bounded context.

Bounded context is one of the most important concepts in domain-driven design. Evans explained that a bounded context is essentially a boundary within which we eliminate any kind of ambiguity. It is a part of the software where particular terms, definitions, and rules apply in a consistent way. Another important property of a bounded context is that developers and other people on the team should be able to easily see that boundary: they should know whether they are inside or outside of it.
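To make the boundary idea concrete, here is a small hypothetical sketch (ours, not from the talk; all names are invented) showing the same term, "Product", modeled differently inside two bounded contexts. Within each boundary the definition is consistent; crossing the boundary requires an explicit translation.

```python
from dataclasses import dataclass

# --- Sales context: a Product is something with a price ---
@dataclass
class SalesProduct:
    sku: str
    name: str
    unit_price: float

# --- Shipping context: a Product is something with weight and volume ---
@dataclass
class ShippingProduct:
    sku: str
    weight_kg: float
    volume_m3: float

def to_shipping(product: SalesProduct, weight_kg: float,
                volume_m3: float) -> ShippingProduct:
    """Explicit translation at the boundary: only the shared identifier
    (the SKU) carries over; everything else is context-specific."""
    return ShippingProduct(sku=product.sku, weight_kg=weight_kg,
                           volume_m3=volume_m3)

book = SalesProduct(sku="DDD-001", name="Hands-On DDD", unit_price=39.99)
parcel = to_shipping(book, weight_kg=0.8, volume_m3=0.002)
```

The point of the sketch is not the classes themselves but the translation function: neither context's model leaks into the other, which is exactly the ambiguity-elimination Evans describes.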
Within this bounded context, we have a canonical context in which we explore different domain models, refine our language, develop a ubiquitous language, and try to focus on the core domain. Evans says that though this is a very "tidy" way of creating software, it is not what we see in reality. "Nothing is that tidy! Certainly, none of the large software systems that I have ever been involved with," he says. He added that though the concept of bounded context has grabbed the interest of many within the community, it is often "misinterpreted." Evans has noticed that teams often confuse bounded contexts with subdomains. The reason behind this confusion is that in an "ideal" scenario the two should coincide. Also, large corporations are known for reorganizations that lead to changes in processes and responsibilities. This can result in two teams having to work in the same bounded context, with an increased risk of ending up with a "big ball of mud."

The different ways of describing bounded contexts

In their paper Big Ball of Mud, Brian Foote and Joseph Yoder describe the big ball of mud as "a haphazardly structured, sprawling, sloppy, duct-tape and baling wire, spaghetti code jungle." Some of the properties Evans uses to describe it are incomprehensible interdependencies, inconsistent definitions, incomplete coverage, and riskiness to change. Needless to say, you want to avoid the big ball of mud at all costs. However, if you find yourself in such a situation, Evans says that rebuilding the system from the ground up is not an ideal solution. Instead, he suggests something called the bubble context, in which you create a new model that works well next to the already existing models. While the business is run by the big ball of mud, you can do an elegant design within that bubble.

Another context Evans explained was the mature productive context.
It is the part of the software that is producing value but is probably built on concepts in the core domain that are outdated. He explained this context with the example of a garden: a "tidy young garden" that has recently been planted looks great, but you do not get much value from it. Only a few months later, when the plants begin to bear fruit, do you get the harvest. Along similar lines, developers should plant seeds with the goal of creating order, but also embrace the chaotic abundance that comes with a mature system.

Evans coined another term, quaint context, for a context that one would consider "legacy." He describes it as an old context that still does useful work but is implemented using old-fashioned technology or is not aligned with the current domain vision. Another name he suggests is patch on patch context, which also does something useful as it stands, but whose numerous interdependencies "make change risky and expensive." Apart from these, there are many other types of context that we do not explicitly label.

When you are establishing a boundary, it is good practice to analyze the different subdomains and check which ones are generic and which are specific to the business. Here he introduced the generic subdomain context. "Generic here means something that everybody does, or a great range of businesses and so forth do. There's nothing special about our business and we want to approach this in a conventional way. And to do that, the best way, I believe, is to have a context, a boundary in which we address that problem," he explains. Another generic context Evans mentioned was generic off-the-shelf (OTS), which can make setting the boundary easier, as you are getting something off the shelf.

Bounded context types in the microservice architecture

Evans sees microservices as the biggest opportunity, and the biggest risk, the software engineering community has had in a long time.
Looking at the hype around microservices, it is tempting to jump on the bandwagon, but Evans suggests that it is important to look at the capabilities microservices provide us to meet the needs of the business. A common misconception is that a microservice is a bounded context, which Evans calls an oversimplification. He shared four kinds of context that involve microservices:

Service internal

The first is the service internal context, which describes how a service actually works. Evans believes this is the type of context people think of when they say a microservice is a bounded context. In this context, a service is isolated from other services and handled by an autonomous team. Though this definitely fits the definition of a bounded context, it is not the only aspect of microservices, Evans notes. If we used only this type, we would end up with a bunch of services that don't know how to interact with each other.

API of service

The API of service context describes how a service talks to other services. In this context as well, an API is built by an autonomous team, and anyone consuming the API is required to conform to it. This implies that development decisions are pretty much dictated by the direction of data flow; however, Evans thinks there are alternatives. Highly influential groups may create an API that other teams must conform to irrespective of the direction data is flowing.

Cluster of codesigned services

The cluster of codesigned services context refers to a cluster of services designed in close collaboration. Here, the bounded context consists of a cluster of services designed to work with each other to accomplish some task. Evans remarks that the internals of the individual services can be very different from the models used in the API.

Interchange context

The final type is the interchange context. According to Evans, the interaction between services must also be modeled.
The model describes the messages and definitions to use when services interact with each other. He notes that there are no services in this context: it is all about messages, schemas, and protocols.

How legacy systems can participate in a microservices architecture

Coming back to legacy systems and how they can participate in a microservices environment, Evans introduced a new concept called the exposed legacy asset. He suggests creating an interface that looks like a microservice and interacts with other microservices, but internally talks to a legacy system. This helps us avoid corrupting the new microservices we build and also keeps us from having to change the legacy system.

In the end, looking back at the 15 years since his book Domain-Driven Design was published, he said that we may now need a new definition of domain-driven design. A challenge he sees is how tight this definition should be. He believes a definition should convey a common vision and language, but also be flexible enough to encourage innovation and improvement. He doesn't want domain-driven design to become a club of happy members. He instead hopes for an intellectually honest community of practitioners who are "open to the possibility of being wrong about things." If you took the domain-driven design route and failed at some point, it is important to question and reexamine. Finally, he summarized domain-driven design as a set of guiding principles and heuristics. The key principles are focusing on the core domain, exploring models in a creative collaboration of domain experts and software experts, and speaking a ubiquitous language within a bounded context.

"Let's practice DDD together, shake it up and renew," he concludes.

If you want to put these and other domain-driven design principles into practice, grab a copy of our book, Hands-On Domain-Driven Design with .NET Core by Alexey Zimarev.
This book will help you discover and resolve domain complexity together with business stakeholders and avoid common pitfalls when creating the domain model. You will further study the concepts of bounded context and aggregate, and much more.

The strange relationship between objects, functions, generators and coroutines

Packt
19 Aug 2015
20 min read
In this article, I'd like to investigate some relationships between functions, objects, generators and coroutines in Python. At a theoretical level, these are very different concepts, but because of Python's dynamic nature, many of them can appear to be used interchangeably. I discuss useful applications of all of these in my book, Python 3 Object-oriented Programming - Second Edition. In this essay, we'll examine their relationship in a more whimsical light; most of the code examples below are ridiculous and should not be attempted in a production setting!

Let's start with functions, which are simplest. A function is an object that can be executed. When executed, the function is entered at one place, accepting a group of (possibly zero) objects as parameters. The function exits at exactly one place and always returns a single object. Already we see some complications; that last sentence is true, but you might have several counter-questions:

- What if a function has multiple return statements? Only one of them will be executed in any one call to the function.
- What if the function doesn't return anything? Then it will implicitly return the None object.
- Can't you return multiple objects separated by a comma? Yes, but the returned object is actually a single tuple.

Here's a look at a function:

```python
def average(sequence):
    avg = sum(sequence) / len(sequence)
    return avg

print(average([3, 5, 8, 7]))
```

which outputs:

    5.75

That's probably nothing you haven't seen before. Similarly, you probably know what an object and a class are in Python. I define an object as a collection of data and associated behaviors. A class represents the "template" for an object. Usually the data is represented as a set of attributes and the behavior is represented as a collection of method functions, but this doesn't have to be true.
Here's a basic object:

```python
class Statistics:
    def __init__(self, sequence):
        self.sequence = sequence

    def calculate_average(self):
        return sum(self.sequence) / len(self.sequence)

    def calculate_median(self):
        length = len(self.sequence)
        is_middle = int(not length % 2)
        return (self.sequence[length // 2 - is_middle]
                + self.sequence[-length // 2]) / 2

statistics = Statistics([5, 2, 3])
print(statistics.calculate_average())
```

which outputs:

    3.3333333333333335

This object has one piece of data attached to it: the sequence. It also has two methods besides the initializer. Only one of these methods is used in this particular example, but as Jack Diederich said in his famous Stop Writing Classes talk, a class with only one function besides the initializer should just be a function. So I included a second one to make it look like a useful class. (It's not. The statistics module, new in Python 3.4, should be used instead. Never define for yourself that which has been defined, debugged, and tested by someone else.)

Classes like this are also things you've seen before, but with this background in place, we can now look at some bizarre things you might not expect (or indeed, want) to be able to do with a function. For example, did you know that functions are objects? In fact, anything that you can interact with in Python is defined in the source code for the CPython interpreter as a PyObject structure. This includes functions, objects, basic primitives, containers, classes, modules, you name it. This means we can attach attributes to a function just as with any standard object. Ah, but if functions are objects, can you attach functions to functions?
Don't try this at home (and especially don't do it at work):

```python
def statistics(sequence):
    statistics.sequence = sequence
    return statistics

def calculate_average():
    return sum(statistics.sequence) / len(statistics.sequence)

statistics.calculate_average = calculate_average

print(statistics([1, 5, 8, 4]).calculate_average())
```

which outputs:

    4.5

This is a pretty crazy example (but we're just getting started). The statistics function is being set up as an object that has two attributes: sequence, a list, and calculate_average, another function object. For fun, the function returns itself so that the print call can invoke calculate_average all in one line. Note that the statistics function here is an object, not a class. Rather than emulating the Statistics class in the previous example, it is more similar to the statistics instance of that class.

It is hard to imagine any reason you would want to write code like this in real life. Perhaps it could be used to implement the Singleton (anti-)pattern popular in some other languages. Since there can only ever be one statistics function, it is not possible to create two distinct instances with two distinct sequence attributes the way we can with the Statistics class. There is generally little need for such code in Python, though, because of its 'consenting adults' nature.

We can more closely simulate a class by using a function like a constructor:

```python
def Statistics(sequence):
    def self():
        return self.average()
    self.sequence = sequence
    def average():
        return sum(self.sequence) / len(self.sequence)
    self.average = average
    return self

statistics = Statistics([2, 1, 1])
print(Statistics([1, 4, 6, 2]).average())
print(statistics())
```

which outputs:

    3.25
    1.3333333333333333

That looks an awful lot like JavaScript, doesn't it? The Statistics function acts like a constructor that returns an object (that happens to be a function, named self).
That function object has had a couple of attributes attached to it, so our function is now an object with both data and behavior. The last three lines show that we can instantiate two separate Statistics "objects" just as if it were a class. Finally, since the statistics object in the last line really is a function, we can even call it directly. It proxies the call through to the average function (or is it a method at this point? I can't tell anymore) defined on itself.

Before we go on, note that this simulated overlap in functionality does not mean that we are getting exactly the same behavior out of the Python interpreter. While functions are objects, not all objects are functions. The underlying implementation is different, and if you try to do this in production code, you'll quickly find confusing anomalies.

In normal code, the fact that functions can have attributes attached to them is rarely useful. I've seen it used for interesting diagnostics or testing, but it's generally just a hack. However, knowing that functions are objects allows us to pass them around to be called at a later time. Consider this basic partial implementation of an observer pattern:

```python
class Observers(list):
    register_observer = list.append

    def notify(self):
        for observer in self:
            observer()

observers = Observers()

def observer_one():
    print('one was called')

def observer_two():
    print('two was called')

observers.register_observer(observer_one)
observers.register_observer(observer_two)

observers.notify()
```

which outputs:

    one was called
    two was called

In the second line, I've intentionally reduced the comprehensibility of this code to conform to my initial 'most of the examples in this article are ridiculous' thesis. This line creates a new class attribute named register_observer which points to the list.append function.
Since the Observers class inherits from the list class, this line essentially creates a shortcut to a method that would look like this:

```python
def register_observer(self, item):
    self.append(item)
```

And this is how you should do it in your code. Nobody's going to understand what's going on if you follow my version. The part of this code that you might want to use in real life is the way the callback functions are passed into the registration function near the end. Passing functions around like this is quite common in Python. The alternative, if functions were not objects, would be to create a bunch of classes that have a single method with an uninformative name like execute and pass those around instead.

Observers are a bit too useful, though, so in the spirit of keeping things ridiculous, let's make a silly function that returns a function object:

```python
def silly():
    print("silly")
    return silly

silly()()()()()()
```

which outputs:

    silly
    silly
    silly
    silly
    silly
    silly

Since we've seen some ways that functions can (sort of) imitate objects, let's now make an object that can behave like a function:

```python
class Function:
    def __init__(self, message):
        self.message = message

    def __call__(self, name):
        return "{} says '{}'".format(name, self.message)

function = Function("I am a function")
print(function('Cheap imitation function'))
```

which outputs:

    Cheap imitation function says 'I am a function'

I don't use this feature often, but it can be useful in a few situations. For example, if you write a function and call it from many different places, but later discover that it needs to maintain some state, you can change the function to an object and implement the __call__ method without changing all the call sites. Or, if you have a callback implementation that normally passes functions around, you can use a callable object when you need to store more complicated state. I've also seen Python decorators made out of objects when additional state or behavior is required.

Now, let's talk about generators.
As you might expect by now, we'll start with the silliest way to implement generation code. We can use the idea of a function that returns an object to create a rudimentary generator-ish object that calculates the Fibonacci sequence:

```python
def FibFunction():
    a = b = 1
    def next():
        nonlocal a, b
        a, b = b, a + b
        return b
    return next

fib = FibFunction()
for i in range(8):
    print(fib(), end=' ')
```

which outputs:

    2 3 5 8 13 21 34 55

This is a pretty wacky thing to do, but the point is that it is possible to build functions that are able to maintain state between calls. The state is stored in the surrounding closure; we access it by referencing the variables with Python 3's nonlocal keyword. It is kind of like global, except it accesses the state from the surrounding function rather than the global namespace.

We can, of course, build a similar construct using classic (or classy) object notation:

```python
class FibClass():
    def __init__(self):
        self.a = self.b = 1

    def __call__(self):
        self.a, self.b = self.b, self.a + self.b
        return self.b

fib = FibClass()
for i in range(8):
    print(fib(), end=' ')
```

which outputs:

    2 3 5 8 13 21 34 55

Of course, neither of these obeys the iterator protocol. No matter how I wrangled it, I was not able to get FibFunction to work with Python's builtin next() function, even after looking through the CPython source code for a couple of hours. As I mentioned earlier, using the function syntax to build pseudo-objects quickly leads to frustration. However, it's easy to tweak the object-based FibClass to fulfill the iterator protocol:

```python
class FibIterator():
    def __init__(self):
        self.a = self.b = 1

    def __next__(self):
        self.a, self.b = self.b, self.a + self.b
        return self.b

    def __iter__(self):
        return self

fib = FibIterator()
for i in range(8):
    print(next(fib), end=' ')
```

which outputs:

    2 3 5 8 13 21 34 55

This class is a standard implementation of the iterator pattern. But it's kind of ugly and verbose. Luckily, we can get the same effect in Python with a function that includes a yield statement to construct a generator.
Here's the Fibonacci sequence as a generator:

```python
def FibGenerator():
    a = b = 1
    while True:
        a, b = b, a + b
        yield b

fib = FibGenerator()
for i in range(8):
    print(next(fib), end=' ')
print('\n', fib)
```

which outputs:

    2 3 5 8 13 21 34 55
     <generator object FibGenerator at 0x...>

The generator version is a bit more readable than the other two implementations. The thing to pay attention to here is that a generator is not a function. The FibGenerator function returns an object, as illustrated by the words "generator object" in the last line of output above. Unlike a normal function, a generator function does not execute any of the code inside it when we call it. Instead, it constructs a generator object and returns that. You could think of this as an implicit Python decorator: the interpreter sees the yield keyword and wraps the function in a decorator that returns an object instead. To start the function code executing, we have to use the next function (either explicitly, as in the examples, or implicitly by using a for loop or yield from).

While a generator is technically an object, it is often convenient to think of the function that creates it as a function that can have data passed in at one place and can return values multiple times. It's sort of like a generic version of a function (which can have data passed in at one place and return a value at only one place). It is easy to make a generator that behaves not completely unlike a function, by yielding only one value:

```python
def average(sequence):
    yield sum(sequence) / len(sequence)

print(next(average([1, 2, 3])))
```

which outputs:

    2.0

Unfortunately, the call in the last line is less readable than a normal function call, since we have to throw that pesky next() in there. The obvious way around this would be to add a __call__ method to the generator, but this fails, as does attribute assignment or inheritance. There are optimizations that make generators run quickly in C code, and these also prevent us from assigning attributes to them.
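Both of those claims are easy to verify. Here is a small sketch of ours (not from the original article) showing that a generator function's body does not run until the generator is advanced, and that generator objects reject attribute assignment:

```python
def shout():
    print("executing!")      # runs only once the generator is advanced
    yield "hello"

gen = shout()                # nothing is printed here: the body has not started

try:
    gen.average = lambda: 0  # generators have no instance dictionary
except AttributeError:
    print("can't attach attributes to a generator")

print(next(gen))             # now the body runs: prints "executing!", then "hello"
```

The AttributeError is the C-level optimization at work: unlike a plain function, a generator object gives you nowhere to hang extra state.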
We can, however, wrap the generator in a function-like object using a ludicrous decorator:

```python
def gen_func(func):
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        return next(gen)
    return wrapper

@gen_func
def average(sequence):
    yield sum(sequence) / len(sequence)

print(average([1, 6, 3, 4]))
```

which outputs:

    3.5

Of course, this is an absurd thing to do. I mean, just write a normal function, for pity's sake! But taking this idea a little further, it could be tempting to create a slightly different wrapper:

```python
def callable_gen(func):
    class CallableGen:
        def __init__(self, *args, **kwargs):
            self.gen = func(*args, **kwargs)

        def __next__(self):
            return self.gen.__next__()

        def __iter__(self):
            return self

        def __call__(self):
            return next(self)
    return CallableGen

@callable_gen
def FibGenerator():
    a = b = 1
    while True:
        a, b = b, a + b
        yield b

fib = FibGenerator()
for i in range(8):
    print(fib(), end=' ')
```

which outputs:

    2 3 5 8 13 21 34 55

To completely wrap the generator, we'd need to proxy a few other methods through to the underlying generator, including send, close, and throw. This generator wrapper can be used to call a generator any number of times without calling the next function. I've been tempted to do this to make my code look cleaner when there are a lot of next calls in it, but I recommend not yielding to this temptation. Coders reading your code, including yourself, will go berserk trying to figure out what that "function call" is doing. Just get used to the next function and ignore this decorator business.

So we've drawn some parallels between generators, objects and functions. Let's talk now about one of the more confusing concepts in Python: coroutines. In Python, coroutines are usually defined as "generators that you can send values into". At an implementation level, this is probably the most sensible definition.
In the theoretical sense, however, it is more accurate to define coroutines as constructs that can accept values at one or more locations and return values at one or more locations. Therefore, while in Python it is easy to think of a generator as a special type of function that has yield statements, and a coroutine as a special type of generator that we can send data into at different points, a better taxonomy is to think of the coroutine, which can accept and return values at multiple locations, as the general case, and generators and functions as special types of coroutines that are restricted in where they can accept or return values.

So let's see a coroutine:

```python
def LineInserter(lines):
    out = []
    for line in lines:
        to_append = yield line
        out.append(line)
        if to_append is not None:
            out.append(to_append)
    return out

emily = """I died for beauty, but was scarce
Adjusted in the tomb,
When one who died for truth was lain
In an adjoining room.
He questioned softly why I failed?
“For beauty,” I replied.
“And I for truth,—the two are one;
We brethren are,” he said.
And so, as kinsmen met a night,
We talked between the rooms,
Until the moss had reached our lips,
And covered up our names.
"""

inserter = LineInserter(iter(emily.splitlines()))
count = 1
try:
    line = next(inserter)
    while True:
        line = next(inserter) if count % 4 else inserter.send('-------')
        count += 1
except StopIteration as ex:
    print('\n' + '\n'.join(ex.value))
```

which outputs the poem with a ruler inserted after each four-line stanza:

    I died for beauty, but was scarce
    Adjusted in the tomb,
    When one who died for truth was lain
    In an adjoining room.
    -------
    He questioned softly why I failed?
    “For beauty,” I replied.
    “And I for truth,—the two are one;
    We brethren are,” he said.
    -------
    And so, as kinsmen met a night,
    We talked between the rooms,
    Until the moss had reached our lips,
    And covered up our names.
    -------

The LineInserter object is called a coroutine rather than a generator only because the yield statement is placed on the right side of an assignment operator. Now, whenever it yields a line, the coroutine stores any value that might have been sent back into it in the to_append variable. As you can see in the driver code, we can send a value back in using inserter.send. If you instead just use next, the to_append variable gets a value of None. Don't ask me why next is a function and send is a method when they both do nearly the same thing!
In this example, we use the send call to insert a ruler every four lines to separate stanzas in Emily Dickinson's famous poem. But I used the exact same coroutine in the program that parses the source file for this article. It checks whether any line contains the string !#python, and if so, it executes the subsequent code block and inserts the output (see the 'which outputs' lines throughout this article) into the article. Coroutines can provide that little extra something when normal 'one way' iteration doesn't quite cut it.

The coroutine in the last example is really nice and elegant, but I find the driver code a bit annoying. Maybe it's just me, but something about the indentation of a try…except statement always frustrates me. Recently, I've been emulating Python 3.4's contextlib.suppress context manager to replace except clauses with a callback. For example:

```python
from contextlib import contextmanager

def LineInserter(lines):
    out = []
    for line in lines:
        to_append = yield line
        out.append(line)
        if to_append is not None:
            out.append(to_append)
    return out

@contextmanager
def generator_stop(callback):
    try:
        yield
    except StopIteration as ex:
        callback(ex.value)

def lines_complete(all_lines):
    print('\n' + '\n'.join(all_lines))

emily = """I died for beauty, but was scarce
Adjusted in the tomb,
When one who died for truth was lain
In an adjoining room.
He questioned softly why I failed?
“For beauty,” I replied.
“And I for truth,—the two are one;
We brethren are,” he said.
And so, as kinsmen met a night,
We talked between the rooms,
Until the moss had reached our lips,
And covered up our names.
"""

inserter = LineInserter(iter(emily.splitlines()))
count = 1
with generator_stop(lines_complete):
    line = next(inserter)
    while True:
        line = next(inserter) if count % 4 else inserter.send('-------')
        count += 1
```

which outputs the same stanza-separated poem as before. The generator_stop context manager now encapsulates all the ugliness, and it can be used in a variety of situations where StopIteration needs to be handled.
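To illustrate that reuse with a different generator, here is a small sketch of ours (not from the original article; the countdown generator is invented) that uses the same generator_stop context manager, repeated here so the snippet is self-contained, to capture a generator's return value:

```python
from contextlib import contextmanager

# Same context manager as in the article, repeated for self-containment.
@contextmanager
def generator_stop(callback):
    try:
        yield
    except StopIteration as ex:
        callback(ex.value)

# A hypothetical generator that yields values and also *returns* one;
# the return value rides along in the StopIteration exception.
def countdown(n):
    while n:
        yield n
        n -= 1
    return 'lift off'

gen = countdown(3)
with generator_stop(print):
    while True:
        print(next(gen))
```

This prints 3, 2 and 1, and then the callback receives the generator's return value, 'lift off', without any visible try…except in the driver code.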
Since coroutines are undifferentiated from generators, they can emulate functions just as we saw with generators. We can even call into the same coroutine multiple times as if it were a function:

```python
def IncrementBy(increment):
    sequence = yield
    while True:
        sequence = yield [i + increment for i in sequence]

sequence = [10, 20, 30]
increment_by_5 = IncrementBy(5)
increment_by_8 = IncrementBy(8)
next(increment_by_5)
next(increment_by_8)
print(increment_by_5.send(sequence))
print(increment_by_8.send(sequence))
print(increment_by_5.send(sequence))
```

which outputs:

    [15, 25, 35]
    [18, 28, 38]
    [15, 25, 35]

Note the two bare calls to next after the coroutines are constructed. These effectively "prime" each generator by advancing it to the first yield statement. After that, each call to send essentially looks like a single call to a function. The driver code for this coroutine doesn't look anything like calling a function, but with some evil decorator magic, we can make it look less disturbing:

```python
def evil_coroutine(func):
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        next(gen)
        def gen_caller(arg=None):
            return gen.send(arg)
        return gen_caller
    return wrapper

@evil_coroutine
def IncrementBy(increment):
    sequence = yield
    while True:
        sequence = yield [i + increment for i in sequence]

sequence = [10, 20, 30]
increment_by_5 = IncrementBy(5)
increment_by_8 = IncrementBy(8)
print(increment_by_5(sequence))
print(increment_by_8(sequence))
print(increment_by_5(sequence))
```

which outputs:

    [15, 25, 35]
    [18, 28, 38]
    [15, 25, 35]

The decorator accepts a function and returns a new wrapper function that gets assigned to the IncrementBy name. Whenever this new IncrementBy is called, it constructs a generator using the original function and advances it to the first yield statement using next (the priming action from before). It then returns a new function that calls send on the generator each time it is called. This function makes its argument default to None, so that it also works like a call to next when no value is passed.
The new driver code is definitely more readable, but once again, I would not recommend using this coding style to make coroutines behave like hybrid object/functions. The argument that other coders aren't going to understand what is going through your head still stands. Plus, since send can only accept one argument, the callable is quite restricted.

Before we leave our discussion of the bizarre relationships between these concepts, let's look at how the stanza processing code could look without an explicit coroutine, generator, function, or object:

```python
emily = """I died for beauty, but was scarce
Adjusted in the tomb,
When one who died for truth was lain
In an adjoining room.
He questioned softly why I failed?
“For beauty,” I replied.
“And I for truth,—the two are one;
We brethren are,” he said.
And so, as kinsmen met a night,
We talked between the rooms,
Until the moss had reached our lips,
And covered up our names.
"""

for index, line in enumerate(emily.splitlines(), start=1):
    print(line)
    if not index % 4:
        print('------')
```

which outputs the poem with a ruler printed after each four-line stanza.

This code is so simple and elegant! This happens to me nearly every time I try to use coroutines. I keep refactoring and simplifying until I discover that coroutines are making my code less, not more, readable. Unless I am explicitly modeling a state-transition system or doing asynchronous work with the terrific asyncio library (which wraps all the possible craziness with StopIteration, cascading exceptions, and so on), I rarely find that coroutines are the right tool for the job. That doesn't stop me from attempting them, though, because they are fun.

For the record, the LineInserter coroutine actually is useful in the markdown code executor I use to parse this source file. I need to keep track of more transitions between states (Am I currently looking for a code block? Am I in a code block? Do I need to execute the code block and record the output?) than in the stanza-marking example used here.
So, it has become clear that in Python, there is more than one way to do a lot of things. Luckily, most of these ways are not very obvious, and there is usually “one, and preferably only one, obvious way to do things”, to quote The Zen Of Python. I hope by this point that you are more confused about the relationship between functions, objects, generators and coroutines than ever before. I hope you now know how to write much code that should never be written. But mostly I hope you’ve enjoyed your exploration of these topics. If you’d like to see more useful applications of these and other Python concepts, grab a copy of Python 3 Object-oriented Programming, Second Edition.
Packt
05 Oct 2009
7 min read

Create a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher

How can a company or organization minimize bandwidth costs when maintaining multiple Ubuntu installations? With bandwidth becoming the currency of the new millennium, being responsible with the bandwidth you have can be a real concern. In this article by Christer Edwards, we will learn how to create, maintain and make available a local Ubuntu repository mirror, allowing you to save bandwidth and improve network efficiency with each machine you add to your network. (For more resources on Ubuntu, see here.) Background Before I begin, let me give you some background into what prompted me to initially come up with these solutions. I used to volunteer at local university user groups whenever they held "installfests". I always enjoyed offering my support and expertise with new Linux users. I have to say, the distribution that we helped deploy the most was Ubuntu. The fact that Ubuntu's parent company, Canonical, provided us with pressed CDs didn't hurt! Despite having more CDs than we could handle we still regularly ran into the same problem. Bandwidth. Fetching updates and security errata was always our bottleneck, consistently slowing down our deployments. Imagine you are in a conference room with dozens of other Linux users, each of them trying to use the internet to fetch updates. Imagine how slow and bogged down that internet connection would be. If you've ever been to a large tech conference you've probably experienced this first hand! After a few of these "installfests" I started looking for a solution to our bandwidth issue. I knew that all of these machines were downloading the same updates and security errata, therefore duplicating work that another user had already done. It just seemed so inefficient. There had to be a better way. Eventually I came across two solutions, which I will outline below. Apt-Mirror The first method makes use of a tool called “apt-mirror”. 
It is a perl-based utility for downloading and mirroring the entire contents of a public repository. This may likely include packages that you don't use and will not use, but anything stored in a public repository will also be stored in your mirror. To configure apt-mirror you will need the following: apt-mirror package (sudo aptitude install apt-mirror) apache2 package (sudo aptitude install apache2) roughly 15G of storage per release, per architecture Once these requirements are met you can begin configuring the apt-mirror tool. The main items that you'll need to define are: storage location (base_path) number of download threads (nthreads) release(s) and architecture(s) you want The configuration is done in the /etc/apt/mirror.list file. A default config was put in place when you installed the package, but it is generic. You'll need to update the values as mentioned above. Below I have included a complete configuration which will mirror Ubuntu 8.04 LTS for both 32 and 64 bit installations. It will require nearly 30G of storage space, which I will be putting on a removable drive. The drive will be mounted at /media/STORAGE/. 
# apt-mirror configuration file
##
## The default configuration options (uncomment and change to override)
##
set base_path /media/STORAGE/
# set mirror_path $base_path/mirror
# set skel_path $base_path/skel
# set var_path $base_path/var
#
# set defaultarch <running host architecture>
set nthreads 20
#
# 8.04 "hardy" i386 mirror
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-updates main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-backports main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-proposed main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy main/debian-installer restricted/debian-installer universe/debian-installer multiverse/debian-installer
deb-i386 http://packages.medibuntu.org/ hardy free non-free
# 8.04 "hardy" amd64 mirror
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-updates main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-backports main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-proposed main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy main/debian-installer restricted/debian-installer universe/debian-installer multiverse/debian-installer
deb-amd64 http://packages.medibuntu.org/ hardy free non-free
# Cleaning section
clean http://us.archive.ubuntu.com/
clean http://packages.medibuntu.org/

It should be noted that each of the repository lines within the file should begin with deb-i386 or deb-amd64. Formatting may have changed based on the web formatting.
Once your configuration is saved you can begin to populate your mirror by running the command: apt-mirror Be warned that, based on your internet connection speeds, this could take quite a long time. Hours, if not more than a day for the initial 30G to download. Each time you run the command after the initial download will be much faster, as only incremental updates are downloaded. You may be wondering if it is possible to cancel the transfer before it is finished and begin again where it left off. Yes, I have done this countless times and I have not run into any issues. You should now have a copy of the repository stored locally on your machine. In order for this to be available to other clients you'll need to share the contents over http. This is why we listed Apache as a requirement above. In my example configuration above I mirrored the repository on a removable drive, and mounted it at /media/STORAGE. What I need to do now is make that address available over the web. This can be done by way of a symbolic link. cd /var/www/sudo ln -s /media/STORAGE/mirror/us.archive.ubuntu.com/ubuntu/ ubuntu The commands above will tell the filesystem to follow any requests for “ubuntu” back up to the mounted external drive, where it will find your mirrored contents. If you have any problems with this linking double-check your paths (as compared to those suggested here) and make sure your link points to the ubuntu directory, which is a subdirectory of the mirror you pulled from. If you point anywhere below this point your clients will not properly be able to find the contents. The additional task of keeping this newly downloaded mirror updated with the latest updates can be automated by way of a cron job. By activating the cron job your machine will automatically run the apt-mirror command on a regular daily basis, keeping your mirror up to date without any additional effort on your part. To activate the automated cron job, edit the file /etc/cron.d/apt-mirror. 
There will be a sample entry in the file. Simply uncomment the line and (optionally) change the “4” to an hour more convenient for you. If left with its defaults it will run the apt-mirror command, updating and synchronizing your mirror, every morning at 4:00am. The final step needed, now that you have your repository created, shared over http and configured to automatically update, is to configure your clients to use it. Ubuntu clients define where they should grab errata and security updates in the /etc/apt/sources.list file. This will usually point to the default or regional mirror. To update your clients to use your local mirror instead you'll need to edit the /etc/apt/sources.list and comment the existing entries (you may want to revert to them later!) Once you've commented the entries you can create new entries, this time pointing to your mirror. A sample entry pointing to a local mirror might look something like this: deb http://192.168.0.10/ubuntu hardy main restricted universe multiversedeb http://192.168.0.10/ubuntu hardy-updates main restricted universe multiversedeb http://192.168.0.10/ubuntu hardy-security main restricted universe multiverse Basically what you are doing is recreating your existing entries but replacing the archive.ubuntu.com with the IP of your local mirror. As long as the mirrored contents are made available over http on the mirror-server itself you should have no problems. If you do run into problems check your apache logs for details. By following these simple steps you'll have your own private (even portable!) Ubuntu repository available within your LAN, saving you future bandwidth and avoiding redundant downloads.
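The client-side edit described above also lends itself to scripting. Below is a hedged sketch of my own that works on a scratch copy rather than the real /etc/apt/sources.list; the mirror address 192.168.0.10 is the article's example IP, so adjust it for your network.

```shell
# Point a client at the local mirror: comment out the stock entries in a
# scratch copy of sources.list, then generate replacements that use the
# mirror's IP (192.168.0.10, from the article's sample entries).
SOURCES=$(mktemp)
LOCAL=$(mktemp)
printf '%s\n' \
  'deb http://us.archive.ubuntu.com/ubuntu hardy main restricted universe multiverse' \
  'deb http://us.archive.ubuntu.com/ubuntu hardy-security main restricted universe multiverse' \
  > "$SOURCES"

# Rewritten entries pointing at the local mirror.
sed 's|http://us.archive.ubuntu.com|http://192.168.0.10|' "$SOURCES" > "$LOCAL"

# Originals kept, but commented, so they are easy to revert to later.
sed -i 's|^deb |#deb |' "$SOURCES"

cat "$SOURCES" "$LOCAL"
```

The same substitution applied to the real file (as root, after backing it up) produces exactly the hand-edited result described above.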
Natasha Mathur
10 Jul 2018
9 min read

Writing test functions in Golang [Tutorial]

Go is a modern programming language built for the 21st-century application development. Hardware and technology have advanced significantly over the past decade, and most of the other languages do not take advantage of these technological advancements.  Go allows us to build network applications that take advantage of concurrency and parallelism made available with multicore systems. Testing is an important part of programming, whether it is in Go or in any other language. Go has a straightforward approach to writing tests, and in this tutorial, we will look at some important tools to help with testing. This tutorial is an excerpt from the book ‘Distributed computing with Go’, written by V.N. Nikhil Anurag. There are certain rules and conventions we need to follow to test our code. They are as follows: Source files and associated test files are placed in the same package/folder The name of the test file for any given source file is <source-file-name>_test.go Test functions need to have the "Test" prefix, and the next character in the function name should be capitalized In the remainder of this tutorial, we will look at three files and their associated tests: variadic.go and variadic_test.go addInt.go and addInt_test.go nil_test.go (there isn't any source file for these tests) Along the way, we will introduce any concepts we might use. variadic.go function In order to understand the first set of tests, we need to understand what a variadic function is and how Go handles it. Let's start with the definition: Variadic function is a function that can accept any number of arguments during function call. Given that Go is a statically typed language, the only limitation imposed by the type system on a variadic function is that the indefinite number of arguments passed to it should be of the same data type. However, this does not limit us from passing other variable types. 
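To see these rules in action before opening the article's files, here is a small self-contained sketch of my own (separate from the variadic.go that follows):

```go
package main

import "fmt"

// echo returns its variadic arguments. Inside the function, "numbers" is an
// ordinary []int slice, and it is nil when the caller passes no arguments.
func echo(numbers ...int) []int {
	return numbers
}

func main() {
	fmt.Println(echo() == nil) // true: no arguments yields a nil slice
	fmt.Println(echo(1, 2, 3)) // [1 2 3]
	nums := []int{4, 5, 6}
	fmt.Println(echo(nums...)) // [4 5 6]: an existing slice expands with ...
}
```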
The arguments are received by the function as a slice of elements if arguments are passed, else nil, when none are passed. Let's look at the code to get a better idea: // variadic.go package main func simpleVariadicToSlice(numbers ...int) []int { return numbers } func mixedVariadicToSlice(name string, numbers ...int) (string, []int) { return name, numbers } // Does not work. // func badVariadic(name ...string, numbers ...int) {} We use the ... prefix before the data type to define a function as a variadic function. Note that we can have only one variadic parameter per function and it has to be the last parameter. We can see this error if we uncomment the line for badVariadic and try to test the code. variadic_test.go We would like to test the two valid functions, simpleVariadicToSlice, and mixedVariadicToSlice, for various rules defined above. However, for the sake of brevity, we will test these: simpleVariadicToSlice: This is for no arguments, three arguments, and also to look at how to pass a slice to a variadic function mixedVariadicToSlice: This is to accept a simple argument and a variadic argument Let's now look at the code to test these two functions: // variadic_test.go package main import "testing" func TestSimpleVariadicToSlice(t *testing.T) { // Test for no arguments if val := simpleVariadicToSlice(); val != nil { t.Error("value should be nil", nil) } else { t.Log("simpleVariadicToSlice() -> nil") } // Test for random set of values vals := simpleVariadicToSlice(1, 2, 3) expected := []int{1, 2, 3} isErr := false for i := 0; i < 3; i++ { if vals[i] != expected[i] { isErr = true break } } if isErr { t.Error("value should be []int{1, 2, 3}", vals) } else { t.Log("simpleVariadicToSlice(1, 2, 3) -> []int{1, 2, 3}") } // Test for a slice vals = simpleVariadicToSlice(expected...) 
isErr = false for i := 0; i < 3; i++ { if vals[i] != expected[i] { isErr = true break } } if isErr { t.Error("value should be []int{1, 2, 3}", vals) } else { t.Log("simpleVariadicToSlice([]int{1, 2, 3}...) -> []int{1, 2, 3}") } } func TestMixedVariadicToSlice(t *testing.T) { // Test for simple argument & no variadic arguments name, numbers := mixedVariadicToSlice("Bob") if name == "Bob" && numbers == nil { t.Log("Recieved as expected: Bob, <nil slice>") } else { t.Errorf("Received unexpected values: %s, %s", name, numbers) } } Running tests in variadic_test.go Let's run these tests and see the output. We'll use the -v flag while running the tests to see the output of each individual test: $ go test -v ./{variadic_test.go,variadic.go} === RUN TestSimpleVariadicToSlice --- PASS: TestSimpleVariadicToSlice (0.00s) variadic_test.go:10: simpleVariadicToSlice() -> nil variadic_test.go:26: simpleVariadicToSlice(1, 2, 3) -> []int{1, 2, 3} variadic_test.go:41: simpleVariadicToSlice([]int{1, 2, 3}...) -> []int{1, 2, 3} === RUN TestMixedVariadicToSlice --- PASS: TestMixedVariadicToSlice (0.00s) variadic_test.go:49: Received as expected: Bob, <nil slice> PASS ok command-line-arguments 0.001s addInt.go The tests in variadic_test.go elaborated on the rules for the variadic function. However, you might have noticed that TestSimpleVariadicToSlice ran three tests in its function body, but go test treats it as a single test. Go provides a good way to run multiple tests within a single function, and we shall look them in addInt_test.go. For this example, we will use a very simple function as shown in this code: // addInt.go package main func addInt(numbers ...int) int { sum := 0 for _, num := range numbers { sum += num } return sum } addInt_test.go You might have also noticed in TestSimpleVariadicToSlice that we duplicated a lot of logic, while the only varying factor was the input and expected values. 
One style of testing, known as Table-driven development, defines a table of all the required data to run a test, iterates over the "rows" of the table and runs tests against them. Let's look at the tests we will be testing against no arguments and variadic arguments: // addInt_test.go package main import ( "testing" ) func TestAddInt(t *testing.T) { testCases := []struct { Name string Values []int Expected int }{ {"addInt() -> 0", []int{}, 0}, {"addInt([]int{10, 20, 100}) -> 130", []int{10, 20, 100}, 130}, } for _, tc := range testCases { t.Run(tc.Name, func(t *testing.T) { sum := addInt(tc.Values...) if sum != tc.Expected { t.Errorf("%d != %d", sum, tc.Expected) } else { t.Logf("%d == %d", sum, tc.Expected) } }) } } Running tests in addInt_test.go Let's now run the tests in this file, and we are expecting each of the row in the testCases table, which we ran, to be treated as a separate test: $ go test -v ./{addInt.go,addInt_test.go} === RUN TestAddInt === RUN TestAddInt/addInt()_->_0 === RUN TestAddInt/addInt([]int{10,_20,_100})_->_130 --- PASS: TestAddInt (0.00s) --- PASS: TestAddInt/addInt()_->_0 (0.00s) addInt_test.go:23: 0 == 0 --- PASS: TestAddInt/addInt([]int{10,_20,_100})_->_130 (0.00s) addInt_test.go:23: 130 == 130 PASS ok command-line-arguments 0.001s nil_test.go We can also create tests that are not specific to any particular source file; the only criteria is that the filename needs to have the <text>_test.go form. The tests in nil_test.go elucidate on some useful features of the language which the developer might find useful while writing tests. They are as follows: httptest.NewServer: Imagine the case where we have to test our code against a server that sends back some data. Starting and coordinating a full blown server to access some data is hard. The http.NewServer solves this issue for us. t.Helper: If we use the same logic to pass or fail a lot of testCases, it would make sense to segregate this logic into a separate function. 
However, this would skew the test run call stack. We can see this by commenting t.Helper() in the tests and rerunning go test. We can also format our command-line output to print pretty results. We will show a simple example of adding a tick mark for passed cases and cross mark for failed cases. In the test, we will run a test server, make GET requests on it, and then test the expected output versus actual output: // nil_test.go package main import ( "fmt" "io/ioutil" "net/http" "net/http/httptest" "testing" ) const passMark = "u2713" const failMark = "u2717" func assertResponseEqual(t *testing.T, expected string, actual string) { t.Helper() // comment this line to see tests fail due to 'if expected != actual' if expected != actual { t.Errorf("%s != %s %s", expected, actual, failMark) } else { t.Logf("%s == %s %s", expected, actual, passMark) } } func TestServer(t *testing.T) { testServer := httptest.NewServer( http.HandlerFunc( func(w http.ResponseWriter, r *http.Request) { path := r.RequestURI if path == "/1" { w.Write([]byte("Got 1.")) } else { w.Write([]byte("Got None.")) } })) defer testServer.Close() for _, testCase := range []struct { Name string Path string Expected string }{ {"Request correct URL", "/1", "Got 1."}, {"Request incorrect URL", "/12345", "Got None."}, } { t.Run(testCase.Name, func(t *testing.T) { res, err := http.Get(testServer.URL + testCase.Path) if err != nil { t.Fatal(err) } actual, err := ioutil.ReadAll(res.Body) res.Body.Close() if err != nil { t.Fatal(err) } assertResponseEqual(t, testCase.Expected, fmt.Sprintf("%s", actual)) }) } t.Run("Fail for no reason", func(t *testing.T) { assertResponseEqual(t, "+", "-") }) } Running tests in nil_test.go We run three tests, where two test cases will pass and one will fail. 
This way we can see the tick mark and cross mark in action: $ go test -v ./nil_test.go === RUN TestServer === RUN TestServer/Request_correct_URL === RUN TestServer/Request_incorrect_URL === RUN TestServer/Fail_for_no_reason --- FAIL: TestServer (0.00s) --- PASS: TestServer/Request_correct_URL (0.00s) nil_test.go:55: Got 1. == Got 1. --- PASS: TestServer/Request_incorrect_URL (0.00s) nil_test.go:55: Got None. == Got None. --- FAIL: TestServer/Fail_for_no_reason (0.00s) nil_test.go:59: + != - FAIL exit status 1 FAIL command-line-arguments 0.003s We looked at how to write test functions in Go, and learned a few interesting concepts when dealing with a variadic function and other useful test functions. If you found this post useful, do check out the book  'Distributed Computing with Go' to learn more about testing, Goroutines, RESTful web services, and other concepts in Go. Why is Go the go-to language for cloud-native development? – An interview with Mina Andrawos Systems programming with Go in UNIX and Linux How to build a basic server-side chatbot using Go
Packt
01 Jun 2016
19 min read

Understanding Patterns and Architectures in TypeScript

In this article by Vilic Vane, author of the book TypeScript Design Patterns, we'll study architecture and patterns that are closely related to the language or its common applications. Many topics in this article are related to asynchronous programming. We'll start from a web architecture for Node.js that's based on Promise. This is a larger topic that has interesting ideas involved, including abstractions of response and permission, as well as error handling tips. Then, we'll talk about how to organize modules with ES module syntax. Due to the limited length of this article, some of the related code is aggressively simplified, and nothing more than the idea itself can be applied practically. Promise-based web architecture The most exciting thing about Promise may be the benefits it brings to error handling. In a Promise-based architecture, throwing an error can be safe and pleasant. You don't have to explicitly handle errors when chaining asynchronous operations, and this makes it tougher for mistakes to occur. With the growing usage of ES2015-compatible runtimes, Promise is already there out of the box. We actually have plenty of polyfills for Promise (including my ThenFail, written in TypeScript), as people who write JavaScript are, roughly, the same group of people who reinvent wheels. Promises work great with other Promises: A Promises/A+ compatible implementation should work with other Promises/A+ compatible implementations Promises do their best in a Promise-based architecture If you are new to Promise, you may complain about trying Promise with a callback-based project. You may intend to use helpers provided by Promise libraries, such as Promise.all, but it turns out that you have better alternatives, such as the async library.
So, the reason that makes you decide to switch should not be these helpers (as there are a lot of them for callbacks).They should be because there's an easier way to handle errors or because you want to take the advantages of ES async and awaitfeatures which are based on Promise. Promisifying existing modules or libraries Though Promises do their best with a Promise-based architecture, it is still possible to begin using Promise with a smaller scope by promisifying existing modules or libraries. Taking Node.js style callbacks as an example, this is how we use them: import * as FS from 'fs';   FS.readFile('some-file.txt', 'utf-8', (error, text) => { if (error) {     console.error(error);     return; }   console.log('Content:', text); }); You may expect a promisified version of readFile to look like the following: FS .readFile('some-file.txt', 'utf-8') .then(text => {     console.log('Content:', text); }) .catch(reason => {     Console.error(reason); }); Implementing the promisified version of readFile can be easy as the following: function readFile(path: string, options: any): Promise<string> { return new Promise((resolve, reject) => {     FS.readFile(path, options, (error, result) => {         if (error) { reject(error);         } else {             resolve(result);         }     }); }); } I am using any here for parameter options to reduce the size of demo code, but I would suggest that you donot useany whenever possible in practice. There are libraries that are able to promisify methods automatically. Unfortunately, you may need to write declaration files yourself for the promisified methods if there is no declaration file of the promisified version that is available. Views and controllers in Express Many of us may have already been working with frameworks such as Express. 
This is how we render a view or send back JSON data in Express: import * as Path from 'path'; import * as express from 'express';   let app = express();   app.set('engine', 'hbs'); app.set('views', Path.join(__dirname, '../views'));   app.get('/page', (req, res) => {     res.render('page', {         title: 'Hello, Express!',         content: '...'     }); });   app.get('/data', (req, res) => {     res.json({         version: '0.0.0',         items: []     }); });   app.listen(1337); We will usuallyseparate controller from routing, as follows: import { Request, Response } from 'express';   export function page(req: Request, res: Response): void {     res.render('page', {         title: 'Hello, Express!',         content: '...'     }); } Thus, we may have a better idea of existing routes, and we may have controllers managed more easily. Furthermore, automated routing can be introduced so that we don't always need to update routing manually: import * as glob from 'glob';   let controllersDir = Path.join(__dirname, 'controllers');   let controllerPaths = glob.sync('**/*.js', {     cwd: controllersDir });   for (let path of controllerPaths) {     let controller = require(Path.join(controllersDir, path));     let urlPath = path.replace(/\/g, '/').replace(/.js$/, '');       for (let actionName of Object.keys(controller)) {         app.get(             `/${urlPath}/${actionName}`, controller[actionName] );     } } The preceding implementation is certainly too simple to cover daily usage. However, it displays the one rough idea of how automated routing could work: via conventions that are based on file structures. 
Now, if we are working with asynchronous code that is written in Promises, an action in the controller could be like the following: export function foo(req: Request, res: Response): void {     Promise         .all([             Post.getContent(),             Post.getComments()         ])         .then(([post, comments]) => {             res.render('foo', {                 post,                 comments             });         }); } We use destructuring of an array within a parameter. Promise.all returns a Promise of an array with elements corresponding to values of resolvablesthat are passed in. (A resolvable means a normal value or a Promise-like object that may resolve to a normal value.) However, this is not enough, we need to handle errors properly. Or in some case, the preceding code may fail in silence (which is terrible). In Express, when an error occurs, you should call next (the third argument that is passed into the callback) with the error object, as follows: import { Request, Response, NextFunction } from 'express';   export function foo( req: Request, res: Response, next: NextFunction ): void {     Promise         // ...         .catch(reason => next(reason)); } Now, we are fine with the correctness of this approach, but this is simply not how Promises work. Explicit error handling with callbacks could be eliminated in the scope of controllers, and the easiest way to do this is to return the Promise chain and hand over to code that was previously performing routing logic. So, the controller could be written like the following: export function foo(req: Request, res: Response) {     return Promise         .all([             Post.getContent(),             Post.getComments()         ])         .then(([post, comments]) => {             res.render('foo', {                 post,                 comments             });         }); } Or, can we make this even better? 
Abstraction of response We've already been returning a Promise to tell whether an error occurs. So, for a server error, the Promise actually indicates the result, or in other words, the response of the request. However, why we are still calling res.render()to render the view? The returned Promise object could be an abstraction of the response itself. Think about the following controller again: export class Response {}   export class PageResponse extends Response {     constructor(view: string, data: any) { } }   export function foo(req: Request) {     return Promise         .all([             Post.getContent(),             Post.getComments()         ])         .then(([post, comments]) => {             return new PageResponse('foo', {                 post,                 comments             });         }); } The response object that is returned could vary for a different response output. For example, it could be either a PageResponse like it is in the preceding example, a JSONResponse, a StreamResponse, or even a simple Redirection. As in most of the cases, PageResponse or JSONResponse is applied, and the view of a PageResponse can usually be implied with the controller path and action name.It is useful to have these two responses automatically generated from a plain data object with proper view to render with, as follows: export function foo(req: Request) {     return Promise         .all([             Post.getContent(),             Post.getComments()         ])         .then(([post, comments]) => {             return {                 post,                 comments             };         }); } This is how a Promise-based controller should respond. With this idea in mind, let's update the routing code with an abstraction of responses. Previously, we were passing controller actions directly as Express request handlers. 
Now, we need to do some wrapping up with the actions by resolving the return value, and applying operations that are based on the resolved result, as follows: If it fulfills and it's an instance of Response, apply it to the resobjectthat is passed in by Express. If it fulfills and it's a plain object, construct a PageResponse or a JSONResponse if no view found and apply it to the resobject. If it rejects, call thenext function using this reason. As seen previously,our code was like the following: app.get(`/${urlPath}/${actionName}`, controller[actionName]); Now, it gets a little bit more lines, as follows: let action = controller[actionName];   app.get(`/${urlPath}/${actionName}`, (req, res, next) => {     Promise         .resolve(action(req))         .then(result => {             if (result instanceof Response) {                 result.applyTo(res);             } else if (existsView(actionName)) {                 new PageResponse(actionName, result).applyTo(res);             } else {                 new JSONResponse(result).applyTo(res);             }         })         .catch(reason => next(reason)); });   However, so far we can only handle GET requests as we hardcoded app.get() in our router implementation. The poor view matching logic can hardly be used in practice either. We need to make these actions configurable, and ES decorators could perform a good job here: export default class Controller { @get({     View: 'custom-view-path' })     foo(req: Request) {         return {             title: 'Action foo',             content: 'Content of action foo'         };     } } I'll leave the implementation to you, and feel free to make them awesome. Abstraction of permission Permission plays an important role in a project, especially in systems that have different user groups. For example, a forum. The abstraction of permission should be extendable to satisfy changing requirements, and it should be easy to use as well. 
Here, we are going to talk about the abstraction of permission in the level of controller actions. Consider the legibility of performing one or more actions a privilege. The permission of a user may consist of several privileges, and usually most of the users at the same level would have the same set of privileges. So, we may have a larger concept, namely groups. The abstraction could either work based on both groups and privileges, or work based on only privileges (groups are now just aliases to sets of privileges): Abstraction that validates based on privileges and groups at the same time is easier to build. You do not need to create a large list of which actions can be performed for a certain group of user, as granular privileges are only required when necessary. Abstraction that validates based on privileges has better control and more flexibility to describe the permission. For example, you can remove a small set of privileges from the permission of a user easily. However, both approaches have similar upper-level abstractions, and they differ mostly on implementations. The general structure of the permission abstractions that we've talked about is like in the following diagram: The participants include the following: Privilege: This describes detailed privilege corresponding to specific actions Group: This defines a set of privileges Permission: This describes what a user is capable of doing, consist of groups that the user belongs to, and the privileges that the user has. Permission descriptor: This describes how the permission of a user works and consists of possible groups and privileges. Expected errors A great concern that was wiped away after using Promises is that we do not need to worry about whether throwing an error in a callback would crash the application most of the time. The error will flow through the Promises chain and if not caught, it will be handled by our router. Errors can be roughly divided as expected errors and unexpected errors. 
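Before moving on to errors, the four permission participants listed above can be sketched in TypeScript. The names mirror the article's terminology; the membership logic is my own assumption of how such an abstraction could validate access:

```typescript
// Sketch of the permission participants: a Privilege names one capability,
// a Group bundles privileges, and a Permission holds the groups a user
// belongs to plus any privileges granted directly.
type Privilege = string;

interface Group {
    name: string;
    privileges: Privilege[];
}

class Permission {
    constructor(
        public groups: Group[] = [],
        public privileges: Privilege[] = []
    ) {}

    // Granted if the privilege is held directly, or through any group.
    can(privilege: Privilege): boolean {
        return (
            this.privileges.indexOf(privilege) !== -1 ||
            this.groups.some(g => g.privileges.indexOf(privilege) !== -1)
        );
    }
}
```

A permission descriptor would then enumerate the possible groups and privileges, and the routing layer could call permission.can(...) before invoking a controller action.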
Expected errors are usually caused by incorrect input or foreseeable exceptions, and unexpected errors are usually caused by bugs or by other libraries that the project relies on.

For expected errors, we usually want to give users a friendly response with readable error messages and codes, so that users can help themselves by searching for the error, or report it to us with useful context. For unexpected errors, we would also want a reasonable response (usually a message describing an unknown error), a detailed server-side log (including the real error name, message, stack information, and so on), and even alerts to let the team know as soon as possible.

Defining and throwing expected errors

The router will need to handle different types of errors, and an easy way to achieve this is to subclass a universal ExpectedError class and throw its instances, as follows:

```typescript
import ExtendableError from 'extendable-error';

class ExpectedError extends ExtendableError {
    constructor(
        message: string,
        public code: number
    ) {
        super(message);
    }
}
```

The extendable-error is a package of mine that handles the stack trace and the message property. You can directly extend the Error class as well.

Thus, when receiving an expected error, we can safely output the error name and message as part of the response. If the error is not an instance of ExpectedError, we can display a predefined unknown-error message instead.

Transforming errors

Some errors, such as those caused by unstable networks or remote services, are expected. We may want to catch these errors and throw them out again as expected errors. However, it could be rather tedious to actually do this everywhere. A centralized error-transforming process can be applied to reduce the effort required to manage these errors. The transforming process includes two parts: filtering (or matching) and transforming.

These are the approaches to filtering errors:

- Filter by error class: Many third-party libraries throw errors of certain classes.
Taking Sequelize (a popular Node.js ORM) as an example, it has DatabaseError, ConnectionError, ValidationError, and so on. By filtering errors based on whether they are instances of a certain error class, we may easily pick out target errors from the pile.

- Filter by string or regular expression: Sometimes a library might throw errors that are instances of the Error class itself instead of its subclasses. This makes these errors hard to distinguish from others. In this situation, we can filter these errors by matching their messages against keywords or regular expressions.
- Filter by scope: It's possible that instances of the same error class with the same error message should result in different responses. One reason may be that the operation throwing a certain error is at a lower level, but it is being used by upper structures within different scopes. Thus, a scope mark can be added to these errors to make them easier to filter.

There could be more ways to filter errors, and they are usually able to cooperate as well. By properly applying these filters and transforming errors, we can reduce noise, analyze what's going on within a system, and locate problems faster when they occur.

Modularizing the project

Before ES2015, there were actually a lot of module solutions for JavaScript that worked. The two most famous of them might be AMD and CommonJS. AMD is designed for asynchronous module loading, which is mostly applied in browsers, while CommonJS performs module loading synchronously, and this is the way that the Node.js module system works.

To make it work asynchronously, writing an AMD module takes more characters. Due to the popularity of tools such as browserify and webpack, CommonJS became popular even for browser projects. Proper granularity of internal modules can help a project keep a healthy structure.
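Returning briefly to error transformation, the filtering approaches above can be combined into one centralized chain. The following is a hedged sketch: ErrorTransformerChain and the ConnectionError stub are my own illustrative names, not from the original text, and ExpectedError is redefined here (extending Error directly, as the text mentions is possible) to keep the example self-contained:

```typescript
// ExpectedError from the previous section; Object.setPrototypeOf keeps
// instanceof reliable when compiling to older ECMAScript targets.
class ExpectedError extends Error {
    constructor(message: string, public code: number) {
        super(message);
        Object.setPrototypeOf(this, ExpectedError.prototype);
    }
}

// Stand-in for a third-party error class such as Sequelize's ConnectionError.
class ConnectionError extends Error {
    constructor(message?: string) {
        super(message);
        Object.setPrototypeOf(this, ConnectionError.prototype);
    }
}

interface ErrorTransformer {
    matches(error: any): boolean;
    transform(error: any): Error;
}

class ErrorTransformerChain {
    private transformers: ErrorTransformer[] = [];

    register(transformer: ErrorTransformer): void {
        this.transformers.push(transformer);
    }

    // Run an error through the first matching transformer; unmatched
    // errors pass through and remain "unexpected".
    apply(error: any): Error {
        for (const transformer of this.transformers) {
            if (transformer.matches(error)) {
                return transformer.transform(error);
            }
        }
        return error;
    }
}

const chain = new ErrorTransformerChain();

// Filter by error class:
chain.register({
    matches: error => error instanceof ConnectionError,
    transform: () => new ExpectedError('Service temporarily unavailable', 503)
});

// Filter by string or regular expression:
chain.register({
    matches: error => /timed? ?out/i.test(String(error && error.message)),
    transform: () => new ExpectedError('The operation timed out', 504)
});
```

The router's catch handler would then call chain.apply(reason) first and check instanceof ExpectedError on the result to decide between echoing the message and code back to the user or responding with the predefined unknown-error message.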
Consider a project structure like the following:

```
project
├─controllers
├─core
│  │ index.ts
│  │
│  ├─product
│  │   index.ts
│  │   order.ts
│  │   shipping.ts
│  │
│  └─user
│      index.ts
│      account.ts
│      statistics.ts
│
├─helpers
├─models
├─utils
└─views
```

Let's assume that we are writing a controller file that's going to import a module defined by the core/product/order.ts file. Previously, using CommonJS style's require, we would write the following:

```typescript
const Order = require('../core/product/order');
```

Now, with the new ES import syntax, this would be like the following:

```typescript
import * as Order from '../core/product/order';
```

Wait, isn't this essentially the same? Sort of. However, you may have noticed several index.ts files that I've put into folders. Now, in the core/product/index.ts file, we could have the following:

```typescript
import * as Order from './order';
import * as Shipping from './shipping';

export { Order, Shipping }
```

Or, we could also have the following:

```typescript
export * from './order';
export * from './shipping';
```

What's the difference? The idea behind these two approaches of re-exporting modules can vary. The first style works better when we treat Order and Shipping as namespaces, under which the identifier names may not be easy to distinguish from one another. With this style, the files are the natural boundaries for building these namespaces. The second style weakens the namespace property of the two files, and then uses them as tools to organize objects and classes under the same larger category.

A good thing about using these files as namespaces is that multiple-level re-exporting is fine, while weakening namespaces makes it harder to understand different identifier names as the number of re-exporting levels grows.

Summary

In this article, we discussed some interesting ideas and an architecture formed by these ideas. Most of these topics focused on limited examples and did their own jobs. However, we also discussed ideas about putting a whole system together.

Natasha Mathur
26 Jul 2018
9 min read

Quantum Computing is poised to take a quantum leap with industries and governments on its side

“We’re really just, I would say, six months out from having quantum computers that will outperform classical computers in some capacity."
-- Joseph Emerson, Quantum Benchmark CEO

There are very few people who would say that their lives haven't been changed by computers. But there are certain tasks that even a computer isn't capable of solving efficiently, and that's where Quantum Computers come into the picture. They are incredibly powerful machines that process information at a subatomic level. Quantum Computing leverages the logic and principles of Quantum Mechanics. Quantum computers operate one million times faster than any other device that you're currently using. They are a whole different breed of machines that promise to revolutionize computing.

How do Quantum computers differ from traditional computers?

To start with, let's look at what these computers look like. Unlike your mobile or desktop computing devices, Quantum Computers will never be able to fit in your pocket and you cannot station these at desks. At least, for the next few years. Also, these are fragile computers that look like vacuum cells or tubes with a bunch of lasers shining into them, and the whole apparatus must be kept at temperatures near absolute zero at all times.

IBM Research

Secondly, let's look at the underlying mechanics of how they each compute. Quantum Computing is different from classical digital computing in the sense that classical computing requires data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In Quantum Computing, data units called Quantum bits, or qubits, can be present in more than one state at a time, called "superpositions" of states. Two particles can also exhibit "entanglement," where changing one state may instantaneously affect the other.

Thirdly, let's look at what makes up these machines, that is, their hardware and software components.
A regular computer's hardware includes components such as a CPU, a monitor, and routers for communicating across the computer network. The software includes system programs and different protocols that run on top of the seven layers, namely the physical, data-link, network, transport, session, presentation, and application layers.

A Quantum computer also consists of hardware and software components. There are ion traps, semiconductors, vacuum cells, Josephson junctions, and wire loops on the hardware side. The software and protocols within a Quantum computer are divided into five different layers, namely the physical layer, the virtual layer, the Quantum Error Correction (QEC) layer, the logical layer, and the application layer.

Layered Quantum Computer architecture

Quantum Computing: A rapidly growing field

Quantum Computing is special and is advancing on a large scale. According to a new market research report by MarketsandMarkets, the Quantum Computing market will be worth 495.3 million USD by 2023. Also, as per ABI Research, total revenue generated from quantum computing services will exceed $15 billion by 2028. Matthew Brisse, research VP at Gartner, states, "Quantum computing is heavily hyped and evolving at different rates, but it should not be ignored".

Now, there isn't any certainty from the Quantum Computing world about when it will make the shift from scientific discoveries to applications for the average user, but there are different areas, like pharmaceuticals, Machine Learning, security, and so on, that are leveraging the potential of Quantum Computing. Apart from these industries, even countries are getting their feet wet in the field of Quantum Computing lately by joining the "Quantum arms race" (the race for excelling in Quantum technology).

How is the tech industry driving Quantum Computing forward?
Both big tech giants like IBM, Google, and Microsoft and startups like Rigetti, D-Wave, and Quantum Benchmark are slowly, but steadily, bringing the Quantum Computing era a step closer to us.

Big Tech's big bets on Quantum Computing

IBM established a landmark in the quantum computing world back in November 2017 when it announced its first powerful quantum computer that can handle 50 qubits. The company is also working on making its 20-qubit system available through its cloud computing platform. It is collaborating with startups such as Quantum Benchmark, Zapata Computing, and 1Qbit, among others, to further accelerate Quantum Computing.

Google announced Bristlecone, the largest 72-qubit quantum computer, earlier this year, which promises to provide a testbed for research into the scalability and error rates of qubit technology. Google, along with IBM, plans to commercialize its quantum computing technologies in the next few years.

Microsoft is also working on Quantum Computing, with plans to build a topological quantum computer. This is an effort to bring a practical quantum computer for commercial use sooner. It also introduced a language called Q# (Q Sharp) last year, along with tools to help coders develop software for quantum computers. "We want to solve today's unsolvable problems and we have an opportunity with a unique, differentiated technology to do that," mentioned Todd Holmdahl, Corporate VP of Microsoft Quantum.

Startups leading the quantum computing revolution - Quantum Valley, anyone?

Apart from these major tech giants, startups are also stepping up their Quantum Computing game. D-Wave Systems Inc., a Canadian company, was the first to sell a quantum computer, named D-Wave One, back in 2011, even though the usefulness of this computer is limited. These quantum computers are based on adiabatic quantum computation and have been sold to government agencies, aerospace, and cybersecurity vendors.
Earlier this year, D-Wave Systems announced that it has completed testing a working prototype of its next-generation processor, as well as the installation of a D-Wave 2000Q system for a customer.

Another startup worth mentioning is San Francisco-based Rigetti Computing, which is racing against Google, Microsoft, IBM, and Intel on quantum computing projects. Rigetti has raised nearly $70 million for the development of quantum computers. According to Chad Rigetti, CEO at Rigetti, "This is going to be a very large industry—every major organization in the world will have to have a strategy for how to use this technology".

Lastly, Quantum Benchmark, another Canadian startup, is collaborating with Google to support the development of quantum computing. Quantum Benchmark's True-Q technology has been integrated with Cirq, Google's latest open-source quantum computing framework.

Looking at the efforts these companies are putting in to boost the growth of Quantum Computing, it's hard to say who will win the quantum race. But one thing is clear: just as Silicon Valley drove product innovation and software adoption during the early days of computing, having many businesses work together and compete with each other may not be a bad idea for quantum computing.

How are countries preparing for the Quantum arms race?

Though Quantum Computing hasn't yet made its way to the consumer market, its remarkable progress in research and its infinite potential have now caught the attention of nations worldwide. China, the U.S., and other great powers are now joining the Quantum arms race. But why are these governments so serious about investing in the research and development of Quantum Computing?
The answer is simple: Quantum Computing will be a huge competitive advantage to the government that finds breakthroughs with its Quantum technology-based innovation, as it will be able to cripple militaries, topple the global economy, and yield spectacular and quick solutions to complex problems. It further has the power to revolutionize everything in today's world, from cellular communications and navigation to sensors and imaging.

China continues to leap ahead in the quantum race

China has been consistently investing billions of dollars in the field of Quantum Computing. From building its first form of quantum computer back in 2017 to coming out with the world's first unhackable "quantum satellite" in 2016, China is clearly killing it when it comes to Quantum Computing. Last year, the Chinese government also funded $10 billion for the construction of the world's biggest quantum research facility, planned to open in 2020. The country plans to develop quantum computers and perform quantum metrology to support the military and national defense efforts.

The U.S. is pushing hard to win the race

Americans are also trying to up their game and win the Quantum arms race. For instance, the Pentagon is working on applying quantum computing to the future U.S. military. It plans to establish highly secure and encrypted communications for satellites, along with ensuring accurate navigation that does not need GPS signals. In fact, Congress has proposed $800 million in funding for the Pentagon's quantum projects over the next five years.

Similarly, the Defense Advanced Research Projects Agency (DARPA) is interested in exploring Quantum Computing to improve general computing performance as well as artificial intelligence and machine learning systems. Other than that, the U.S. government spends about $200 million per year on quantum research.

The UK is stepping up its game

The United Kingdom is also not far behind in the Quantum arms race.
It has a $400 million program underway for quantum-based sensing and timing. The European Union is also planning to invest $1 billion, spread over 10 years, in projects involving scientific research and the development of devices for sensing, communication, simulation, and computing. Other great powers such as Canada, Australia, and Israel are also acquainting themselves with the exciting world of Quantum Computing. These governments are confident that whoever makes it first gets an upper hand for life.

The Quantum Computing revolution has begun

Quantum Computing has the power to lead to revolutionary breakthroughs, even though quantum computers that can outperform regular computers are not there yet. Also, this is quite a complex technology which a lot of people find hard to understand, as the rules of the quantum world differ drastically from those of the physical world.

While all the progress made is constrained to research labs at the moment, Quantum Computing has a lot of potential, given the intense development and investments that are going into it from governments and large corporations across the globe. It's likely that we'll be able to see working prototypes of quantum computers emerge soon. With the Quantum revolution here, the possibilities that lie ahead are limitless. Perhaps we could finally crack the big bang theory!? Or maybe quantum computing powered by AI will just speed up the arrival of singularity.

Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation

PyCon US 2018 Highlights: Quantum computing, blockchains, and serverless rule!
Bhagyashree R
10 Sep 2019
6 min read

Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says “yes and no”

At Scala Days Lausanne 2019 in July, Martin Odersky, the lead designer of Scala, gave a tour of the upcoming major version, Scala 3.0. He talked about the roadmap to Scala 3.0, its new features, how its situation is different from Python 2 vs 3, and much more.

Roadmap to Scala 3.0

Odersky announced that "Scala 3.0 has almost arrived" since all the features are fleshed out, with implementations in the latest releases of Dotty, the next-generation compiler for Scala. The team plans to go into feature freeze and release Scala 3.0 M1 in fall this year. Following that, the team will focus on stabilization, completing the SIP process, and writing specs and user docs. They will also work on community builds, compatibility, and migration tasks. All these tasks will take about a year, so we can expect the Scala 3.0 release in fall 2020.

Scala 2.13 was released in June this year. It shipped with redesigned collections, an updated futures implementation, and language changes including literal types, partial unification on by default, by-name implicits, and macro annotations, among others. The team is also simultaneously working on its next release, Scala 2.14, whose main focus will be to ease the migration process from Scala 2 to 3 by defining migration tools, shim libraries, targeted deprecations, and more.

What's new in this major release

There is a whole lot of improvements coming in Scala 3.0, some of which Odersky discussed in his talk:

- Scala will drop the 'new' keyword: Starting with Scala 3.0, you will be able to omit 'new' from almost all instance creations. With this change, developers will no longer have to define a case class just to get nice constructor calls. Also, this will prevent accidental infinite loops in cases when 'apply' has the same arguments as the constructor.
- Top-level definitions: In Scala 3.0, top-level definitions will be added as a replacement for package objects. This is because only one package object definition is allowed per package.
Also, a trait or class defined in the package object is different from one defined in the package, which can lead to unexpected behavior.

- Redesigned enumeration support: Previously, Scala did not provide a very straightforward way to define enums. With this update, developers will have a simple way to define new types with a finite number of values or constructions. They will also be able to add parameters and define fields and methods.
- Union types: In previous versions, union types could be defined with the help of constructs such as Either or subtyping hierarchies, but these constructs are bulkier. Adding union types to the language will fix Scala's least upper bounds problem and provide added modelling power.
- Extension methods: With extension methods, you can define methods that can be used infix without any boilerplate. These will essentially replace implicit classes.
- Delegates: Implicits are a "bedrock of programming" in Scala. However, they suffer from several limitations. Odersky calls implicit conversions a "recipe for disaster" because they tend to interact very badly with each other and add too much implicitness. Delegates will be their simpler and safer alternative.
- Functions everywhere: In Scala, functions and methods are two different things. While methods are members of classes and objects, functions are objects themselves. Until now, methods were quite powerful compared to functions: they can be dependent, polymorphic, and implicit. With Scala 3.0, these properties will be associated with functions as well.

Recent discussions regarding the updates in Scala 3.0

A minimal alternative for scala-reflect and TypeTag

Scala 3.0 will drop support for scala-reflect and TypeTag. Also, there hasn't been much discussion about an alternative. However, some developers believe that it is an important feature currently in use by many projects, including Spark and doobie.
Explaining the reason behind dropping the support, a SIP committee member wrote on the discussion forum, "The goal in Scala 3 is what we had before scala-reflect. For use-cases where you only need an 80% solution, you should be able to accomplish that with straight-up Java reflection. If you need more, TASTY can provide you the basics. However, we don't think a 100% solution is something folks need, and it's unclear if there should be a "core" implementation that is not 100%."

Odersky shared that Scala 3.0 has quoted.Type as an alternative to TypeTag. He commented, "Scala 3 has the quoted package, with quoted.Expr as a representation of expressions and quoted.Type as a representation of types. quoted.Type essentially replaces TypeTag. It does not have the same API but has similar functionality. It should be easier to use since it integrates well with quoted terms and pattern matching." Follow the discussion on Scala Contributors.

Python-style indentation (for now experimental)

Last month, Odersky proposed bringing indentation-based syntax to Scala while also supporting the existing brace-based syntax. This is because when Scala was first created, most languages used braces; with time, however, indentation-based syntax has become conventional. Listing the reasons behind this change, Odersky wrote:

- The most widely taught language is now (or will be soon, in any case) Python, which is indentation based.
- Other popular functional languages are also indentation based (e.g. Haskell, F#, Elm, Agda, Idris).
- Documentation and configuration files have shifted from HTML and XML to markdown and yaml, which are both indentation based.

So by now indentation is very natural, even obvious, to developers. There's a chance that anything else will increasingly be considered "crufty".

Odersky on whether Scala 3.0 is a new language

Odersky answers this with both yes and no. Yes, because Scala 3.0 will include several language changes, including feature removals.
The new constructs introduced will improve user experience and onboarding dramatically. There will also be a need to rewrite current Scala books to reflect the recent developments. No, because it will still be Scala and all core constructs will still be the same.

He concludes, "Between yes and no, I think the fairest answer is to say it is really a process. Scala 3 keeps most constructs of Scala 2.13, alongside the new ones. Some constructs like old implicits and so on will be phased out in the 3.x release train. So, that requires some temporary duplication in the language, but the end result should be a more compact and regular language."

Comparing with Python 2 and 3, Odersky believes that Scala's situation is better because of static typing and binary compatibility. The current version of Dotty can be linked with Scala 2.12 or 2.13 files. He shared that in the future, it will be possible to have a Dotty library module that can be used by both Scala 2 and 3 modules.

Read also: Core Python team confirms sunsetting Python 2 on January 1, 2020

Watch Odersky's talk to know more in detail.

https://www.youtube.com/watch?v=_Rnrx2lo9cw&list=PLLMLOC3WM2r460iOm_Hx1lk6NkZb8Pj6A

Other news in programming

Implementing memory management with Golang's garbage collector

Golang 1.13 module mirror, index, and Checksum database are now production-ready

Why Perl 6 is considering a name change?

Bhagyashree R
04 Dec 2019
10 min read

Microsoft technology evangelist Matthew Weston on how Microsoft PowerApps is democratizing app development [Interview]

In recent years, we have seen a wave of app-building tools that enable users to be creators. One such powerful tool is Microsoft PowerApps, a full-featured low-code/no-code platform. This platform aims to empower "all developers" to quickly create business web and mobile apps that fit best with their use cases. Microsoft defines "all developers" as citizen developers, IT developers, and pro developers.

To know what exactly PowerApps is and its use cases, we sat down with Matthew Weston, the author of Learn Microsoft PowerApps. Weston was kind enough to share a few tips for creating user-friendly and attractive UI/UX. He also shared his opinion on the latest updates in the Power Platform, including AI Builder, Portals, and more.

Further Learning: Weston's book, Learn Microsoft PowerApps, will guide you in creating powerful and productive apps that will help you transform old and inefficient processes and workflows in your organization. In this book, you'll explore a variety of built-in templates and understand the key differences between types of PowerApps, such as canvas and model-driven apps. You'll learn how to generate and integrate apps directly with SharePoint, gain an understanding of PowerApps key components such as connectors and formulas, and much more.

Microsoft PowerApps: What is it and why is it important

With the dawn of InfoPath, a tool for creating user-designed form templates without coding, came Microsoft PowerApps. Introduced in 2016, Microsoft PowerApps aims to democratize app development by allowing users to build cross-device custom business apps without writing code, and provides an extensible platform to professional developers.

Weston explains, "For me, PowerApps is a solution to a business problem.
It is designed to be picked up by business users and developers alike to produce apps that can add a lot of business value without incurring the large development costs associated with completely custom-developed solutions."

When we asked how his journey with PowerApps started, he said, "I personally ended up developing PowerApps as, for me, it was an alternative to using InfoPath when I was developing for SharePoint. From there I started to explore more and more of its capabilities and began to identify more and more in terms of use cases for the application. It meant that I didn't have to worry about controls, authentication, or integration with other areas of Office 365."

Microsoft PowerApps is a building block of the Power Platform, a collection of products for business intelligence, app development, and app connectivity. Along with PowerApps, this platform includes Power BI and Power Automate (earlier known as Flow). These sit on top of the Common Data Service (CDS), which stores all of your structured business data. On top of the CDS, you have over 280 data connectors for connecting to popular services and on-premise data sources, such as SharePoint, SQL Server, Office 365, and Salesforce, among others. All these tools are designed to work together seamlessly. You can analyze your data with Power BI, then act on the data through web and mobile experiences with PowerApps, or automate tasks in the background using Power Automate.

Explaining its importance, Weston said, "Like all things within Office 365, using a single application allows you to find a good solution. Combining multiple applications allows you to find a great solution, and PowerApps fits into that thought process.
Within the Power Platform, you can really use PowerApps and Power Automate together in a way that provides a high-quality user interface with a powerful automation platform doing the heavy processing."

"Businesses should invest in PowerApps in order to maximize on the subscription costs which are already being paid as a result of holding Office 365 licenses. It can be used to build forms, and apps which will automatically integrate with the rest of Office 365 along with having the mobile app to promote flexible working," he adds.

What skills do you need to build with PowerApps

PowerApps is said to be at the forefront of the "Low Code, More Power" revolution. This essentially means that anyone can build a business-centric application; it doesn't matter whether you are in finance, HR, or IT. You can use your existing knowledge of PowerPoint or Excel concepts and formulas to build business apps.

Weston shared his perspective, "I believe that PowerApps empowers every user to be able to create an app, even if it is only quite basic. Most users possess the ability to write formulas within Microsoft Excel, and those same skills can be transferred across to PowerApps to write formulas and create logic."

He adds, "Whilst the above is true, I do find that you need to have a logical mind in order to really create rich apps using the Power Platform, and this often comes more naturally to developers. Saying that, I have met people who are NOT developers, and have actually taken to the platform in a much better way."

Since PowerApps is a low-code platform, you might be wondering what value it holds for developers. We asked Weston what he thinks about developers investing time in learning PowerApps when there are some great cross-platform development frameworks like React Native or Ionic. Weston said, "For developers, and I am from a development background, PowerApps provides a quick way to be able to create apps for your users without having to write code.
You can use formulae and controls which have already been created for you, reducing the time that you need to invest in development, and instead you can spend the time creating solutions to business problems."

Along with enabling developers to rapidly publish apps at scale, PowerApps does a lot of heavy lifting around app architecture and also gives you real-time feedback as you make changes to your app. This year, Microsoft also released the PowerApps component framework, which enables developers to code custom controls and use them in PowerApps. You also have tooling to use alongside the PowerApps component framework for a better end-to-end development experience: the new PowerApps CLI and Visual Studio plugins.

On the new PowerApps capabilities: AI Builder, Portals, and more

At this year's Microsoft Ignite, the Power Platform team announced almost 400 improvements and new features to the Power Platform. The new AI Builder allows you to build and train your own AI model or choose from pre-built models. Another feature is PowerApps Portals, using which organizations can create portals and share them with both internal and external users. We also have UI flows, currently a preview feature, which brings Robotic Process Automation (RPA) capabilities to Power Automate.

Weston said, "I love to use the Power Platform for integration tasks as much as creating front end interfaces, especially around Power Automate. I have created automations which have taken two systems, which won't natively talk to each other, and process data back and forth without writing a single piece of code."

Weston is already excited to try UI flows: "I'm really looking forward to getting my hands on UI flows and seeing what I can create with those, especially for interacting with solutions which don't have the usual web service interfaces available to them." However, he does have some concerns regarding RPA.
“My only reservation about this is that Robotic Process Automation, which this is effectively doing, is usually a deep specialism, so I’m keen to understand how this fits into the whole citizen developer concept,” he shared. Some tips for building secure, user-friendly, and optimized PowerApps When it comes to application development, security and responsiveness are always on the top in the priority list. Weston suggests, “Security is always my number one concern, as that drives my choice for the data source as well as how I’m going to structure the data once my selection has been made. It is always harder to consider security at the end, when you try to shoehorn a security model onto an app and data structure which isn’t necessarily going to accept it.” “The key thing is to never take your data for granted, secure everything at the data layer first and foremost, and then carry those considerations through into the front end. Office 365 employs the concept of security trimming, whereby if you don’t have access to something then you don’t see it. This will automatically apply to the data, but the same should also be done to the screens. If a user shouldn’t be seeing the admin screen, don’t even show them a link to get there in the first place,” he adds. For responsiveness, he suggests, “...if an app is slow to respond then the immediate perception about the app is negative. There are various things that can be done if working directly with the data source, such as pulling data into a collection so that data interactions take place locally and then sync in the background.” After security, another important aspect of building an app is user-friendly and attractive UI and UX. Sharing a few tips for enhancing your app's UI and UX functionality, Weston said, “When creating your user interface, try to find the balance between images and textual information. 
Quite often I see PowerApps created which are just line after line of text, and that immediately switches me off.” He adds, “It could be the best app in the world that does everything I’d ever want, but if it doesn’t grip me visually then I probably won’t be using it. Likewise, I’ve seen apps go the other way and have far too many images and not enough text. It’s a fine balance, and one that’s quite hard.” Sharing a few ways for optimizing your app performance, Weston said, “Some basic things which I normally do is to load data on the OnVisible event for my first screen rather than loading everything on the App OnStart. This means that the app can load and can then be pulling in data once something is on the screen. This is generally the way that most websites work now, they just get something on the screen to keep the user happy and then pull data in.” “Also, give consideration to the number of calls back and forth to the data source as this will have an impact on the app performance. Only read and write when I really need to, and if I can, I pull as much of the data into collections as possible so that I have that temporary cache to work with,” he added. About the author Matthew Weston is an Office 365 and SharePoint consultant based in the Midlands in the United Kingdom. Matthew is a passionate evangelist of Microsoft technology, blogging and presenting about Office 365 and SharePoint at every opportunity. He has been creating and developing with PowerApps since it became generally available at the end of 2016. Usually, Matthew can be seen presenting within the community about PowerApps and Flow at SharePoint Saturdays, PowerApps and Flow User Groups, and Office 365 and SharePoint User Groups. Matthew is a Microsoft Certified Trainer (MCT) and a Microsoft Certified Solutions Expert (MCSE) in Productivity. Check out Weston’s latest book, Learn Microsoft PowerApps on PacktPub. 
This book is a step-by-step guide that will help you create lightweight business mobile apps with PowerApps.  Along with exploring the different built-in templates and types of PowerApps, you will generate and integrate apps directly with SharePoint. The book will further help you understand how PowerApps can use several Microsoft Power Automate and Azure functionalities to improve your applications. Follow Matthew Weston on Twitter: @MattWeston365. Denys Vuika on building secure and performant Electron apps, and more Founder & CEO of Odoo, Fabien Pinckaers discusses the new Odoo 13 framework Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]
Implement an effective CRM system in Odoo 11 [Tutorial]

Sugandha Lahoti
18 Jul 2018
18 min read
Until recently, most business and financial systems had product-focused designs while records and fields maintained basic customer information, processes, and reporting typically revolved around product-related transactions. In the past, businesses were centered on specific products, but now the focus has shifted to center the business on the customer. The Customer Relationship Management (CRM) system provides the tools and reporting necessary to manage customer information and interactions. In this article, we will take a look at what it takes to implement a CRM system in Odoo 11 as part of an overall business strategy. We will also install the CRM application and setup salespersons that can be assigned to our customers. This article is an excerpt from the book, Working with Odoo 11 - Third Edition by Greg Moss. In this book, you will learn to configure, manage, and customize your Odoo system. Using CRM as a business strategy It is critical that the sales people share account knowledge and completely understand the features and capabilities of the system. They often have existing tools that they have relied on for many years. Without clear objectives and goals for the entire sales team, it is likely that they will not use the tool. A plan must be implemented to spend time training and encouraging the sharing of knowledge to successfully implement a CRM system. Installing the CRM application If you have not installed the CRM module, log in as the administrator and then click on the Apps menu. In a few seconds, the list of available apps will appear. The CRM will likely be in the top-left corner: Click on Install to set up the CRM application. Look at the CRM Dashboard Like with the installation of the Sales application, Odoo takes you to the Discuss menu. Click on Sales to see the new changes after installing the CRM application. New to Odoo 10 is an improved CRM Dashboard that provides you a friendly welcome message when you first install the application. 
You can use the dashboard to get an overview of your sales pipelines and get easy access to the most common actions within CRM. Assigning the sales representative or account manager In Odoo 10, like in most CRM systems, the sales representative or account manager plays an important role. Typically, this is the person that will ultimately be responsible for the customer account and a satisfactory customer experience. While most often a company will use real people as their salespeople, it is certainly possible to instead have a salesperson record refer to a group, or even a sub-contracted support service. We will begin by creating a salesperson that will handle standard customer accounts. Note that a sales representative is also a user in the Odoo system. Create a new salesperson by going to the Settings menu, selecting Users, and then clicking the Create button. The new user form will appear. We have filled in the form with values for a fictional salesperson, Terry Zeigler. The following is a screenshot of the user's Access Rights tab: Specifying the name of the user You specify the username. Unlike some systems that provide separate first name and last name fields, with Odoo you specify the full name within a single field. Email address Beginning in Odoo 9, the user and login form prompts for email as opposed to username. This practice has continued in Odoo version 10 as well. It is still possible to use a user name instead of email address, but given the strong encouragement to use email address in Odoo 9 and Odoo 10, it is possible that in future versions of Odoo the requirement to provide an email address may be more strictly enforced. Access Rights The Access Rights tab lets you control which applications the user will be able to access. By default, Odoo will specify Mr.Ziegler as an employee so we will accept that default. 
Depending on the applications you may have already installed or dependencies Odoo may add in various releases, it is possible that you will have other Access Rights listed. Sales application settings When setting up your sales people in Odoo 10, you have three different options on how much access an individual user has to the sales system: User: Own Documents Only This is the most restrictive access to the sales application. A user with this access level is only allowed to see the documents they have entered themselves or which have been assigned to them. They will not be able to see Leads assigned to other salespeople in the sales application. User: All Documents With this setting, the user will have access to all documents within the sales application. Manager The Manager setting is the highest access level in the Odoo sales system. With this access level, the user can see all Leads as well as access the configuration options of the sales application. The Manager setting also allows the user to access statistical reports. We will leave the Access Rights options unchecked. These are used when working with multiple companies or with multiple currencies. The Preferences tab consists of the following options: Language and Timezone Odoo allows you to select the language for each user. Currently, Odoo supports more than 20 language translations. Specifying the Timezone field allows Odoo to coordinate the display of date and time on messages. Leaving Timezone blank for a user will sometimes lead to unpredictable behavior in the Odoo software. Make sure you specify a timezone when creating a user record. Email Messages and Notifications In Odoo 7, messaging became a central component of the Odoo system. In version 10, support has been improved and it is now even easier to communicate important sales information between colleagues. Therefore, determining the appropriate handling of email, and circumstances in which a user will receive email, is very important. 
The Email Messages and Notifications option lets you determine when you will receive email messages from notifications that come to your Odoo inbox. For our example, we have chosen All Messages. This is now the new default setting in Odoo 10. However, since we have not yet configured an email server, or if you have not configured an email server yourself, no emails will be sent or received at this stage. Let's review the user options that will be available in communicating by email. Never: Selecting Never suppresses all email messaging for the user. Naturally, this is the setting you will wish to use if you do not have an email server configured. This is also a useful option for users that simply want to use the built-in inbox inside Odoo to retrieve their messages. All Messages (discussions, emails, followed system notifications): This option sends an email notification for any action that would create an entry in your Odoo inbox. Unlike the other options, this action can include system notifications or other automated communications. Signature The Signature section allows you to customize the signature that will automatically be appended to Odoo-generated messages and emails. Manually setting the user password You may have noticed that there is no visible password field in the user record. That is because the default method is to email the user an account verification they can use to set their password. However, if you do not have an email server configured, there is an alternative method for setting the user password. After saving the user record, use the Change Password button at the top of the form. A form will then appear allowing you to set the password for the user. Now in Odoo 10, there is a far more visible button available at the top left of the form. Just click the Change Password button. Assigning a salesperson to a customer Now that we have set up our salesperson, it is time to assign the salesperson their first customer. 
Previously, no salesperson had been assigned to our one and only customer, Mike Smith. So let's go to the Sales menu and then click on Mike Smith to pull up his customer record and assign him Terry Ziegler as his salesperson. The following screenshot is of the customer screen opened to assign a salesperson: Here, we have set the sales person to Terry Zeigler. By assigning your customers a salesperson, you can then better organize your customers for reports and additional statistical analysis. Understanding Your Pipeline Prior to Odoo 10, the CRM application primarily was a simple collection of Leads and opportunities. While Odoo still uses both Leads and opportunities as part of the CRM application, the concept of a Pipeline now takes center stage. You use the Pipeline to organize your opportunities by what stage they are within your sales process. Click on Your Pipeline in the Sales menu to see the overall layout of the Pipeline screen: In the preceding Pipeline forms, one of the first things to notice is that there are default filters applied to the view. Up in the search box, you will see that there is a filter to limit the records in this view to the Direct Sales team as well as a My Opportunities filter. This effectively limits the records so you only see your opportunities from your primary sales team. Removing the My Opportunities filter will allow you to see opportunities from other salespeople in your organization. Creating new opportunity In Odoo 10, a potential sale is defined by creating a new opportunity. An opportunity allows you to begin collecting information about the scope and potential outcomes for a sale. These opportunities can be created from new Leads, or an opportunity can originate from an existing customer. For our real-world example, let's assume that Mike Smith has called and was so happy with his first order that he now wants to discuss using Silkworm for his local sports team. 
After a short conversation we decide to create an opportunity by clicking the Create button. You can also use the + buttons within any of the pipeline stages to create an opportunity that is set to that stage in the pipeline. In Odoo 10, the CRM application greatly simplified the form for entering a new opportunity. Instead of bringing up the entire opportunity form with all the fields you get a simple form that collects only the most important information. The following screenshot is of a new opportunity form: Opportunity Title The title of your opportunity can be anything you wish. It is naturally important to choose a subject that makes it easy to identify the opportunity in a list. This is the only field required to create an opportunity in Odoo 10. Customer This field is automatically populated if you create an opportunity from the customer form. You can, however, assign a different customer if you like. This is not a required field, so if you have an opportunity that you do not wish to associate with a customer, that is perfectly fine. For example, you may leave this field blank if you are attending a trade show and expect to have revenue, but do not yet have any specific customers to attribute to the opportunity. Expected revenue Here, you specify the amount of revenue you can expect from the opportunity if you are successful. Inside the full opportunity form there is a field in which you can specify the percentage likelihood that an opportunity will result in a sale. These values are useful in many statistical reports, although they are not required to create an opportunity. Increasingly, more reports look to expected revenue and percentage of opportunity completions. Therefore, depending on your reporting requirements you may wish to encourage sales people to set target goals for each opportunity to better track conversion. Rating Some opportunities are more important than others. 
You can choose none, one, two, or three stars to designate the relative importance of this opportunity. Introduction to sales stages At the top of the Kanban view, you can see the default stages that are provided by an Odoo CRM installation. In this case, we see New, Qualified, Proposition, and Won. As an opportunity moves between stages, the Kanban view will update to show you where each opportunity currently stands. Here, we can see because this Sports Team Project has just been entered in the New column. Viewing the details of an opportunity If you click the three lines at the top right of the Sports Team Project opportunity in the Kanban view, which appears when you hover the mouse over it, you will see a pop-up menu with your available options. The following screenshot shows the available actions on an opportunity: Actions you can take on an opportunity Selecting the Edit option takes you to the opportunity record and into edit mode for you to change any of the information. In addition, you can delete the record or archive the record so it will no longer appear in your pipeline by default. The color palette at the bottom lets you color code your opportunities in the Kanban view. The small stars on the opportunity card allow you to highlight opportunities for special consideration. You can also easily drag and drop the opportunity into other columns as you work through the various stages of the sale. Using Odoo's OpenChatter feature One of the biggest enhancements brought about in Odoo 7 and expanded on in later versions of Odoo was the new OpenChatter feature that provides social networking style communication to business documents and transactions. As we work our brand new opportunity, we will utilize the OpenChatter feature to demonstrate how to communicate details between team members and generate log entries to document our progress. The best thing about the OpenChatter feature is that it is available for nearly all business documents in Odoo. 
It also allows you to see a running set of logs of the transactions or operations that have affected the document. This means everything that applies here to the CRM application can also be used to communicate in sales and purchasing, or in communicating about a specific customer or vendor. Changing the status of an opportunity For our example, let's assume that we have prepared our proposal and made the presentation. Bring up the opportunity by using the right-click Menu in the Kanban view or going into the list view and clicking the opportunity in the list. It is time to update the status of our opportunity by clicking the Proposition arrow at the top of the form: Notice that you do not have to edit the record to change the status of the opportunity. At the bottom of the opportunity, you will now see a logged note generated by Odoo that documents the changing of the opportunity from a new opportunity to a proposition. The following screenshot is of OpenChatter displaying a changed stage for the opportunity: Notice how Odoo is logging the events automatically as they take place. Managing the opportunity With the proposal presented, let's take down some details from what we have learned that may help us later when we come back to this opportunity. One method of collecting this information could be to add the details to the Internal Notes field in the opportunity form. There is value, however, in using the OpenChatter feature in Odoo to document our new details. Most importantly, using OpenChatter to log notes gives you a running transcript with date and time stamps automatically generated. With the Generic Notes field, it can be very difficult to manage multiple entries. Another major advantage is that the OpenChatter feature can automatically send messages to team members' inboxes updating them on progress. let's see it in action! Click the Log an Internal note link to attach a note to our opportunity. 
The following screenshot is for creating a note:

The activity option is unique to the CRM application and will not appear in most documents. You can use the small icons at the bottom to add a smiley, attach a document, or open up a full-featured editor if you are creating a long note. The full-featured editor also allows you to save templates of messages/notes you may use frequently. Depending on your specific business requirements, this could be a great time saver. When you create a note, it is attached to the business document, but no message will be sent to followers. You can even attach a document to the note by using the Attach a File feature. After clicking the Log button, the note is saved and becomes part of the OpenChatter log for that document.

Following a business document

Odoo brings social networking concepts into your business communication. Fundamental to this implementation is that you can get automatic updates on a business document by following the document. Then, whenever there is a note, action, or message created that is related to a document you follow, you will receive a message in your Odoo inbox. In the bottom right-hand corner of the form, you are presented with the options for when you are notified and for adding or removing followers from the document. The following screenshot is of the OpenChatter follow options:

In this case, we can see that both Terry Zeigler and Administrator are set as followers for this opportunity. The Following checkbox at the top indicates that I am following this document. Using the Add Followers link, you can add additional users to follow the document. The items that followers are notified about can be viewed by clicking the arrow to the right of the Following button. This brings up a list of the actions that will generate notifications to followers:

The checkbox next to Discussions indicates that I should be notified of any discussions related to this document.
However, I would not be notified, for example, if the stage changed. When you send a message, by default the customer will become a follower of the document. Then, whenever the status of the document changes, the customer will receive an email. Test out all your processes before integrating with an email server. Modifying the stages of the sale We have seen that Odoo provides a default set of sales stages. Many times, however, you will want to customize the stages to best deliver an outstanding customer experience. Moving an opportunity through stages should trigger actions that create a relationship with the customer and demonstrate your understanding of their needs. A customer in the qualification stage of a sale will have much different needs and much different expectations than a customer that is in the negotiation phase. For our case study, there are sometimes printing jobs that are technically complex to accomplish. With different jerseys for a variety of teams, the final details need to go through a final technical review and approval process before the order can be entered and verified. From a business perspective, the goal is not just to document the stage of the sales cycle; the primary goal is to use this information to drive customer interactions and improve the overall customer experience. To add a stage to the sales process, bring up Your Pipeline and then click on the ADD NEW COLUMN area in the right of the form to bring up a little popup to enter the name for the new stage: After you have added the column to the sales process, you can use your mouse to drag and drop the columns into the order that you wish them to appear. We are now ready to begin the technical approval stage for this opportunity. Drag and drop the Sports Team Project opportunity over to the Technical Approval column in the Kanban view. 
The following screenshot is of the opportunities Kanban view after adding the technical approval stage: We now see the Technical Approval column in our Kanban view and have moved over the opportunity. You will also notice that any time you change the stage of an opportunity that there will be an entry that will be created in the OpenChatter section at the bottom of the form. In addition to the ability to drag and drop an opportunity into a new stage, you can also change the stage of an opportunity by going into the form view. Closing the sale After a lot of hard work, we have finally won the opportunity, and it is time to turn this opportunity into a quotation. At this point, Odoo makes it easy to take that opportunity and turn it into an actual quotation. Open up the opportunity and click the New Quotation tab at the top of the opportunity form: Unlike Odoo 8, which prompts for more information, in Odoo 10 you get taken to a new quote with the customer information already filled in: We installed the CRM module, created salespeople, and proceeded to develop a system to manage the sales process. To modify stages in the sales cycle and turn the opportunity into a quotation using Odoo 11, grab the latest edition  Working with Odoo 11 - Third Edition. ERP tool in focus: Odoo 11 Building Your First Odoo Application How to Scaffold a New module in Odoo 11
Understanding Ranges

Packt
28 Oct 2015
40 min read
In this article, Michael Parker, author of the book Learning D, explains how ranges have become a pervasive part of D since they were first introduced. It's possible to write D code and never need to create any custom ranges or algorithms yourself, but it helps tremendously to understand what they are, where they are used in Phobos, and how to get from a range to an array or another data structure. If you intend to use Phobos, you're going to run into them eventually. Unfortunately, some new D users have a difficult time understanding and using ranges. The aim of this article is to present ranges and functional styles in D from the ground up, so you can see they aren't some arcane secret understood only by a chosen few. Then, you can start writing idiomatic D early on in your journey. In this article, we lay the foundation with the basics of constructing and using ranges in two sections:

Ranges defined
Ranges in use

Ranges defined

In this section, we're going to explore what ranges are and see the concrete definitions of the different types of ranges recognized by Phobos. First, we'll dig into an example of the sort of problem ranges are intended to solve and, in the process, develop our own solution. This will help form an understanding of ranges from the ground up.

The problem

As part of an ongoing project, you've been asked to create a utility function, filterArray, which takes an array of any type and produces a new array containing all of the elements from the source array that pass a Boolean condition. The algorithm should be nondestructive, meaning it should not modify the source array at all. For example, given an array of integers as the input, filterArray could be used to produce a new array containing all of the even numbers from the source array. It should be immediately obvious that a function template can handle the requirement to support any type.
With a bit of thought and experimentation, a solution can soon be found to enable support for different Boolean expressions, perhaps a string mixin, a delegate, or both. After browsing the Phobos documentation for a bit, you come across a template that looks like it will help: std.functional.unaryFun. Its declaration is as follows:

```d
template unaryFun(alias fun, string parmName = "a");
```

The alias fun parameter can be a string representing an expression, or any callable type that accepts one argument. If it is the former, the name of the variable inside the expression should be parmName, which is "a" by default. The following snippet demonstrates this:

```d
int num = 10;
assert(unaryFun!("(a & 1) == 0")(num));
assert(unaryFun!("(x > 0)", "x")(num));
```

If fun is a callable type, then unaryFun is documented to alias itself to fun and the parmName parameter is ignored. The following snippet calls unaryFun first with a struct that implements opCall, then calls it again with a delegate literal:

```d
struct IsEven {
    bool opCall(int x) {
        return (x & 1) == 0;
    }
}
IsEven isEven;
assert(unaryFun!isEven(num));
assert(unaryFun!(x => x > 0)(num));
```

With this, you have everything you need to implement the utility function to spec:

```d
import std.functional;

T[] filterArray(alias predicate, T)(T[] source)
    if(is(typeof(unaryFun!predicate(source[0]))))
{
    T[] sink;
    foreach(t; source) {
        if(unaryFun!predicate(t))
            sink ~= t;
    }
    return sink;
}

unittest {
    auto ints = [1, 2, 3, 4, 5, 6, 7];
    auto even = ints.filterArray!(x => (x & 1) == 0)();
    assert(even == [2, 4, 6]);
}
```

The unittest verifies that the function works as expected. As a standalone implementation, it gets the job done and is quite likely good enough. But what if, later on down the road, someone decides to create more functions that perform specific operations on arrays in the same manner?
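As a concrete sketch of where that leads, consider a second operation written in the same eager style and chained after filterArray. The mapArray function below is my own illustration, not a function from the text or from Phobos:

```d
import std.functional : unaryFun;

// filterArray as defined above.
T[] filterArray(alias predicate, T)(T[] source)
    if(is(typeof(unaryFun!predicate(source[0]))))
{
    T[] sink;
    foreach(t; source) {
        if(unaryFun!predicate(t)) sink ~= t;
    }
    return sink;
}

// A hypothetical companion in the same eager style; like filterArray,
// it allocates and fills a brand new array on every call.
T[] mapArray(alias fn, T)(T[] source) {
    T[] sink;
    foreach(t; source) sink ~= unaryFun!fn(t);
    return sink;
}

unittest {
    auto ints = [1, 2, 3, 4, 5, 6];
    // Each link in the chain does its work immediately and returns
    // a freshly allocated array.
    auto result = ints.filterArray!(x => (x & 1) == 0)()
                      .mapArray!(x => x * 10)();
    assert(result == [20, 40, 60]);
}
```

The chain works, but two arrays were allocated to produce one result, and both operations ran whether or not the result was ever needed.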
The natural outcome of that is to use the output of one operation as the input for another, creating a chain of function calls to transform the original data.  The most obvious problem is that any such function that cannot perform its operation in place must allocate at least once every time it's called. This means, chain operations on a single array will end up allocating memory multiple times. This is not the sort of habit you want to get into in any language, especially in performance-critical code, but in D, you have to take the GC into account. Any given allocation could trigger a garbage collection cycle, so it's a good idea to program to the GC; don't be afraid of allocating, but do so only when necessary and keep it out of your inner loops.  In filterArray, the naïve appending can be improved upon, but the allocation can't be eliminated unless a second parameter is added to act as the sink. This allows the allocation strategy to be decided at the call site rather than by the function, but it leads to another problem. If all of the operations in a chain require a sink and the sink for one operation becomes the source for the next, then multiple arrays must be declared to act as sinks. This can quickly become unwieldy. Another potential issue is that filterArray is eager, meaning that every time the function is called, the filtering takes place immediately. If all of the functions in a chain are eager, it becomes quite difficult to get around the need for allocations or multiple sinks. The alternative, lazy functions, do not perform their work at the time they are called, but rather at some future point. Not only does this make it possible to put off the work until the result is actually needed (if at all), it also opens the door to reducing the amount of copying or allocating needed by operations in the chain. Everything could happen in one step at the end.  Finally, why should each operation be limited to arrays? 
Often, we want to execute an algorithm on the elements of a list, a set, or some other container, so why not support any collection of elements? By making each operation generic enough to work with any type of container, it's possible to build a library of reusable algorithms without the need to implement each algorithm for each type of container.

The solution

Now we're going to implement a more generic version of filterArray, called filter, which can work with any container type. It needs to avoid allocation and should also be lazy. To facilitate this, the function should work with a well-defined interface that abstracts the container away from the algorithm. By doing so, it's possible to implement multiple algorithms that understand the same interface. It also takes the decision on whether or not to allocate completely out of the algorithms. The interface of the abstraction need not be an actual interface type. Template constraints can be used to verify that a given type meets the requirements.

You might have heard of duck typing. It originates from the old saying, "If it looks like a duck, swims like a duck, and quacks like a duck, then it's probably a duck." The concept is that if a given object instance has the interface of a given type, then it's probably an instance of that type. D's template constraints and compile-time capabilities easily allow for duck typing.

The interface

In looking for inspiration to define the new interface, it's tempting to turn to other languages like Java and C++. On one hand, we want to iterate the container elements, which brings to mind the iterator implementations in other languages.
However, we also want to do a bit more than that, as demonstrated by the following chain of function calls:

container.getType.algorithm1.algorithm2.algorithm3.toContainer();

Conceptually, the instance returned by getType will be consumed by algorithm1, meaning that inside the function, it will be iterated to the point where it can produce no more elements. But then, algorithm1 should return an instance of the same type, which can iterate over the same container, and which will in turn be consumed by algorithm2. The process repeats for algorithm3. This implies that instances of the new type should be able to be instantiated independently of the container they represent.

Moreover, given that D supports slicing, the role of getType above could easily be played by opSlice. Iteration need not always begin with the first element of a container and end with the last; any range of elements should be supported. In fact, there's really no reason for an actual container instance to exist at all in some cases. Imagine a random number generator; we should be able to plug one into the preceding function chain. Just eliminate the container and replace getType with the generator instance. As long as it conforms to the interface we define, it doesn't matter that there is no concrete container instance backing it.

The short version is that we don't want to think solely in terms of iteration, as it's only part of the problem we're trying to solve. We want a type that not only supports iteration, of either an actual container or a conceptual one, but one that can also be instantiated independently of any container, knows both its beginning and ending boundaries, and, in order to allow for lazy algorithms, can be used to generate new instances that know how to iterate over the same elements. Considering these requirements, Iterator isn't a good fit as a name for the new type.
Rather than naming it for what it does or how it's used, it seems more appropriate to name it for what it represents. There's more than one possible name that fits, but we'll go with Range (as in, a range of elements).

That's it for the requirements and the type name. Now, moving on to the API. For any algorithm that needs to sequentially iterate a range of elements from beginning to end, three basic primitives are required:

There must be a way to determine whether or not any elements are available
There must be a means to access the next element in the sequence
There must be a way to advance the sequence so that another element can be made ready

Based on these requirements, there are several ways to approach naming the three primitives, but we'll just take a shortcut and use the same names used in D. The first primitive will be called empty and can be implemented either as a member function that returns bool or as a bool member variable. The second primitive will be called front, which again could be a member function or variable, and which returns T, the element type of the range. The third primitive can only be a member function and will be called popFront, as conceptually it removes the current front from the sequence to ready the next element.

A range for arrays

Wrapping an array in the Range interface is quite easy. It looks like this:

auto range(T)(T[] array) {
    struct ArrayRange(T) {
        private T[] _array;
        bool empty() @property { return _array.length == 0; }
        ref T front() { return _array[0]; }
        void popFront() { _array = _array[1 .. $]; }
    }
    return ArrayRange!T(array);
}

By implementing the range as a struct, there's no need to allocate GC memory for a new instance. The only member is a slice of the source array, which again avoids allocation. Look at the implementation of popFront. Rather than requiring a separate variable to track the current array index, it slices the first element out of _array so that the next element is always at index 0, consequently shortening the length of the slice by 1 so that after every item has been consumed, _array.length will be 0. This makes the implementation of both empty and front dead simple.

ArrayRange can be a Voldemort type because there is no need to declare its type in any algorithm it's passed to. As long as the algorithms are implemented as templates, the compiler can infer everything that needs to be known for them to work. Moreover, thanks to UFCS, it's possible to call this function as if it were an array property. Given an array called myArray, the following is valid:

auto range = myArray.range;

Next, we need a template to go in the other direction. This needs to allocate a new array, walk the range, and store the result of each call to front in the new array. Its implementation is as follows:

T[] array(T, R)(R range) @property {
    T[] ret;
    while(!range.empty) {
        ret ~= range.front;
        range.popFront();
    }
    return ret;
}

This can be called after any operation that produces any Range in order to get an array. If the range comes at the end of one or more lazy operations, this will cause all of them to execute simply through the calls to popFront (we'll see how shortly). In that case, no allocations happen except as needed in this function when elements are appended to ret. Again, the appending strategy here is naïve, so there's room for improvement in order to reduce the potential number of allocations.

Now it's time to implement an algorithm to make use of our new range interface.

The implementation of filter

The filter function isn't going to do any filtering at all. If that sounds counterintuitive, recall that we want the function to be lazy; all of the work should be delayed until it is actually needed.
The way to accomplish that is to wrap the input range in a custom range that has an internal implementation of the filtering algorithm. We'll call this wrapper FilteredRange. It will be a Voldemort type, local to the filter function. Before seeing the entire implementation, it will help to examine it in pieces, as there's a bit more to see here than with ArrayRange.

FilteredRange has only one member:

private R _source;

R is the type of the range that is passed to filter. The empty and front functions simply delegate to the source range, so we'll look at popFront next:

void popFront() {
    _source.popFront();
    skipNext();
}

This will always pop the front from the source range before running the filtering logic, which is implemented in the private helper function skipNext:

private void skipNext() {
    while(!_source.empty && !unaryFun!predicate(_source.front))
        _source.popFront();
}

This function tests the result of _source.front against the predicate. If it doesn't match, the loop moves on to the next element, repeating the process until either a match is found or the source range is empty. So, imagine you have an array arr of the values [1, 2, 3, 4]. Given what we've implemented so far, what would be the result of the following chain?

arr.range.filter!(x => (x & 1) == 0).front;

As mentioned previously, front delegates to _source.front. In this case, the source range is an instance of ArrayRange; its front returns _array[0]. Since popFront was never called at any point, the first value in the array was never tested against the predicate. Therefore, the return value is 1, a value which doesn't match the predicate. The first value returned by front should be 2, since it's the first even number in the array.
In order to make this behave as expected, FilteredRange needs to ensure the wrapped range is in a state such that either the first call to front will properly return a filtered value, or empty will return true, meaning there are no values in the source range that match the predicate. This is best done in the constructor:

this(R source) {
    _source = source;
    skipNext();
}

Calling skipNext in the constructor ensures that the first element of the source range is tested against the predicate; however, it does mean that our filter implementation isn't completely lazy. In the extreme case, where _source contains no values that match the predicate, it's actually going to be completely eager: the source elements will be consumed as soon as the range is instantiated. Not all algorithms will lend themselves to 100 percent laziness. No matter; what we have here is lazy enough.

Wrapped up inside the filter function, the whole thing looks like this:

import std.functional;
auto filter(alias predicate, R)(R source)
    if(is(typeof(unaryFun!predicate)))
{
    struct FilteredRange {
        private R _source;

        this(R source) {
            _source = source;
            skipNext();
        }

        bool empty() { return _source.empty; }
        auto ref front() { return _source.front; }

        void popFront() {
            _source.popFront();
            skipNext();
        }

        private void skipNext() {
            while(!_source.empty && !unaryFun!predicate(_source.front))
                _source.popFront();
        }
    }
    return FilteredRange(source);
}

It might be tempting to take the filtering logic out of the skipNext method and add it to front, which is another way to guarantee that it's performed on every element. Then no work would need to be done in the constructor and popFront would simply become a wrapper for _source.popFront. The problem with that approach is that front can potentially be called multiple times without calling popFront in between. Aside from the fact that it should return the same value each time, which can easily be accommodated, this still means the current element would be tested against the predicate on each call. That's unnecessary work. As a general rule, any work that needs to be done inside a range should happen as a result of calling popFront, leaving front to simply focus on returning the current element.

The test

With the implementation complete, it's time to put it through its paces. Here are a few test cases in a unittest block:

unittest {
    auto arr = [10, 13, 300, 42, 121, 20, 33, 45, 50, 109, 18];
    auto result = arr.range
        .filter!(x => x < 100)
        .filter!(x => (x & 1) == 0)
        .array!int();
    assert(result == [10, 42, 20, 50, 18]);

    arr = [1, 2, 3, 4, 5, 6];
    result = arr.range.filter!(x => (x & 1) == 0).array!int;
    assert(result == [2, 4, 6]);

    arr = [1, 3, 5, 7];
    auto r = arr.range.filter!(x => (x & 1) == 0);
    assert(r.empty);

    arr = [2, 4, 6, 8];
    result = arr.range.filter!(x => (x & 1) == 0).array!int;
    assert(result == arr);
}

Assuming all of this has been saved in a file called filter.d, the following will compile it for unit testing:

dmd -unittest -main filter

That should result in an executable called filter which, when executed, should print nothing to the screen, indicating a successful test run. Notice the test that calls empty directly on the returned range. Sometimes, we might not need to convert a range to a container at the end of the chain. For example, to print the results, it's quite reasonable to iterate a range directly. Why allocate when it isn't necessary?

The real ranges

The purpose of the preceding exercise was to get a feel for the motivation behind D ranges. We didn't develop a concrete type called Range, just an interface. D does the same, with a small set of interfaces defining ranges for different purposes.
The interface we developed corresponds exactly to the most basic kind of D range, called an input range, one of two foundational range interfaces in D (the upshot of that is that both ArrayRange and FilteredRange are valid input ranges, though, as we'll eventually see, there's no reason to use either outside of this article). There are also certain optional properties that ranges might have which, when present, some algorithms might take advantage of. We'll take a brief look at the range interfaces now, then see more details regarding their usage in the next section.

Input ranges

This foundational range is defined to be anything from which data can be sequentially read via the three primitives empty, front, and popFront. The first two should be treated as properties, meaning they can be variables or functions. This is important to keep in mind when implementing any generic range-based algorithm yourself; calls to these two primitives should be made without parentheses. The three higher-order range interfaces, which we'll see shortly, build upon the input range interface.

To reinforce a point made earlier, one general rule to live by when crafting input ranges is that consecutive calls to front should return the same value until popFront is called; popFront prepares an element to be returned and front returns it. Breaking this rule can lead to unexpected consequences when working with range-based algorithms, or even foreach.

Input ranges are somewhat special in that they are recognized by the compiler. The opApply operator overload enables iteration of a custom type with a foreach loop; an alternative is to provide an implementation of the input range primitives. When the compiler encounters a foreach loop, it first checks to see if the iterated instance is of a type that implements opApply. If not, it then checks for the input range interface and, if found, rewrites the loop. Given a range called range, take for example the following loop:

foreach(e; range) {
    ...
}

This is rewritten to something like this:

for(auto __r = range; !__r.empty; __r.popFront()) {
    auto e = __r.front;
    ...
}

This has implications. To demonstrate, let's use the ArrayRange from earlier:

auto ar = [1, 2, 3, 4, 5].range;
foreach(n; ar) {
    writeln(n);
}
if(!ar.empty) writeln(ar.front);

The last line prints 1. If you're surprised, look back up at the for loop that the compiler generates. ArrayRange is a struct, so when it's assigned to __r, a copy is generated. The slices inside, ar and __r, point to the same memory, but their .ptr and .length properties are distinct. As the length of the __r slice decreases, the length of the ar slice remains the same. If ArrayRange were a class instead of a struct, it would be consumed by the loop, as classes are reference types.

When implementing generic algorithms that loop over a source range, it's not a good idea to assume the original range will not be consumed by the loop. There are no guarantees about the internal implementation of a range; there could be struct-based ranges that are actually consumed in a foreach loop. Generic functions should always assume this is the case.

To test whether a given range type R is an input range:

import std.range : isInputRange;
static assert(isInputRange!R);

There are no special requirements on the return value of the front property. Elements can be returned by value or by reference, they can be qualified or unqualified, they can be inferred via auto, and so on. Any qualifiers, storage classes, or attributes that can be applied to functions and their return values can be used with any range function, though it might not always make sense to do so.

Forward ranges

The most basic of the higher-order ranges is the forward range. This is defined as an input range that allows its current point of iteration to be saved via a primitive appropriately named save. Effectively, the implementation should return a copy of the current state of the range.
For ranges that are struct types, it could be as simple as:

auto save() { return this; }

For ranges that are class types, it requires allocating a new instance:

auto save() { return new MyForwardRange(this); }

Forward ranges are useful for implementing algorithms that require a look ahead. For example, consider the case of searching a range for a pair of adjacent elements that pass an equality test:

auto saved = r.save;
if(!saved.empty) {
    for(saved.popFront(); !saved.empty; r.popFront(), saved.popFront()) {
        if(r.front == saved.front) return r;
    }
}
return saved;

Because this uses a for loop and not a foreach loop, the ranges are iterated directly and are going to be consumed. Before the loop begins, a copy of the current state of the range is made by calling r.save. Then, iteration begins over both the copy and the original. The original range is positioned at the first element, and the call to saved.popFront at the beginning of the loop statement positions the saved range at the second element. As the ranges are iterated in unison, the comparison is always made on adjacent elements. If a match is found, r is returned, meaning that the returned range is positioned at the first element of a matching pair. If no match is found, saved is returned; since it's one element ahead of r, it will have been consumed completely and its empty property will be true.

The preceding example is derived from a more generic implementation in Phobos, std.algorithm.findAdjacent. It can use any binary (two-argument) Boolean condition to test adjacent elements and is constrained to accept only forward ranges.

It's important to understand that calling save usually does not mean a deep copy, but it sometimes can. If we were to add a save function to the ArrayRange from earlier, we could simply return this; the array elements would not be copied. A class-based range, on the other hand, will usually perform a deep copy because it's a reference type.
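As a concrete illustration, here's a hedged sketch of adding save to the ArrayRange shown earlier. The struct is declared at module scope here so the snippet stands alone, rather than nested inside the range function as in the article's version:

```d
import std.range : isForwardRange;

struct ArrayRange(T) {
    private T[] _array;
    bool empty() @property { return _array.length == 0; }
    ref T front() { return _array[0]; }
    void popFront() { _array = _array[1 .. $]; }
    // A struct copy: the slice is duplicated, but the elements are shared.
    auto save() { return this; }
}

void main() {
    static assert(isForwardRange!(ArrayRange!int));
    auto r = ArrayRange!int([1, 2, 3]);
    auto saved = r.save;
    r.popFront();
    assert(r.front == 2);
    assert(saved.front == 1); // saved kept its original position
}
```

Because the range is a struct over a slice, save costs nothing beyond copying two words; no elements are duplicated.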
When implementing generic functions, you should never assume that the range does not require a deep copy. For example, given a range r:

auto saved = r;      // INCORRECT!!
auto saved = r.save; // Correct.

If r is a class, the first line is almost certainly going to result in incorrect behavior, as it certainly would in the preceding example loop.

To test whether a given range type R is a forward range:

import std.range : isForwardRange;
static assert(isForwardRange!R);

Bidirectional ranges

A bidirectional range is a forward range that includes the primitives back and popBack, allowing a range to be sequentially iterated in reverse. The former should be a property, the latter a function. Given a bidirectional range r, the following forms of iteration are possible:

foreach_reverse(e; r) writeln(e);
for(; !r.empty; r.popBack()) writeln(r.back);

Like its cousin foreach, the foreach_reverse loop will be rewritten into a for loop that does not consume the original range; the for loop shown here does consume it.

To test whether a given range type R is a bidirectional range:

import std.range : isBidirectionalRange;
static assert(isBidirectionalRange!R);

Random-access ranges

A random-access range is a bidirectional range that supports indexing and is required to provide a length primitive unless it's infinite (two topics we'll discuss shortly). For custom range types, this is achieved via the opIndex operator overload. It is assumed that r[n] returns a reference to the (n+1)th element of the range, just as when indexing an array.

To test whether a given range type R is a random-access range:

import std.range : isRandomAccessRange;
static assert(isRandomAccessRange!R);

Dynamic arrays can be treated as random-access ranges by importing std.array. This pulls functions into scope that accept dynamic arrays as parameters and allows them to pass all the isRandomAccessRange checks. This makes our ArrayRange from earlier obsolete.
Often, when you need a random-access range, it's sufficient just to use an array instead of creating a new range type. However, char and wchar arrays (string and wstring) are not considered random-access ranges, so they will not work with any algorithm that requires one.

Getting a random-access range from char[] and wchar[]

Recall that a single Unicode character can be composed of multiple elements in a char or wchar array, which is an aspect of strings that would seriously complicate any algorithm implementation that needs to directly index the array. To get around this, the thing to do in the general case is to convert char[] and wchar[] into dchar[]. This can be done with std.utf.toUTF32, which encodes UTF-8 and UTF-16 strings as UTF-32 strings. Alternatively, if you know you're only working with ASCII characters, you can use std.string.representation to get ubyte[] or ushort[] (on dstring, it returns uint[]).

Output ranges

The output range is the second foundational range type. It's defined to be anything that can be sequentially written to via the primitive put. Generally, it should be implemented to accept a single parameter, but the parameter could be a single element, an array of elements, a pointer to elements, or another data structure containing elements. When working with output ranges, never call the range's implementation of put directly; instead, use Phobos' utility function std.range.put. It will call the range's implementation internally, but it allows for a wider range of argument types. Given a range r and an element e, it would look like this:

import std.range : put;
put(r, e);

The benefit here is that if e is anything other than a single element, such as an array or another range, the global put does what is necessary to pull elements from it and put them into r one at a time.
With this, you can define and implement a simple output range that might look something like this:

struct MyOutputRange(T) {
    private T[] _elements;
    void put(T elem) { _elements ~= elem; }
}

Now, you need not worry about calling put in a loop, or overloading it to accept collections of T. For example, let's have a look at the following code:

MyOutputRange!int range;
auto nums = [11, 22, 33, 44, 55];
import std.range : put;
put(range, nums);

Note that using UFCS here will cause compilation to fail, as the compiler will attempt to call MyOutputRange.put directly, rather than the utility function. However, it's fine to use UFCS when the first parameter is a dynamic array. This allows arrays to pass the isOutputRange predicate.

To test whether a given range type R is an output range:

import std.range : isOutputRange;
static assert(isOutputRange!(R, E));

Here, E is the type of element accepted by R.put.

Optional range primitives

In addition to the five primary range types, some algorithms in Phobos are designed to look for optional primitives that can be used as an optimization or, in some cases, a requirement. There are predicate templates in std.range that allow the same options to be used outside of Phobos.

hasLength

Ranges that expose a length property can reduce the amount of work needed to determine the number of elements they contain. A great example is the std.range.walkLength function, which will determine and return the length of any range, whether it has a length primitive or not. Given a range that satisfies the std.range.hasLength predicate, the operation becomes a call to the length property; otherwise, the range must be iterated until it is consumed, incrementing a variable every time popFront is called. Generally, length is expected to be an O(1) operation. If any given implementation cannot meet that expectation, it should be clearly documented as such. For non-infinite random-access ranges, length is a requirement. For all others, it's optional.
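To see the two paths in practice, here's a small sketch comparing walkLength on a range without a length primitive against a dynamic array that has one. The Counter range is a hypothetical example, not part of the article's code:

```d
import std.range : walkLength, hasLength;

// A hypothetical input range over 0 .. limit with no length primitive.
struct Counter {
    uint n, limit;
    bool empty() { return n == limit; }
    uint front() { return n; }
    void popFront() { ++n; }
}

void main() {
    static assert(!hasLength!Counter);
    // Counted by popping elements one at a time: O(n).
    assert(Counter(0, 5).walkLength == 5);

    int[] arr = [1, 2, 3];
    static assert(hasLength!(int[]));
    // Answered via the length property: O(1).
    assert(arr.walkLength == 3);
}
```

The caller's code is identical in both cases; hasLength only changes how much work walkLength does internally.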
isInfinite

An input range whose empty property is implemented as a compile-time value set to false is considered an infinite range. For example, let's have a look at the following code:

struct IR {
    private uint _number;
    enum empty = false;
    auto front() { return _number; }
    void popFront() { ++_number; }
}

Here, empty is a manifest constant, but it could alternatively be implemented as follows:

static immutable empty = false;

The predicate template std.range.isInfinite can be used to identify infinite ranges. Any range that is always going to return false from empty should be implemented to pass isInfinite. Some functions and wrapper ranges (such as the FilteredRange we implemented earlier) might check isInfinite and customize an algorithm's behavior when it's true. Simply returning false from an empty function will break this, potentially leading to infinite loops or other undesired behavior.

Other options

There are a handful of other optional primitives and behaviors, as follows:

hasSlicing: This returns true for any forward range that supports slicing. There are a set of requirements specified by the documentation for finite versus infinite ranges and whether opDollar is implemented.
hasMobileElements: This is true for any input range whose elements can be moved around in memory (as opposed to copied) via the primitives moveFront, moveBack, and moveAt.
hasSwappableElements: This returns true if a range supports swapping elements through its interface. The requirements are different depending on the range type.
hasAssignableElements: This returns true if elements are assignable through range primitives such as front, back, or opIndex.

At http://dlang.org/phobos/std_range_primitives.html, you can find specific documentation for all of these tests, including any special requirements that must be implemented by a range type to satisfy them.
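As a brief sketch of an infinite range in action (with the IR struct from the isInfinite discussion repeated so the snippet stands alone), std.range.take can produce a finite range from it:

```d
import std.range : isInfinite, take;
import std.algorithm : equal;

struct IR {
    private uint _number;
    enum empty = false; // compile-time false: provably never-ending
    auto front() { return _number; }
    void popFront() { ++_number; }
}

void main() {
    static assert(isInfinite!IR);
    // take yields a finite range over the first four elements.
    assert(IR().take(4).equal([0u, 1u, 2u, 3u]));
}
```

If empty were a runtime function returning false, the range would still never end, but the compiler could no longer prove it, and isInfinite-aware code would treat it as finite.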
Ranges in use

The key concept to understand about ranges in the general case is that, unless they are infinite, they are consumable. In idiomatic usage, they aren't intended to be kept around, adding and removing elements to and from them as if they were some sort of container. A range is generally created only when needed, passed to an algorithm as input, then ultimately consumed, often at the end of a chain of algorithms. Even forward ranges and output ranges, with their save and put primitives, usually aren't intended to live long beyond an algorithm. That's not to say it's forbidden to keep a range around; some might even be designed for long life. For example, the random number generators in std.random are all ranges that are intended to be reused. However, idiomatic usage in D generally means lazy, fire-and-forget ranges that allow algorithms to operate on data from any source.

For most programs, the need to deal with ranges directly should be rare; most code will be passing ranges to algorithms, then either converting the result to a container or iterating it with a foreach loop. Only when implementing custom containers and range-based algorithms is it necessary to implement a range or call a range interface directly. Still, understanding what's going on under the hood helps in understanding the algorithms in Phobos, even if you never need to implement a range or algorithm yourself. That's the focus of the remainder of this article.

Custom ranges

When implementing custom ranges, some thought should be given to the primitives that need to be supported and how to implement them. Since arrays support a number of primitives out of the box, it might be tempting to return a slice from a custom type, rather than a struct wrapping an array or something else. While that might be desirable in some cases, keep in mind that a slice is also an output range and has assignable elements (unless it's qualified as const or immutable, but those can be cast away).
In many cases, what's really wanted is an input range that can never be modified; one that allows iteration and prevents unwanted allocations. A custom range should be as lightweight as possible. If a container uses an array or pointer internally, the range should operate on a slice of the array, or a copy of the pointer, rather than a copy of the data. This is especially true for the save primitive of a forward range; it could be called more than once in a chain of algorithms, so an implementation that requires deep copying would be extremely suboptimal (not to mention problematic for a range that requires ref return values from front).

Now we're going to implement two actual ranges that demonstrate two different scenarios. One is intended to be a one-off range used to iterate a container, and one is suited to sticking around for as long as needed. Both can be used with any of the algorithms and range operations in Phobos.

Getting a range from a stack

Here's a barebones, simple stack implementation with the common operations push, pop, top, and isEmpty (named to avoid confusion with the input range interface). It uses an array to store its elements, appending them in the push operation and decreasing the array length in the pop operation. The top of the stack is always _array[$-1]:

struct Stack(T) {
    private T[] _array;

    void push(T element) {
        _array ~= element;
    }

    void pop() {
        assert(!isEmpty);
        _array.length -= 1;
    }

    ref T top() {
        assert(!isEmpty);
        return _array[$-1];
    }

    bool isEmpty() { return _array.length == 0; }
}

Rather than adding an opApply to iterate a stack directly, we want to create a range to do the job for us so that we can use it with all of those algorithms. Additionally, we don't want the stack to be modified through the range interface, so we should declare a new range type internally. That might look like this:

private struct Range {
    T[] _elements;
    bool empty() { return _elements.length == 0; }
    T front() { return _elements[$-1]; }
    void popFront() { _elements.length -= 1; }
}

Add this anywhere you'd like inside the Stack declaration. Note the implementation of popFront: effectively, this range will iterate the elements backwards. Since the end of the array is the top of the stack, that means it's iterating the stack from the top to the bottom. We could also add back and popBack primitives that iterate from the bottom to the top, but we'd also have to add a save primitive, since bidirectional ranges must also be forward ranges.

Now, all we need is a function to return a Range instance:

auto elements() { return Range(_array); }

Again, add this anywhere inside the Stack declaration. A real implementation might also add the ability to get a range instance from slicing a stack. Now, test it out:

Stack!int stack;
foreach(i; 0..10)
    stack.push(i);

writeln("Iterating...");
foreach(i; stack.elements)
    writeln(i);

stack.pop();
stack.pop();
writeln("Iterating...");
foreach(i; stack.elements)
    writeln(i);

One of the great side effects of this sort of range implementation is that you can modify the container behind the range's back and the range doesn't care:

foreach(i; stack.elements) {
    stack.pop();
    writeln(i);
}
writeln(stack.top);

This will still print exactly what was in the stack at the time the range was created, but the writeln outside the loop will cause an assertion failure because the stack will be empty by then. Of course, it's still possible to implement a container that can cause its ranges not just to become stale, but to become unstable and lead to an array bounds error or an access violation or some such. However, D's slices used in conjunction with structs give a good deal of flexibility.

A name generator range

Imagine that we're working on a game and need to generate fictional names.
For this example, let's say it's a music group simulator and the names are those of group members. We'll need a data structure to hold the list of possible names. To keep the example simple, we'll implement one that holds both first and last names:

struct NameList {
private:
    string[] _firstNames;
    string[] _lastNames;

    struct Generator {
        private string[] _first;
        private string[] _last;
        private string _next;
        enum empty = false;

        this(string[] first, string[] last) {
            _first = first;
            _last = last;
            popFront();
        }

        string front() { return _next; }

        void popFront() {
            import std.random : uniform;
            auto firstIdx = uniform(0, _first.length);
            auto lastIdx = uniform(0, _last.length);
            _next = _first[firstIdx] ~ " " ~ _last[lastIdx];
        }
    }

public:
    auto generator() {
        return Generator(_firstNames, _lastNames);
    }
}

The custom range is the nested struct called Generator. It stores two slices, _first and _last, which are both initialized in its only constructor. It also has a field called _next, which we'll come back to in a minute. The goal of the range is to provide an endless stream of randomly generated names, which means it doesn't make sense for its empty property to ever return true. As such, it is marked as an infinite range by the manifest constant implementation of empty that is set to false.

This range has a constructor because it needs to do a little work to prepare itself before front is called for the first time. All of the work is done in popFront, which the constructor calls after the member variables are set up. Inside popFront, you can see that we're using the std.random.uniform function. By default, this function uses a global random number generator and returns a value in the range specified by the parameters, in this case 0 and the length of each array. The first parameter is inclusive and the second is exclusive.
Two random numbers are generated, one for each array, and then used to combine a first name and a last name to store in the _next member, which is the value returned when front is called. Remember, consecutive calls to front without any calls to popFront should always return the same value. std.random.uniform can be configured to use any instance of one of the random number generator implementations in Phobos. It can also be configured to treat the bounds differently. For example, both could be inclusive, exclusive, or the reverse of the default. See the documentation at http://dlang.org/phobos/std_random.html for details. The generator property of NameList returns an instance of Generator. Presumably, the names in a NameList would be loaded from a file on disk, or a database, or perhaps even imported at compile-time. It's perfectly fine to keep a single Generator instance handy for the life of the program as implemented. However, if the NameList instance backing the range supported reloading or appending, not all changes would be reflected in the range. In that scenario, it's better to go through generator every time new names need to be generated. Now, let's see how our custom range might be used:

    auto nameList = NameList(
        ["George", "John", "Paul", "Ringo", "Bob", "Jimi", "Waylon",
         "Willie", "Johnny", "Kris", "Frank", "Dean", "Anne", "Nancy",
         "Joan", "Lita", "Janice", "Pat", "Dionne", "Whitney", "Donna",
         "Diana"],
        ["Harrison", "Jones", "Lennon", "Denver", "McCartney", "Simon",
         "Starr", "Marley", "Dylan", "Hendrix", "Jennings", "Nelson",
         "Cash", "Mathis", "Kristofferson", "Sinatra", "Martin", "Wilson",
         "Jett", "Baez", "Ford", "Joplin", "Benatar", "Boone", "Warwick",
         "Houston", "Sommers", "Ross"]
    );

    import std.range : take;
    auto names = nameList.generator.take(4);

    writeln("These artists want to form a new band:");
    foreach(artist; names) writeln(artist);

First up, we initialize a NameList instance with two array literals, one of first names and one of last names.
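As an aside, the pattern of an infinite, lazy generator bounded by take has a direct analogue in Python's generators and itertools.islice. The following is a conceptual sketch of the same idea, not the book's D code; the function name and the seeding are invented for the example:

```python
import itertools
import random

def name_generator(first_names, last_names):
    """Infinite, lazy stream of random 'First Last' names --
    a rough analogue of the D Generator range above."""
    while True:
        yield random.choice(first_names) + " " + random.choice(last_names)

random.seed(42)  # seeded only to make runs repeatable for the example
gen = name_generator(["George", "John", "Paul", "Ringo"],
                     ["Harrison", "Lennon", "McCartney", "Starr"])

# itertools.islice plays the role of std.range.take: it lazily limits
# the infinite stream to a fixed number of elements.
names = list(itertools.islice(gen, 4))
print(len(names))  # 4
```

As in D, nothing is generated until the stream is actually consumed, and the infinite source is safe as long as every consumer is bounded.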
Next, the line to focus on is the one where the range is used: we call nameList.generator and then, using UFCS, pass the returned Generator instance to std.range.take. This function creates a new lazy range containing a number of elements, four in this case, from the source range. In other words, the result is the equivalent of calling front and popFront four times on the range returned from nameList.generator, but since it's lazy, the popping doesn't occur until the foreach loop. That loop produces four randomly generated names that are each written to standard output. One iteration yielded the following names for me:

    These artists want to form a new band:
    Dionne Wilson
    Johnny Starr
    Ringo Sinatra
    Dean Kristofferson

Other considerations

The Generator range is infinite, so it doesn't need length. There should never be a need to index it, iterate it in reverse, or assign any values to it. It has exactly the interface it needs. But it's not always so obvious where to draw the line when implementing a custom range. Consider the interface for a range from a queue data structure. A basic queue implementation allows two operations to add and remove items: enqueue and dequeue (or push and pop if you prefer). It provides the self-describing properties empty and length. What sort of interface should a range from a queue implement? An input range with a length property is perhaps the most obvious, reflecting the interface of the queue itself. Would it make sense to add a save property? Should it also be a bidirectional range? What about indexing? Should the range be random-access? There are queue implementations out there in different languages that allow indexing, either through an operator overload or a function such as getElementAt. Does that make sense? Maybe. More importantly, if a queue doesn't allow indexing, does it make sense for a range produced from that queue to allow it? What about slicing? Or assignable elements?
For our queue type at least, there are no clear-cut answers to these questions. A variety of factors come into play when choosing which range primitives to implement, including the internal data structure used to implement the queue, the complexity requirements of the primitives involved (indexing should be an O(1) operation), whether the queue was implemented to meet a specific need or is a more general-purpose data structure, and so on. A good rule of thumb is that if a range can be made a forward range, then it should be.

Custom algorithms

When implementing custom, range-based algorithms, it's not enough to just drop an input range interface onto the returned range type and be done with it. Some consideration needs to be given to the type of range used as input to the function and how its interface should affect the interface of the returned range. Consider the FilteredRange we implemented earlier, which provides the minimal input range interface. Given that it's a wrapper range, what happens when the source range is an infinite range? Let's look at it step by step. First, an infinite range is passed in to filter. Next, it's wrapped up in a FilteredRange instance that's returned from the function. The returned range is going to be iterated at some point, either directly by the caller or somewhere in a chain of algorithms. There's one problem, though: with a source range that's infinite, the FilteredRange instance can never be fully consumed. Because its empty property simply wraps that of the source range, it's always going to return false if the source range is infinite. However, since FilteredRange does not implement empty as a compile-time constant, it will never match the isInfinite predicate. This will cause any algorithm that makes that check to assume it's dealing with a finite range and, if iterating it, enter into an infinite loop. Imagine trying to track down that bug.
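For intuition, the same lazy-filtering situation can be sketched with Python generators, which are naturally lazy (a hypothetical analogue, not the book's D code): filtering an infinite stream is fine as long as every consumer stays bounded, and an unbounded consumer is exactly the infinite-loop bug described above.

```python
import itertools

def naturals():
    """Infinite range analogue: yields 0, 1, 2, ... forever."""
    n = 0
    while True:
        yield n
        n += 1

# A generator expression is lazy, like FilteredRange wrapping a source;
# nothing is evaluated here yet.
evens = (x for x in naturals() if x % 2 == 0)

# Bounded consumption terminates. By contrast, list(evens) on its own
# would never return -- the same failure mode as an algorithm that
# assumes a wrapper over an infinite range is finite.
first_five = list(itertools.islice(evens, 5))
print(first_five)  # [0, 2, 4, 6, 8]
```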
One option is to prohibit infinite ranges with a template constraint, but that's too restrictive. The better way around this potential problem is to check the source range against the isInfinite predicate inside the FilteredRange implementation. Then, the appropriate form of the empty primitive of FilteredRange can be configured with conditional compilation:

    import std.range : isInfinite;

    static if(isInfinite!T)
        enum empty = false;
    else
        bool empty() { return _source.empty; }

With this, FilteredRange will satisfy the isInfinite predicate when it wraps an infinite range, avoiding the infinite loop bug. Another good rule of thumb is that a wrapper range should implement as many of the primitives provided by the source range as it reasonably can. If the range returned by a function has fewer primitives than the one that went in, it is usable with fewer algorithms. But not all ranges can accommodate every primitive. Take FilteredRange as an example again. It could be configured to support the bidirectional interface, but that would have a bit of a performance impact, as the constructor would have to find the last element in the source range that satisfies the predicate in addition to finding the first, so that both front and back are primed to return the correct values. Rather than using conditional compilation, std.algorithm provides two functions, filter and filterBidirectional, so that users must explicitly choose to use the latter version. A bidirectional range passed to filter will produce a forward range, but the latter maintains the interface. The random-access interface, on the other hand, makes no sense on FilteredRange. Any value taken from the range must satisfy the predicate, but if users can randomly index the range, they could quite easily get values that don't satisfy the predicate. It could work if the range was made eager rather than lazy.
In that case, it would allocate new storage and copy all the elements from the source that satisfy the predicate, but that defeats the purpose of using lazy ranges in the first place.

Summary

In this article, we've taken an introductory look at ranges in D and how to implement them in containers and algorithms. For more information on ranges and their primitives and traits, see the documentation at http://dlang.org/phobos/std_range.html.
Understanding Result Type in Swift 5 with Daniel Steinberg

Sugandha Lahoti
16 Dec 2019
4 min read
One of the first things many programmers add to their Swift projects is a Result type. From Swift 5 onwards, Swift includes an official Result type. In his talk at iOS Conf SG 2019, Daniel Steinberg explained why developers need a Result type, how and when to use it, and what map and flatMap bring to Result. Swift 5, released in March this year, hosts a number of key features such as concurrency, generics, and memory management. If you want to learn and master Swift 5, you may like to go through Mastering Swift 5, a book by Packt Publishing. Inside this book, you'll find the key features of Swift 5 easily explained with complete sets of examples.

Handle errors in Swift 5 easily with Result type

The Result type gives a simple, clear way of handling errors in complex code such as asynchronous APIs. Daniel describes the Result type as a hybrid of optionals and errors. He says, "We've used it like optionals but we've got the power of errors: we know what went wrong and we can pull that error out at any time that we need it. The idea was we have one return type whether we succeeded or failed. We get a record of our first error and we are able to keep going if there are no errors."

In Swift 5, the Result type is implemented as an enum that has two cases: success and failure. Both are implemented using generics so they can have an associated value of your choosing, but failure must be something that conforms to Swift's Error type. Thanks to an addition to the Standard Library, the Error protocol now conforms to itself, which makes working with errors easier.

Image taken from Daniel's presentation

The Result type has four other methods, namely map(), flatMap(), mapError(), and flatMapError(). These methods enable us to do many other kinds of transformations using inline closures and functions. The map() method looks inside the Result and transforms the success value into a different kind of value using a specified closure.
However, if it finds a failure instead, it just uses that value directly and ignores the transformation. Basically, map enables the automatic transformation of a value through a closure, but only in the case of success; otherwise, the Result is left unmodified. flatMap() returns a new result, mapping any success value using the given transformation and unwrapping the produced result. Daniel says, "If I need recursion I'm often reaching for flat map." He adds, "Things that can't fail use map() and things that can fail use flatmap()."

mapError(_:) returns a new result, mapping any failure value using the given transformation, and flatMapError(_:) returns a new result, mapping any failure value using the given transformation and unwrapping the produced result. flatMap() (and flatMapError()) is useful when you want to transform your value (or error) using a closure that itself returns a Result, to handle the case when the transformation can fail.

Using a Result type can be a great way to reduce ambiguity when dealing with values and results of asynchronous operations. By adding convenience APIs using extensions, we can also reduce boilerplate and make it easier to perform common operations when working with results, all while retaining full type safety. You can watch Daniel Steinberg's full video on YouTube, where he explains Result type with detailed code examples and points out common mistakes. If you want to learn more about all the new features of the Swift 5 programming language, then check out our book, Mastering Swift 5 by Jon Hoffman.
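The map/flatMap semantics described above carry over to other languages as well. Below is a small Python sketch of a Result-like type, invented for illustration and not Swift's actual implementation, showing that map transforms only the success case, while flat_map is for transformations that can themselves fail:

```python
class Result:
    """Minimal success/failure container mirroring Swift's Result enum."""
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    @classmethod
    def success(cls, value):
        return cls(value=value)

    @classmethod
    def failure(cls, error):
        return cls(error=error)

    def map(self, fn):
        # Transform the success value; pass a failure through untouched.
        return Result.success(fn(self.value)) if self.error is None else self

    def flat_map(self, fn):
        # fn itself returns a Result, so a failing transform propagates.
        return fn(self.value) if self.error is None else self

def parse_int(s):
    """A transformation that can fail, so it returns a Result."""
    try:
        return Result.success(int(s))
    except ValueError as exc:
        return Result.failure(exc)

doubled = parse_int("21").map(lambda n: n * 2)    # success: 42
failed = parse_int("oops").map(lambda n: n * 2)   # failure passes through
chained = Result.success("7").flat_map(parse_int) # success: 7
```

The design choice matches the talk's rule of thumb: transformations that cannot fail go through map, while those that can fail return a Result themselves and go through flat_map.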

How to set up the Scala Plugin in IntelliJ IDE [Tutorial]

Pavan Ramchandani
26 Jun 2018
2 min read
The Scala Plugin turns a normal IntelliJ IDEA installation into a convenient Scala development environment. In this article, we will discuss how to set up the Scala Plugin for the IntelliJ IDEA IDE. If you do not have IntelliJ IDEA, you can download it from here. By default, IntelliJ IDEA does not come with Scala features. The Scala Plugin adds them, which means we can create Scala/Play projects, Scala applications, Scala worksheets, and more. The Scala Plugin covers the following technologies:

Scala
Play Framework
SBT
Scala.js

It supports three popular OS environments: Windows, Mac, and Linux.

Setting up the Scala Plugin for the IntelliJ IDE

Perform the following steps to install the Scala Plugin for the IntelliJ IDE to develop our Scala-based projects:

Open the IntelliJ IDE, go to Configure at the bottom right, and click on the Plugins option available in the drop-down, as shown here: This opens the Plugins window, as shown here: Now click on Install JetBrains plugins, as shown in the preceding screenshot. Next, type the word Scala in the search bar to see the Scala Plugin, as shown here: Click on the Install button to install the Scala Plugin for IntelliJ IDEA. Now restart IntelliJ IDEA to enable the Scala Plugin features. After we re-open IntelliJ IDEA, if we try to access the File | New Project option, we will see a Scala option in the New Project window, as shown in the following screenshot, which lets us create new Scala or Play Framework-based SBT projects: We can see the Play Framework option only in the IntelliJ IDEA Ultimate Edition. As we are using CE (Community Edition), we cannot see that option. It's now time to start Scala/Play application development using the IntelliJ IDE. To summarize, we got an understanding of the Scala Plugin and covered the installation steps for the Scala Plugin for IntelliJ. To learn more about taking a reactive programming approach with Scala, please refer to the book Scala Reactive Programming.

Qt Style Sheets

Packt
29 Sep 2016
26 min read
In this article by Lee Zhi Eng, author of the book Qt5 C++ GUI Programming Cookbook, we will see how Qt allows us to easily design our program's user interface through a method which most people are familiar with. Qt not only provides us with a powerful user interface toolkit called Qt Designer, which enables us to design our user interface without writing a single line of code, but it also allows advanced users to customize their user interface components through a simple scripting language called Qt Style Sheets.

In this article, we will cover the following recipes:

Using style sheets with Qt Designer
Basic style sheet customization
Creating a login screen using style sheet

Using style sheets with Qt Designer

In this example, we will learn how to change the look and feel of our program and make it look more professional by using style sheets and resources. Qt allows you to decorate your GUIs (Graphical User Interfaces) using a style sheet language called Qt Style Sheets, which is very similar to CSS (Cascading Style Sheets) used by web designers to decorate their websites.

How to do it…

The first thing we need to do is open up Qt Creator and create a new project. If this is the first time you have used Qt Creator, you can either click the big button that says New Project with a + sign, or simply go to File | New File or New Project. Then, select Application under the Project window and select Qt Widgets Application. After that, click the Choose button at the bottom. A window will then pop out and ask you to insert the project name and its location. Once you're done with that, click Next several times and click the Finish button to create the project. We will just stick to all the default settings for now.
Once the project is created, the first thing you will see is the panel with tons of big icons on the left side of the window which is called the Mode Selector panel; we will discuss this more later in the How it works section. Then, you will also see all your source files listed on the Side Bar panel which is located right next to the Mode Selector panel. This is where you can select which file you want to edit, which, in this case, is mainwindow.ui because we are about to start designing the program's UI! Double click mainwindow.ui and you will see an entirely different interface appearing out of nowhere. Qt Creator actually helped you to switch from the script editor to the UI editor (Qt Designer) because it detected .ui extension on the file you're trying to open. You will also notice that the highlighted button on the Mode Selector panel has changed from the Edit button to the Design button. You can switch back to the script editor or change to any other tools by clicking one of the buttons located at the upper half of the Mode Selector panel. Let's go back to the Qt Designer and look at the mainwindow.ui file. This is basically the main window of our program (as the file name implies) and it's empty by default, without any widget on it. You can try to compile and run the program by pressing the Run button (green arrow button) at the bottom of the Mode Selector panel, and you will see an empty window popping out once the compilation is complete: Now, let's add a push button to our program's UI by clicking on the Push Button item in the widget box (under the Buttons category) and drag it to your main window in the form editor. Then, keep the push button selected and now you will see all the properties of this button inside the property editor on the right side of your window. Scroll down to somewhere around the middle and look for a property called styleSheet. 
This is where you apply styles to your widget, which may or may not inherit to its children or grandchildren recursively depending on how you set your style sheet. Alternatively, you can also right click on any widget in your UI at the form editor and select Change Style Sheet from the pop up menu. You can click on the input field of the styleSheet property to directly write the style sheet code, or click on the … button besides the input field to open up the Edit Style Sheet window which has a bigger space for writing longer style sheet code. At the top of the window you can find several buttons such as Add Resource, Add Gradient, Add Color, and Add Font that can help you to kick-start your coding if you don't remember the properties' names. Let's try to do some simple styling with the Edit Style Sheet window. Click Add Color and choose color. Pick a random color from the color picker window, let's say, a pure red color. Then click Ok. Now, you will see a line of code has been added to the text field on the Edit Style Sheet window, which in my case is as follows: color: rgb(255, 0, 0); Click the Ok button and now you will see the text on your push button has changed to a red color. How it works Let's take a bit of time to get ourselves familiar with Qt Designer's interface before we start learning how to design our own UI: Menu bar: The menu bar houses application-specific menus which provide easy access to essential functions such as create new projects, save files, undo, redo, copy, paste, and so on. It also allows you to access development tools that come with Qt Creator, such as the compiler, debugger, profiler, and so on. Widget box: This is where you can find all the different types of widgets provided by Qt Designer. You can add a widget to your program's UI by clicking one of the widgets from the widget box and dragging it to the form editor. Mode selector: The mode selector is a side panel that places shortcut buttons for easy access to different tools. 
You can quickly switch between the script editor and form editor by clicking the Edit or Design buttons on the mode selector panel which is very useful for multitasking. You can also easily navigate to the debugger and profiler tools in the same speed and manner. Build shortcuts: The build shortcuts are located at the bottom of the mode selector panel. You can build, run, and debug your project easily by pressing the shortcut buttons here. Form editor: Form editor is where you edit your program's UI. You can add different widgets to your program by selecting a widget from the widget box and dragging it to the form editor. Form toolbar: From here, you can quickly select a different form to edit, click the drop down box located above the widget box and select the file you want to open with Qt Designer. Beside the drop down box are buttons for switching between different modes for the form editor and also buttons for changing the layout of your UI. Object inspector: The object inspector lists out all the widgets within your current .ui file. All the widgets are arranged according to its parent-child relationship in the hierarchy. You can select a widget from the object inspector to display its properties in the property editor. Property editor: Property editor will display all the properties of the widget you selected either from the object inspector window or the form editor window. Action Editor and Signals & Slots Editor: This window contains two editors – Action Editor and the Signals & Slots Editor which can be accessed from the tabs below the window. The action editor is where you create actions that can be added to a menu bar or toolbar in your program's UI. Output panes: Output panes consist of several different windows that display information and output messages related to script compilation and debugging. 
You can switch between different output panes by pressing the buttons that carry a number before them, such as 1-Issues, 2-Search Results, 3-Application Output, and so on. There's more… In the previous section, we discussed how to apply style sheets to Qt widgets through C++ coding. Although that method works really well, most of the time the person who is in charge of designing the program's UI is not the programmer himself, but a UI designer who specializes in designing user-friendly UI. In this case, it's better to let the UI designer design the program's layout and style sheet with a different tool and not mess around with the code. Qt provides an all-in-one editor called the Qt Creator. Qt Creator consists of several different tools, such as script editor, compiler, debugger, profiler, and UI editor. The UI editor, which is also called the Qt Designer, is the perfect tool for designers to design their program's UI without writing any code. This is because Qt Designer adopted the What-You-See-Is-What-You-Get approach by providing accurate visual representation of the final result, which means whatever you design with Qt Designer will turn out exactly the same when the program is compiled and run. The similarities between Qt Style Sheets and CSS are as follows: CSS: h1 { color: red; background-color: white;} Qt Style Sheets: QLineEdit { color: red; background-color: white;} As you can see, both of them contain a selector and a declaration block. Each declaration contains a property and a value, separated by a colon. In Qt, a style sheet can be applied to a single widget by calling QObject::setStyleSheet() function in C++ code. For example: myPushButton->setStyleSheet("color : blue"); The preceding code will turn the text of a button with the variable name myPushButton to a blue color. You can also achieve the same result by writing the declaration in the style sheet property field in Qt Designer. We will discuss more about Qt Designer in the next section. 
Qt Style Sheets also supports all the different types of selectors defined in CSS2 standard, including Universal selector, Type selector, Class selector, ID selector, and so on, which allows us to apply styling to a very specific individual or group of widgets. For instance, if we want to change the background color of a specific line edit widget with the object name usernameEdit, we can do this by using an ID selector to refer to it: QLineEdit#usernameEdit { background-color: blue } To learn about all the selectors available in CSS2 (which are also supported by Qt Style Sheets), please refer to this document: http://www.w3.org/TR/REC-CSS2/selector.html. Basic style sheet customization In the previous example, you learned how to apply a style sheet to a widget with Qt Designer. Let's go crazy and push things further by creating a few other types of widgets and change their style properties to something bizarre for the sake of learning. This time, however, we will not apply the style to every single widget one by one, but we will learn to apply the style sheet to the main window and let it inherit down the hierarchy to all the other widgets so that the style sheet is easier to manage and maintain in long run. How to do it… First of all, let's remove the style sheet from the push button by selecting it and clicking the small arrow button besides the styleSheet property. This button will revert the property to the default value, which in this case is the empty style sheet. Then, add a few more widgets to the UI by dragging them one by one from the widget box to the form editor. I've added a line edit, combo box, horizontal slider, radio button, and a check box. For the sake of simplicity, delete the menu bar, main toolbar, and the status bar from your UI by selecting them from the object inspector, right click, and choose Remove. 
Now your UI should look something similar to this: Select the main window either from the form editor or the object inspector, then right click and choose Change Style Sheet to open up the Edit Style Sheet window. Enter the following:

    border: 2px solid gray;
    border-radius: 10px;
    padding: 0 8px;
    background: yellow;

Now what you will see is a completely bizarre-looking UI with everything covered in yellow with a thick border. This is because the preceding style sheet does not have any selector, which means the style will apply to the children widgets of the main window all the way down the hierarchy. To change that, let's try something different:

    QPushButton {
        border: 2px solid gray;
        border-radius: 10px;
        padding: 0 8px;
        background: yellow;
    }

This time, only the push button will get the style described in the preceding code, and all other widgets will return to the default styling. You can try to add a few more push buttons to your UI and they will all look the same: This happens because we specifically tell the selector to apply the style to all the widgets of the class called QPushButton. We can also apply the style to just one of the push buttons by mentioning its name in the style sheet, like so:

    QPushButton#pushButton_3 {
        border: 2px solid gray;
        border-radius: 10px;
        padding: 0 8px;
        background: yellow;
    }

Once you understand this method, we can add the following code to the style sheet:

    QPushButton {
        color: red;
        border: 0px;
        padding: 0 8px;
        background: white;
    }

    QPushButton#pushButton_2 {
        border: 1px solid red;
        border-radius: 10px;
    }

    QPushButton#pushButton_3 {
        border: 2px solid gray;
        border-radius: 10px;
        padding: 0 8px;
        background: yellow;
    }

What this does is basically change the style of all the push buttons, as well as some properties of a specific button named pushButton_2. We keep the style sheet of pushButton_3 as it is. Now the buttons will look like this: The first set of rules changes all widgets of the QPushButton type to white rectangular buttons with no border and red text.
Then the second set of rules changes only the border of a specific QPushButton widget by the name of pushButton_2. Notice that the background color and text color of pushButton_2 remain white and red respectively, because we didn't override them in the second set of rules; it falls back to the style described in the first set, since that applies to all QPushButton type widgets. Also notice that the text of the third button has changed to red, because we didn't describe the color property in the third set of rules. After that, create another set of styles using the universal selector, like so:

    * {
        background: qradialgradient(cx: 0.3, cy: -0.4, fx: 0.3, fy: -0.4,
                                    radius: 1.35, stop: 0 #fff, stop: 1 #888);
        color: rgb(255, 255, 255);
        border: 1px solid #ffffff;
    }

The universal selector affects all the widgets regardless of their type. Therefore, the preceding style sheet applies a nice gradient color to all the widgets' backgrounds, as well as setting their text to white and giving them a one-pixel solid outline, which is also white. Instead of writing the name of the color (that is, white), we can also use the rgb function (rgb(255, 255, 255)) or a hex code (#ffffff) to describe the color value. Just as before, the preceding style sheet will not affect the push buttons, because we have already given them their own styles, which override the general style described in the universal selector. Just remember that in Qt, the most specific style is ultimately used when more than one style has influence on a widget. This is how the UI will look now:

How it works

If you are ever involved in web development using HTML and CSS, Qt's style sheets work exactly the same way as CSS. A style sheet provides the definitions for describing the presentation of the widgets – what the colors are for each element in the widget group, how thick the border should be, and so on and so forth.
If you specify the name of the widget in the style sheet, it will change the style of only the particular push button widget with the name you provide. All the other widgets will not be affected and will remain in the default style. To change the name of a widget, select the widget either from the form editor or the object inspector and change the property called objectName in the property window. If you have used the ID selector previously to change the style of the widget, changing its object name will break the style sheet and lose the style. To fix this problem, simply change the object name in the style sheet as well.

Creating a login screen using style sheet

Next, we will learn how to put all the knowledge we learned in the previous example together and create a fake graphical login screen for an imaginary operating system. Style sheets are not the only thing you need to master in order to design a good UI. You will also need to learn how to arrange the widgets neatly using the layout system in Qt Designer.

How to do it…

The first thing we need to do is design the layout of the graphical login screen before we start doing anything. Planning is very important in order to produce good software. The following is a sample layout design I made to show you how I imagine the login screen will look. Just a simple line drawing like this is sufficient as long as it conveys the message clearly: Now that we know exactly how the login screen should look, let's go back to Qt Designer again. We will be placing the widgets in the top panel first, then the logo and the login form below it. Select the main window and change its width and height from 400 and 300 to 800 and 600 respectively, because we'll need a bigger space in which to place all the widgets in a moment. Click and drag a label under the Display Widgets category from the widget box to the form editor.
Change the objectName property of the label to currentDateTime and change its text property to the current date and time, just for display purposes, such as Monday, 25-10-2015 3:14 PM.

Click and drag a push button under the Buttons category to the form editor. Repeat this process one more time, because we have two buttons on the top panel. Rename the two buttons restartButton and shutdownButton respectively.

Next, select the main window and click the small icon button on the form toolbar that says Lay Out Vertically when you mouse over it. You will now see the widgets being automatically arranged on the main window, but it's not exactly what we want yet.

Click and drag a horizontal layout widget under the Layouts category to the main window.

Click and drag the two push buttons and the text label into the horizontal layout. You will now see the three widgets arranged in a horizontal row, but vertically they are located in the middle of the screen. The horizontal arrangement is almost correct, but the vertical position is totally off.

Click and drag a vertical spacer from the Spacers category and place it below the horizontal layout we just created (below the red rectangular outline). Now you will see all the widgets being pushed to the top by the spacer.

Now, place a horizontal spacer between the text label and the two buttons to keep them apart. This makes the text label always stick to the left and the buttons align to the right.

Set both the Horizontal Policy and Vertical Policy properties of the two buttons to Fixed and set the minimumSize property to 55x55. Then, set the text property of the buttons to empty, as we will be using icons instead of text.
We will learn how to place an icon on the button widgets in the following section. Now your UI should look similar to this:

Next, we will add the logo by using the following steps:

Add a horizontal layout between the top panel and the vertical spacer to serve as a container for the logo. After adding the horizontal layout, you will find that the layout is far too thin in height to add any widget to it. This is because the layout is empty and is being squashed to zero height by the vertical spacer below it. To solve this problem, we can set its vertical margin (either layoutTopMargin or layoutBottomMargin) to be temporarily bigger until a widget is added to the layout.

Next, add a label to the horizontal layout that you just created and rename it logo. We will learn more about how to insert an image into the label to use it as a logo in the next section. For now, just empty out the text property and set both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 150x150.

Set the vertical margin of the layout back to zero if you haven't done so. The logo now looks invisible, so we will place a temporary style sheet on it to make it visible until we add an image in the next section. The style sheet is really simple:

border: 1px solid;

Now your UI should look something similar to this:

Now let's create the login form by using the following steps:

Add a horizontal layout between the logo's layout and the vertical spacer. Just as we did previously, set the layoutTopMargin property to a bigger number (that is, 100) so that you can add a widget to it more easily.

After that, add a vertical layout inside the horizontal layout you just created. This layout will be used as a container for the login form. Set its layoutTopMargin to a number lower than that of the horizontal layout (that is, 20) so that we can place widgets in it.
Next, right-click the vertical layout you just created and choose Morph into -> QWidget. The vertical layout is now converted into an empty widget. This step is essential because we will be adjusting the width and height of the container for the login form. A layout widget does not have any properties for width and height, only margins, because a layout simply expands into the empty space surrounding it, which makes sense considering that it has no size properties. After you have converted the layout to a QWidget object, it automatically inherits all the properties from the widget class, so we are now able to adjust its size to suit our needs.

Rename the QWidget object that we just converted from the layout to loginForm and change both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 350x200.

Since we have already placed the loginForm widget inside the horizontal layout, we can now set its layoutTopMargin property back to zero.

Add the same style sheet as the logo to the loginForm widget to make it visible temporarily, except this time we need to add an ID selector in front so that it applies the style only to loginForm and not to its children widgets:

#loginForm {
    border: 1px solid;
}

Now your UI should look something like this:

We are not done with the login form yet. Now that we have created the container for the login form, it's time to put more widgets into the form:

Place two horizontal layouts into the login form container. We need two layouts: one for the username field and another for the password field.

Add a label and a line edit to each of the layouts you just added. Change the text property of the upper label to Username: and the lower one to Password:. Then, rename the two line edits username and password respectively.

Add a push button below the password layout and change its text property to Login. After that, rename it loginButton.
You can add a vertical spacer between the password layout and the login button to separate them slightly. After the vertical spacer has been placed, change its sizeType property to Fixed and change the Height to 5.

Now, select the loginForm container and set all its margins to 35. This makes the login form look better by adding some space on all its sides.

You can also set the Height property of the username, password, and loginButton widgets to 25 so that they don't look so cramped.

Now your UI should look something like this:

We're not done yet! As you can see, the login form and the logo are both sticking to the top of the main window due to the vertical spacer below them. The logo and the login form should be placed at the center of the main window instead of the top. To fix this problem, use the following steps:

Add another vertical spacer between the top panel and the logo's layout. This counters the spacer at the bottom and balances out the alignment. If you think the logo is sticking too close to the login form, you can also add a vertical spacer between the logo's layout and the login form's layout. Set its sizeType property to Fixed and the Height property to 10.

Right-click the top panel's layout and choose Morph into -> QWidget. Then, rename it topPanel. The layout has to be converted into a QWidget because we cannot apply style sheets to a layout, as it doesn't have any properties other than margins.

Currently, you can see there is a little bit of margin around the edges of the main window – we don't want that. To remove the margins, select the centralWidget object from the object inspector window, which is right under the MainWindow panel, and set all the margin values to zero.

At this point, you can run the project by clicking the Run button (with the green arrow icon) to see what your program looks like now.
If everything went well, you should see something like this:

Now that we've finished the layout, it's time to add some fanciness to the UI using style sheets! Since all the important widgets have been given an object name, it's easy to apply the style sheets to them from the main window: we only write the style sheets on the main window and let them be inherited down the hierarchy tree.

Right-click on MainWindow in the object inspector window and choose Change Stylesheet. Add the following code to the style sheet:

#centralWidget {
    background: rgba(32, 80, 96, 100);
}

Now you will see the background of the main window change color. We will learn how to use an image for the background in the next section, so this color is just temporary. In Qt, if you want to apply styles to the main window itself, you must apply them to its central widget instead, because the window is just a container.

Then, we will add a nice gradient color to the top panel:

#topPanel {
    background-color: qlineargradient(spread:reflect, x1:0.5, y1:0, x2:0, y2:0, stop:0 rgba(91, 204, 233, 100), stop:1 rgba(32, 80, 96, 100));
}

After that, we will apply a black, semi-transparent color to the login form. We will also make the corners of the login form container slightly rounded by setting the border-radius property:

#loginForm {
    background: rgba(0, 0, 0, 80);
    border-radius: 8px;
}

After we're done applying styles to the specific widgets, we will apply styles to the general types of widgets instead:

QLabel {
    color: white;
}

QLineEdit {
    border-radius: 3px;
}

The preceding style sheets change all the labels' text to a white color, which includes the text on the widgets as well because, internally, Qt uses the same type of label on widgets that have text on them. We also made the corners of the line edit widgets slightly rounded.
Next, we will apply style sheets to all the push buttons on our UI:

QPushButton {
    color: white;
    background-color: #27a9e3;
    border-width: 0px;
    border-radius: 3px;
}

The preceding style sheet changes the text of all the buttons to a white color, sets their background color to blue, and makes their corners slightly rounded as well. To push things even further, we will change the color of the push buttons when we mouse over them, using the keyword hover:

QPushButton:hover {
    background-color: #66c011;
}

The preceding style sheet will change the background color of the push buttons to green on mouse-over. We will talk more about this in the following section. You can further adjust the sizes and margins of the widgets to make them look even better. Remember to remove the border line of the login form by removing the style sheet we applied directly to it earlier.

Now your login screen should look something like this:

How it works

This example focuses more on the layout system of Qt. The Qt layout system provides a simple and powerful way of automatically arranging child widgets within a widget to ensure that they make good use of the available space. The spacer items used in the preceding example help to push the widgets contained in a layout outward to create spacing along the width of the spacer item. To locate a widget in the middle of a layout, put two spacer items into the layout, one on the left side of the widget and another on the right side. The widget will then be pushed to the middle of the layout by the two spacers.

Summary

In this article we saw how Qt allows us to easily design a program's user interface through a method most people are familiar with. We also covered the toolkit, Qt Designer, which enables us to design a user interface without writing a single line of code. Finally, we saw how to create a login screen.
For more information on Qt5 and C++ you can check out these other books by Packt:

Qt 5 Blueprints: https://www.packtpub.com/application-development/qt-5-blueprints
Boost C++ Application Development Cookbook: https://www.packtpub.com/application-development/boost-c-application-development-cookbook
Learning Boost C++ Libraries: https://www.packtpub.com/application-development/learning-boost-c-libraries
Packt Editorial Staff
05 Apr 2019
11 min read

Yuri Shkuro on Observability challenges in microservices and cloud-native applications

In the last decade, we saw a significant shift in how modern, internet-scale applications are being built. Cloud computing (infrastructure as a service) and containerization technologies (popularized by Docker) enabled a new breed of distributed system designs commonly referred to as microservices (and their next incarnation, FaaS). Successful companies like Twitter and Netflix have been able to leverage them to build highly scalable, efficient, and reliable systems, and to deliver more features faster to their customers. In this article we explain the concept of observability in microservices, its challenges, and the traditional monitoring tools used with microservices. This article is an extract taken from the book Mastering Distributed Tracing, written by Yuri Shkuro. This book will equip you to operate and enhance your own tracing infrastructure. Through practical exercises and code examples, you will learn how end-to-end tracing can be used as a powerful application performance management and comprehension tool.

While there is no official definition of microservices, a certain consensus has evolved over time in the industry. Martin Fowler, the author of many books on software design, argues that microservices architectures exhibit the following common characteristics:

- Componentization via (micro)services
- Smart endpoints and dumb pipes
- Organized around business capabilities
- Decentralized governance
- Decentralized data management
- Infrastructure automation
- Design for failure
- Evolutionary design

Because of the large number of microservices involved in building modern applications, rapid provisioning, rapid deployment via decentralized continuous delivery, strict DevOps practices, and holistic service monitoring are necessary to effectively develop, maintain, and operate such applications.
The infrastructure requirements imposed by microservices architectures spawned a whole new area of development of infrastructure platforms and tools for managing these complex cloud-native applications. In 2015, the Cloud Native Computing Foundation (CNCF) was created as a vendor-neutral home for many emerging open source projects in this area, such as Kubernetes, Prometheus, Linkerd, and so on, with a mission to "make cloud-native computing ubiquitous."

What is observability?

In control theory, a system is said to be observable if its internal states, and accordingly its behavior, can be determined by looking only at its inputs and outputs. At the 2018 Observability Practitioners Summit, Bryan Cantrill, the CTO of Joyent and one of the creators of the tool dtrace, argued that this definition is not practical to apply to software systems because they are so complex that we can never know their complete internal state, and therefore the control theory's binary measure of observability is always zero (I highly recommend watching his talk on YouTube: https://youtu.be/U4E0QxzswQc). Instead, a more useful definition of observability for a software system is its "capability to allow a human to ask and answer questions". The more questions we can ask and answer about the system, the more observable it is.

Figure 1: The Twitter debate

There are also many debates and Twitter zingers about the difference between monitoring and observability. Traditionally, the term monitoring was used to describe metrics collection and alerting. Sometimes it is used more generally to include other tools, such as "using distributed tracing to monitor distributed transactions."
The Oxford dictionary definition of the verb "monitor" is "to observe and check the progress or quality of (something) over a period of time; keep under systematic review." However, the term is better scoped to describing the process of observing certain a priori defined performance indicators of our software system, such as those measuring the impact on the end-user experience, like latency or error counts, and using their values to alert us when these signals indicate abnormal behavior of the system. Metrics, logs, and traces can all be used as a means to extract those signals from the application. We can then reserve the term "observability" for situations when we have a human operator proactively asking questions that were not predefined. As Bryan Cantrill put it in his talk, this process is debugging, and we need to "use our brains when debugging." Monitoring does not require a human operator; it can and should be fully automated.

"If you want to talk about (metrics, logs, and traces) as pillars of observability–great. The human is the foundation of observability!" -- Bryan Cantrill

In the end, the so-called "three pillars of observability" (metrics, logs, and traces) are just tools, or more precisely, different ways of extracting sensor data from the applications. Even with metrics, modern time series solutions like Prometheus, InfluxDB, or Uber's M3 are capable of capturing time series with many labels, such as which host emitted a particular value of a counter. Not all labels may be useful for monitoring, since a single misbehaving service instance in a cluster of thousands does not warrant an alert that wakes up an engineer. But when we are investigating an outage and trying to narrow down the scope of the problem, the labels can be very useful as observability signals.
The observability challenge of microservices

By adopting microservices architectures, organizations expect to reap many benefits, from better scalability of components to higher developer productivity. There are many books, articles, and blog posts written on this topic, so I will not go into that. Despite the benefits and eager adoption by companies large and small, microservices come with their own challenges and complexity. Companies like Twitter and Netflix were successful in adopting microservices because they found efficient ways of managing that complexity. Vijay Gill, Senior VP of Engineering at Databricks, goes as far as saying that the only good reason to adopt microservices is to be able to scale your engineering organization and to "ship the org chart". So, what are the challenges of this design? There are quite a few:

- In order to run these microservices in production, we need an advanced orchestration platform that can schedule resources, deploy containers, autoscale, and so on. Operating an architecture of this scale manually is simply not feasible, which is why projects like Kubernetes became so popular.
- In order to communicate, microservices need to know how to find each other on the network, how to route around problematic areas, how to perform load balancing, how to apply rate limiting, and so on. These functions are delegated to advanced RPC frameworks or to external components like network proxies and service meshes.
- Splitting a monolith into many microservices may actually decrease reliability. Suppose we have 20 components in the application and all of them are required to produce a response to a single request. When we run them in a monolith, our failure modes are restricted to bugs and potentially a crash of the whole server running the monolith.
But if we run the same components as microservices, on different hosts and separated by a network, we introduce many more potential failure points, from network hiccups to resource constraints due to noisy neighbors. The latency may also increase: assume each microservice has 1 ms average latency, but the 99th percentile is 1 s. A transaction touching just one of these services has a 1% chance of taking ≥ 1 s, while a transaction touching 100 of these services has a 1 − (1 − 0.01)^100 ≈ 63% chance of taking ≥ 1 s.

Finally, the observability of the system is dramatically reduced if we try to use traditional monitoring tools. When we see that some requests to our system are failing or slow, we want our observability tools to tell us the story of what happened to that request.

Traditional monitoring tools

Traditional monitoring tools were designed for monolith systems, observing the health and behavior of a single application instance. They may be able to tell us a story about that single instance, but they know almost nothing about the distributed transaction that passed through it. These tools "lack the context" of the request.

Metrics

It goes like this: "Once upon a time…something bad happened. The end." How do you like this story? This is what the chart in Figure 2 tells us. It's not completely useless; we do see a spike, and we could define an alert to fire when this happens. But can we explain or troubleshoot the problem?

Figure 2: A graph of two time series representing (hypothetically) the volume of traffic to a service

Metrics, or stats, are numerical measures recorded by the application, such as counters, gauges, or timers. Metrics are very cheap to collect, since numeric values can be easily aggregated to reduce the overhead of transmitting that data to the monitoring system. They are also fairly accurate, which is why they are very useful for the actual monitoring (as the dictionary defines it) and alerting.
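Both the tail-latency arithmetic above and the limits of aggregated metrics can be checked with a quick sketch (the 1 ms average and 1 s p99 are the hypothetical numbers from the text):

```python
import statistics

# Hypothetical latency sample: 99% fast requests (~1 ms), 1% slow ones (1 s).
latencies_ms = [1.0] * 990 + [1000.0] * 10

mean = statistics.mean(latencies_ms)
p99 = sorted(latencies_ms)[int(len(latencies_ms) * 0.99)]
print(f"mean = {mean:.1f} ms, p99 = {p99:.0f} ms")  # mean ~11 ms looks healthy; p99 is 1000 ms

# Chance that a transaction touching N such services hits at least one slow call:
def p_slow(n, p_single=0.01):
    return 1 - (1 - p_single) ** n

print(f"{p_slow(1):.0%}, {p_slow(100):.0%}")  # 1% for one service, 63% for 100
```

The aggregate (the mean) hides the outliers entirely, while the compounding probability shows how quickly tail latency becomes the common case across 100 services.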
Yet the same capacity for aggregation is what makes metrics ill-suited for explaining the pathological behavior of the application. By aggregating data, we are throwing away all the context we had about the individual transactions.

Logs

Logging is an even more basic observability tool than metrics. Every programmer learns their first programming language by writing a program that prints (that is, logs) "Hello, World!" Similar to metrics, logs struggle with microservices because each log stream only tells us about a single instance of a service. However, evolving programming paradigms create other problems for logs as a debugging tool. Ben Sigelman, who built Google's distributed tracing system Dapper, explained it in his KubeCon 2016 keynote talk as four types of concurrency (Figure 3):

Figure 3: Evolution of concurrency

Years ago, applications like early versions of Apache HTTP Server handled concurrency by forking child processes and having each process handle a single request at a time. Logs collected from that single process could do a good job of describing what happened inside the application.

Then came multi-threaded applications and basic concurrency. A single request would typically be executed by a single thread sequentially, so as long as we included the thread name in the logs and filtered by that name, we could still get a reasonably accurate picture of the request execution.

Then came asynchronous concurrency, with asynchronous and actor-based programming, executor pools, futures, promises, and event-loop-based frameworks. The execution of a single request may start on one thread, then continue on another, then finish on a third. In the case of event-loop systems like Node.js, all requests are processed on a single thread, but when the execution tries to make an I/O call, it is put in a wait state, and when the I/O is done, the execution resumes after waiting its turn in the queue.
Both of these asynchronous concurrency models result in each thread switching between multiple different requests that are all in flight. Observing the behavior of such a system from the logs is very difficult, unless we annotate all logs with some kind of unique id representing the request rather than the thread, a technique that actually gets us close to how distributed tracing works.

Finally, microservices introduced what we can call "distributed concurrency." Not only can the execution of a single request jump between threads, but it can also jump between processes, when one microservice makes a network call to another. Trying to troubleshoot request execution from such logs is like debugging without a stack trace: we get small pieces, but no big picture. In order to reconstruct the flight of the request from the many log streams, we need a powerful log aggregation technology and a distributed context propagation capability to tag all those logs in different processes with a unique request id that we can use to stitch those requests together. We might as well be using a real distributed tracing infrastructure at this point! Yet even after tagging the logs with a unique request id, we still cannot assemble them into an accurate sequence, because the timestamps from different servers are generally not comparable due to clock skews.

In this article we looked at the concept of observability and some challenges one has to face in microservices. We further discussed traditional monitoring tools for microservices. To learn how to apply distributed tracing to microservices-based architectures, see Mastering Distributed Tracing by Yuri Shkuro.
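The request-id tagging technique described above — annotating every log line with a unique id that follows the logical request across interleaved executions, rather than the thread name — can be sketched in a few lines. This is a toy illustration using Python's contextvars (not taken from the book), which survive async task switches the way thread-locals do not:

```python
import asyncio
import contextvars
import uuid

# Context variable carrying the current request id across await points.
request_id = contextvars.ContextVar("request_id", default="-")
records = []

def log(message):
    # Every log line is annotated with the request id, not the thread/task name.
    records.append(f"[req={request_id.get()}] {message}")

async def handle_request(name):
    request_id.set(uuid.uuid4().hex[:8])  # unique id per logical request
    log(f"start {name}")
    await asyncio.sleep(0)                # interleave with other in-flight requests
    log(f"finish {name}")

async def main():
    # Two concurrent requests share the event loop, yet each keeps its own id.
    await asyncio.gather(handle_request("A"), handle_request("B"))

asyncio.run(main())
for line in records:
    print(line)
```

Filtering the interleaved records by the request id reconstructs each request's story, which is exactly the idea distributed tracing generalizes across process boundaries.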

Packt
21 Feb 2018
9 min read

API Gateway and its Need

In this article by Umesh R Sharma, author of the book Practical Microservices, we will cover the API Gateway and the need for it, with simple and short examples.

Dynamic websites show a lot on a single page, and there is a lot of information that needs to be shown on the page. A common example is the successful-order summary page, which shows both the cart details and the customer address. For this, the frontend has to fire separate queries to the customer detail service and the order detail service. This is a very simple example of a single page needing multiple services. Since a single microservice deals with only one concern, showing a lot of information on a page results in many API calls for that same page. So, a website or mobile page can be very chatty in terms of displaying data.

Another problem is that, sometimes, microservices talk over protocols other than HTTP, such as Thrift calls and so on. Outside consumers can't directly deal with a microservice over such a protocol. Also, as a mobile screen is smaller than a web page, the data required by a mobile API call differs from that of a desktop call. A developer may want to return less data to the mobile API, or maintain different versions of the API calls for mobile and desktop. So, you could face a problem such as this: each client is calling different web services and keeping track of them, and developers have to maintain backward compatibility because API URLs are embedded in clients such as mobile apps.

Why do we need the API Gateway?

All these preceding problems can be addressed with an API Gateway in place. The API Gateway acts as a proxy between the API consumer and the API servers. To address the first problem in that scenario, there will be only one call, such as /successOrderSummary, to the API Gateway. The API Gateway, on behalf of the consumer, calls the order and user detail services, then combines the results and serves them to the client.
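The "one client call fans out to many internal calls" behavior described above can be sketched like this. It is a toy illustration: the backend services are stubbed as in-process functions, and the field names are invented for the example:

```python
# Hypothetical backend services, stubbed as plain functions for illustration.
def order_detail_service(order_id):
    return {"order_id": order_id, "items": ["book", "pen"], "total": 25.0}

def customer_detail_service(customer_id):
    return {"customer_id": customer_id, "address": "221B Baker Street"}

def success_order_summary(order_id, customer_id):
    """The gateway endpoint: one client call, two internal calls, one merged reply."""
    order = order_detail_service(order_id)
    customer = customer_detail_service(customer_id)
    return {**order, **customer}  # combine both results into a single response

summary = success_order_summary("o-42", "c-7")
print(summary)
```

The client sees a single /successOrderSummary-style response; the fan-out to the order and customer services happens entirely behind the gateway.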
So basically, it acts as a facade: one API call that may internally call many APIs. The API Gateway serves many purposes, some of which are as follows.

Authentication

API Gateways can take on the overhead of authenticating an API call from outside. After that, all the internal calls can skip the security check. If the request comes from inside the VPC, the security check can be removed, decreasing network latency a bit and letting developers focus more on business logic than on security concerns.

Different protocol

Sometimes, microservices can internally use different protocols to talk to each other; these can be Thrift calls, TCP, UDP, RMI, SOAP, and so on. For clients, there can be a single REST-based HTTP call. Clients hit the API Gateway over HTTP, and the API Gateway can make the internal calls in the required protocols and combine the results from all the web services in the end. It can respond to the client in the required protocol; in most cases, that protocol will be HTTP.

Load-balancing

The API Gateway can work as a load balancer to handle requests in the most efficient manner. It can keep track of the request load it has sent to different nodes of a particular service. A gateway should be intelligent enough to load balance between the different nodes of a particular service. With NGINX Plus coming into the picture, NGINX can be a good candidate for the API Gateway; it has many features to address the problems usually handled by an API Gateway.

Request dispatching (including service discovery)

One main feature of the gateway is to reduce the communication between clients and microservices. It initiates parallel calls to microservices if that is required by the client. From the client side, there is only one hit. The gateway hits all the required services and waits for the results from all of them. After obtaining the responses from all the services, it combines the result and sends it back to the client.
Reactive microservice designs can help you achieve this. Working with service discovery can provide many extra features. It can indicate which node is the master of a service and which is the slave. The same goes for a DB: write requests can go to the master, and read requests can go to the slaves. This is the basic rule, but users can apply many more rules on the basis of the meta information provided to the API Gateway. The gateway can record the basic response time of each node of a service instance; higher-priority API calls can then be routed to the fastest-responding node. Again, such rules can be defined depending on which API Gateway you are using and how it is implemented.

Response transformation

Being the first and single point of entry for all API calls, the API Gateway knows which type of client is calling: a mobile client, a web client, or another external consumer. It can therefore make the internal calls and return the data shaped differently for different clients, as per their needs and configuration.

Circuit breaker

To handle partial failure, the API Gateway can use a technique called the circuit breaker pattern. A failure in one service can cause cascading failures through all the service calls in the stack. The API Gateway can keep an eye on a threshold for any microservice. If any service passes that threshold, it marks that API as an open circuit and decides not to make the call for a configured time. Hystrix (by Netflix) serves this purpose efficiently; its default is to open the circuit when 20 requests fail within 5 seconds. Developers can also define a fallback for the open circuit; this fallback can be a dummy service. Once the API starts giving results as expected again, the gateway marks the circuit as closed.

Pros and cons of the API Gateway

Using the API Gateway has its own pros and cons. The previous section already described the advantages of using the API Gateway; I will still list them as the pros of the API Gateway.
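A minimal circuit breaker can be sketched in a few lines. This is a simplified illustration of the pattern only — real implementations such as Hystrix add time windows, half-open probing, and richer fallbacks — and the threshold here counts consecutive failures rather than failures per time window:

```python
class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures; then short-circuit."""

    def __init__(self, threshold=3, fallback=lambda: "fallback"):
        self.threshold = threshold
        self.fallback = fallback
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, fn):
        if self.open:
            return self.fallback()  # don't even try the failing service
        try:
            result = fn()
            self.failures = 0       # a success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            return self.fallback()

breaker = CircuitBreaker(threshold=3)

def flaky():
    raise RuntimeError("service down")

for _ in range(3):
    breaker.call(flaky)  # three consecutive failures trip the breaker
print(breaker.open)      # True: further calls short-circuit to the fallback
```

While the circuit is open, callers get the fallback response immediately instead of piling more load onto a struggling downstream service.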
Pros

- Microservices can focus on business logic
- Clients can get all the data in a single hit
- Authentication, logging, and monitoring can be handled by the API Gateway
- It gives the flexibility to use completely independent protocols between clients and microservices
- It can give tailor-made results, as per the client's needs
- It can handle partial failure

In addition to the preceding pros, there are also some trade-offs in using this pattern.

Cons

- It can degrade performance, since a lot happens at the API Gateway
- A discovery service has to be implemented alongside it
- Sometimes, it becomes a single point of failure
- Managing routing is an overhead of the pattern
- It adds an additional network hop to the call
- Overall, it increases the complexity of the system
- Too much logic implemented in the gateway will lead to another dependency problem

So, before using the API Gateway, both aspects should be considered. The decision to include the API Gateway in the system increases cost as well. Before putting effort, cost, and management into this pattern, it is recommended to analyze how much you can gain from it.

Example of API Gateway

In this example, we will show only a sample product page that fetches data from the product detail service to give information about the product. This example could be extended in many directions, but our focus is only to show how the API Gateway pattern works, so we will keep it simple and small. This example will use Zuul from Netflix as the API Gateway. Spring also has an implementation of Zuul, so we are creating this example with Spring Boot. For a sample API Gateway implementation, we will use http://start.spring.io/ to generate an initial template of our code. Spring Initializr is the project from Spring to help beginners generate basic Spring Boot code. A user has to set a minimum configuration and can hit the Generate Project button.
If any user wants to set more specific details regarding the project, then they can see all the configuration settings by clicking on the Switch to the full version button, as shown in the following screenshot:

Let's create a controller in the same package as the main application class and put the following code in the file:

```java
@SpringBootApplication
@RestController
public class ProductDetailController {

    @Resource
    ProductDetailService pdService;

    @RequestMapping(value = "/product/{id}")
    public ProductDetail getProductById(@PathVariable("id") String id) {
        return pdService.getProductDetailById(id);
    }
}
```

In the preceding code, there is an assumption of the pdService bean that will interact with a Spring Data repository for product detail and get the result for the required product ID. Another assumption is that this service is running on port 10000. Just to make sure everything is running, a hit on a URL such as http://localhost:10000/product/1 should give some JSON as a response.

For the API Gateway, we will create another Spring Boot application with Zuul support. Zuul can be activated by just adding a simple @EnableZuulProxy annotation. The following is a simple code to start the simple Zuul proxy:

```java
@SpringBootApplication
@EnableZuulProxy
public class ApiGatewayExampleInSpring {

    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayExampleInSpring.class, args);
    }
}
```

Everything else is managed in configuration. In the application.properties file of the API Gateway, the content will be something as follows:

```
zuul.routes.product.path=/product/**
zuul.routes.product.url=http://localhost:10000
ribbon.eureka.enabled=false
server.port=8080
```

With this configuration, we are defining a rule such as this: for any request for a URL such as /product/xxx, pass this request to http://localhost:10000. To the outer world, the URL will look like http://localhost:8080/product/1, which will internally be transferred to the 10000 port.
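The ProductDetail and ProductDetailService types were only assumed in the controller above. A minimal in-memory sketch of these shapes (hypothetical, standing in for the real Spring Data-backed implementation) might look like this:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical domain class returned by the product detail endpoint.
class ProductDetail {
    private final String id;
    private final String name;
    private final double price;

    ProductDetail(String id, String name, double price) {
        this.id = id;
        this.name = name;
        this.price = price;
    }

    String getId() { return id; }
    String getName() { return name; }
    double getPrice() { return price; }
}

// In-memory stand-in for the pdService bean; a real implementation would
// delegate to a Spring Data repository instead of a Map.
class ProductDetailService {
    private final Map<String, ProductDetail> store = new HashMap<>();

    void save(ProductDetail detail) {
        store.put(detail.getId(), detail);
    }

    ProductDetail getProductDetailById(String id) {
        return store.get(id);
    }
}
```

In the real microservice, the service class would be annotated with @Service so that Spring can inject it into the controller via @Resource.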
If we define a spring.application.name property as product in the product detail microservice, then we don't need to define the URL path property here (zuul.routes.product.path=/product/**), as Zuul, by default, will map the route to /product.

The example taken here for an API Gateway is not very intelligent, but Zuul is a very capable API Gateway. Depending on the routes, filters, and caching defined in Zuul's properties, one can build a very powerful API Gateway.

Summary

In this article, you learned about the API Gateway, the need for it, and its pros and cons, with a code example.

Resources for Article:

Further resources on this subject:

- What are Microservices? [article]
- Microservices and Service Oriented Architecture [article]
- Breaking into Microservices Architecture [article]