How-To Tutorials - Programming

1083 Articles

TypeScript 3.5 releases with ‘omit’ helper, improved speed, excess property checks and more

Vincy Davis
30 May 2019
5 min read
Yesterday, Daniel Rosenwasser, Program Manager at TypeScript, announced the release of TypeScript 3.5. This release brings new additions to the compiler and language, improvements to editor tooling, and some breaking changes as well. Key features include speed improvements, the 'Omit' helper type, improved excess property checks, and more. The previous version, TypeScript 3.4, was released two months earlier.

Compiler and Language

Speed improvements
Since the previous release, the TypeScript team has been focusing heavily on optimizing certain code paths and stripping down certain functionality. As a result, TypeScript 3.5 is faster than TypeScript 3.3 for many incremental checks. Compile times have also fallen compared to 3.4, and code completion and other editor operations should feel much 'snappier'. This release also includes several optimizations to how the compiler caches file lookups, such as which files were looked up and where they were found. In TypeScript 3.5, the amount of time spent rebuilding can be reduced by as much as 68% compared to TypeScript 3.4.

The 'Omit' helper type
Users often want to create an object type that omits certain properties of another type. TypeScript 3.5 defines its own version of 'Omit' in lib.d.ts, so it can be used everywhere without being redeclared in every project. The compiler itself uses this 'Omit' type to express types created through object rest and destructuring declarations on generics.

Improved excess property checks in union types
TypeScript performs excess property checking on object literals. In earlier versions, certain excess properties were allowed in an object literal even when they didn't match any member of the target union type. In this new version, the type-checker verifies that all the provided properties belong to some union member and have the appropriate type.

The --allowUmdGlobalAccess flag
In TypeScript 3.5, you can now reference UMD global declarations like export as namespace foo from anywhere, even from within modules, by using the new --allowUmdGlobalAccess flag.

Smarter union type checking
When checking against union types, TypeScript usually compares each constituent type in isolation: assigning a source to a target typically involves checking whether the type of the source is assignable to the target. In TypeScript 3.5, when assigning to a target union whose members have discriminant properties, the language goes further and decomposes the source type into a union of every possible inhabitant type. This was not possible in previous versions.

Higher order type inference from generic constructors
TypeScript 3.4's inference allowed composed functions to remain generic. In TypeScript 3.5, this behavior is generalized to work on constructor functions as well. This means that functions which operate on class components in certain UI libraries like React can operate more correctly on generic class components.

New Editing Tools

Smart Select
This provides an API for editors to expand text selections farther outward in a syntactical manner. The feature is cross-platform and available to any editor which can appropriately query TypeScript's language server.

Extract to type alias
TypeScript 3.5 now supports a useful new refactoring to extract types to local type aliases. However, for users who prefer interfaces over type aliases, an open issue still tracks extracting object types to interfaces as well.
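As a rough illustration of the built-in helper (the Person type and its properties are invented for this example and are not from the announcement), the Omit type can now replace the hand-rolled alias that many projects previously defined themselves:

interface Person {
  name: string;
  age: number;
  location: string;
}

// Previously this alias had to be defined manually in each project;
// in TypeScript 3.5, Omit ships in lib.d.ts.
type RemainingPerson = Omit<Person, "location">;

// Equivalent to { name: string; age: number }
const p: RemainingPerson = { name: "Ada", age: 36 };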
Breaking changes

Generic type parameters are implicitly constrained to unknown
In TypeScript 3.5, generic type parameters without an explicit constraint are now implicitly constrained to unknown, whereas previously the implicit constraint of type parameters was the empty object type {}.

{ [k: string]: unknown } is no longer a wildcard assignment target
TypeScript 3.5 has removed the specialized assignability rule that permitted assignment to { [k: string]: unknown }. This change follows from the switch from {} to unknown when generic inference has no candidates. Depending on the intended behavior of { [s: string]: unknown }, several alternatives are available: { [s: string]: any }, { [s: string]: {} }, object, unknown, or any.

Improved excess property checks in union types
To adapt to the stricter checks, you can either add a type assertion onto the object (e.g. { myProp: SomeType } as ExpectedType), or add an index signature to the expected type to signal that unspecified properties are expected (e.g. interface ExpectedType { myProp: SomeType; [prop: string]: unknown }).

Fixes to unsound writes to indexed access types
TypeScript allows you to represent the operation of accessing a property of an object via the name of that property. In TypeScript 3.5, writes that were previously unsound will correctly issue an error. Most instances of this error represent potential bugs in the relevant code.

Object.keys rejects primitives in ES5
In ECMAScript 5 environments, Object.keys throws an exception if passed any non-object argument. In TypeScript 3.5, if target (or equivalently lib) is ES5, calls to Object.keys must pass a valid object. This change interacts with the change in generic inference from {} to unknown.

The aim of this version of TypeScript is to make the coding experience faster and happier. In the announcement, Daniel has also linked the 3.6 iteration plan document and the feature roadmap page, to give users an idea of what's coming in the next version of TypeScript. Users are quite content with the new additions and breaking changes in TypeScript 3.5. https://twitter.com/DavidPapp/status/1130939572563697665 https://twitter.com/sebastienlorber/status/1133639683332804608 A user on Reddit comments, “Those are some seriously impressive improvements. I know it's minor, but having Omit built in is just awesome. I'm tired of defining it myself in every project.” To read more details of TypeScript 3.5, head over to the official announcement.

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft] Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed All Docker versions are now vulnerable to a symlink race attack
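Returning to the first breaking change listed above, here is a brief hedged sketch (the function names are invented for illustration) of code that relied on the old implicit {} constraint, for example to call Object members such as toString(), and that now needs an explicit constraint under the implicit unknown constraint:

// Before TypeScript 3.5, T was implicitly constrained to {},
// so calling Object members such as toString() compiled.
function stringify<T>(value: T): string {
  // return value.toString(); // error in 3.5: 'value' is of type 'unknown'
  return String(value);       // works regardless of the constraint
}

// Fix: state the constraint explicitly if you rely on object members.
function stringifyObj<T extends {}>(value: T): string {
  return value.toString();
}

console.log(stringify(42), stringifyObj({ id: 1 }));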


Thread synchronization and communication

Packt
19 Jun 2017
20 min read
In this article by Maya Posch, the author of the book Mastering C++ Multithreading, we will work through and come to understand a basic multithreaded C++ application. While threads are generally used to work on a task more or less independently from other threads, there are many occasions where one would want to pass data between threads, or even control other threads, such as from a central task scheduler thread. This article looks at how such tasks are accomplished. Topics covered in this article include: use of mutexes, locks and similar synchronization structures; the use of condition variables and signals to control threads; and safely passing and sharing data between threads. (For more resources related to this topic, see here.)

Safety first
The central problem with concurrency is that of ensuring safe access to shared resources, including when communicating between threads. There is also the issue of threads being able to communicate and synchronize themselves. What makes multithreaded programming such a challenge is keeping track of each interaction between threads, and ensuring that each and every form of access is secured while not falling into the traps of deadlocks and data races. In this article we will look at a fairly complex example involving a task scheduler. This is a form of high-concurrency, high-throughput situation where many different requirements come together, with many potential traps, as we will see in a moment.

The scheduler
A good example of multithreading with a significant amount of synchronization and communication between threads is the scheduling of tasks. Here, the goal is to accept incoming tasks and assign them to worker threads as quickly as possible. A number of different approaches are possible. Often one has worker threads running in an active loop, constantly polling a central queue for new tasks. Disadvantages of this approach include the wasting of processor cycles on said polling and the congestion which forms at the synchronization mechanism used, generally a mutex. Furthermore, this active polling approach scales very poorly when the number of worker threads increases. Ideally, each worker thread would idly wait until it is needed again. To accomplish this, we have to approach the problem from the other side: not from the perspective of the worker threads, but from that of the queue. Much like the scheduler of an operating system, it is the scheduler which is aware of both the tasks which require processing and the available worker threads. In this approach, a central scheduler instance would accept new tasks and actively assign them to worker threads. Said scheduler instance may also manage these worker threads, such as their number and priority, depending on the number of incoming tasks and the type of task or other properties.

High-level view
At its core, our scheduler or dispatcher is quite simple, functioning like a queue with all of the scheduling logic built into it. As one can see from the high-level view, there really isn't much to it. As we'll see in a moment, the actual implementation does however have a number of complications.
Implementation
As is usual, we start off with the main function, contained in main.cpp: #include "dispatcher.h" #include "request.h" #include <iostream> #include <string> #include <csignal> #include <thread> #include <chrono> #include <mutex> using namespace std; sig_atomic_t signal_caught = 0; mutex logMutex; The custom headers we include are those for our dispatcher implementation, as well as the Request class we'll be using; we also include <mutex>, which the global logging mutex requires. Globally we define an atomic variable to be used with the signal handler, as well as a mutex which will synchronize the output (on the standard output) from our logging method. void sigint_handler(int sig) { signal_caught = 1; } Our signal handler function (for SIGINT signals) simply sets the global atomic variable we defined earlier. void logFnc(string text) { logMutex.lock(); cout << text << "\n"; logMutex.unlock(); } In our logging function we use the global mutex to ensure writing to the standard output is synchronized. int main() { signal(SIGINT, &sigint_handler); Dispatcher::init(10); In the main function we install the signal handler for SIGINT to allow us to interrupt the execution of the application. We also call the static init() function on the Dispatcher class to initialize it. cout << "Initialised.\n"; int cycles = 0; Request* rq = 0; while (!signal_caught && cycles < 50) { rq = new Request(); rq->setValue(cycles); rq->setOutput(&logFnc); Dispatcher::addRequest(rq); cycles++; } Next we set up the loop in which we will create new requests. In each cycle we create a new Request instance and use its setValue() function to set an integer value (the current cycle number). We also set our logging function on the request instance before adding this new request to the Dispatcher using its static addRequest() function. This loop will continue until the maximum number of cycles has been reached, or SIGINT has been signaled, using Ctrl+C or similar. this_thread::sleep_for(chrono::seconds(5)); Dispatcher::stop(); cout << "Clean-up done.\n"; return 0; } Finally we wait for five seconds, using the thread's sleep_for() function and the chrono::seconds() function from the chrono STL header. We also call the stop() function on the Dispatcher before returning.

Request class
A request for the Dispatcher always derives from the pure virtual AbstractRequest class: #pragma once #ifndef ABSTRACT_REQUEST_H #define ABSTRACT_REQUEST_H class AbstractRequest { // public: virtual void setValue(int value) = 0; virtual void process() = 0; virtual void finish() = 0; }; #endif This class defines an API with three functions which a deriving class always has to implement, of which the process() and finish() functions are the most generic and likely to be used in any practical implementation. The setValue() function is specific to this demonstration implementation and would likely be adapted or extended to fit a real-life scenario. The advantage of using an abstract class as the basis for a request is that it allows the Dispatcher class to handle many different types of requests, as long as they all adhere to this same basic API.
Using this abstract interface, we implement a basic Request class: #pragma once #ifndef REQUEST_H #define REQUEST_H #include "abstract_request.h" #include <string> using namespace std; typedef void (*logFunction)(string text); class Request : public AbstractRequest { int value; logFunction outFnc; public: void setValue(int value) { this->value = value; } void setOutput(logFunction fnc) { outFnc = fnc; } void process(); void finish(); }; #endif In its header file we first define the logging function pointer's format. After this we implement the request API, adding the setOutput() function to the base API, which accepts a function pointer for logging. Both setter functions merely assign the provided parameter to their respective private class members. Next, the class function implementations: #include "request.h" void Request::process() { outFnc("Starting processing request " + std::to_string(value) + "..."); // } void Request::finish() { outFnc("Finished request " + std::to_string(value)); } Both of these implementations are very basic, merely using the function pointer to output a string indicating the status of the worker thread. In a practical implementation, one would add the business logic to the process() function, with the finish() function containing any functionality to finish up a request, such as writing a map into a string.

Worker class
Next, the Worker class. This contains the logic which will be called by the dispatcher in order to process a request: #pragma once #ifndef WORKER_H #define WORKER_H #include "abstract_request.h" #include <condition_variable> #include <mutex> using namespace std; class Worker { condition_variable cv; mutex mtx; unique_lock<mutex> ulock; AbstractRequest* request; bool running; bool ready; public: Worker() { running = true; ready = false; ulock = unique_lock<mutex>(mtx); } void run(); void stop() { running = false; } void setRequest(AbstractRequest* request) { this->request = request; ready = true; } void getCondition(condition_variable* &cv); }; #endif Whereas the adding of a request to the dispatcher does not require any special logic, the Worker class does require the use of condition variables to synchronize itself with the dispatcher. For the C++11 threads API, this requires a condition variable, a mutex and a unique lock. The unique lock encapsulates the mutex and will ultimately be used with the condition variable, as we will see in a moment. Beyond this we define methods to start and stop the worker, to set a new request for processing and to obtain access to its internal condition variable. Moving on, the rest of the implementation: #include "worker.h" #include "dispatcher.h" #include <chrono> using namespace std; void Worker::getCondition(condition_variable* &cv) { cv = &(this)->cv; } void Worker::run() { while (running) { if (ready) { ready = false; request->process(); request->finish(); } if (Dispatcher::addWorker(this)) { // Use the ready loop to deal with spurious wake-ups. while (!ready && running) { if (cv.wait_for(ulock, chrono::seconds(1)) == cv_status::timeout) { // We timed out, but we keep waiting unless // the worker is // stopped by the dispatcher. } } } } } Beyond the getter function for the condition variable, we define the run() function, which the dispatcher will run for each worker thread upon starting it. Its main loop merely checks that the stop() function hasn't been called yet, which would have set the running boolean value to false and ended the worker thread.
This is used by the dispatcher when shutting down, allowing it to terminate the worker threads. Since boolean values are generally atomic, setting and checking can be done simultaneously without risk or requiring a mutex. Moving on, the check of the ready variable is to ensure that a request is actually waiting when the thread is first run. On the first run of the worker thread, no request will be waiting and thus attempting to process one would result in a crash. Upon the dispatcher setting a new request, this boolean variable will be set to true. If a request is waiting, the ready variable will be set to false again, after which the request instance will have its process() and finish() functions called. This will run the business logic of the request on the worker thread and finalize it. Finally, the worker thread adds itself to the dispatcher using its static addWorker() function. This function will return false if no new request was available, causing the worker thread to wait until a new request has become available. Otherwise the worker thread will continue with the processing of the new request that the dispatcher will have set on it. If asked to wait, we enter a new loop which will ensure that upon waking up from waiting for the condition variable to be signaled, we woke up because we got signaled by the dispatcher (ready variable set to true), and not because of a spurious wake-up. Last of all, we enter the actual wait() function of the condition variable, using the unique lock instance we created before, along with a timeout. If a timeout occurs, we can either terminate the thread, or keep waiting. Here we choose to do nothing and just re-enter the waiting loop.

Dispatcher
As the last item, we have the Dispatcher class itself: #pragma once #ifndef DISPATCHER_H #define DISPATCHER_H #include "abstract_request.h" #include "worker.h" #include <queue> #include <mutex> #include <thread> #include <vector> using namespace std; class Dispatcher { static queue<AbstractRequest*> requests; static queue<Worker*> workers; static mutex requestsMutex; static mutex workersMutex; static vector<Worker*> allWorkers; static vector<thread*> threads; public: static bool init(int workers); static bool stop(); static void addRequest(AbstractRequest* request); static bool addWorker(Worker* worker); }; #endif Most of this should look familiar by now. As one should have surmised by now, this is a fully static class. Moving on with its implementation: #include "dispatcher.h" #include <iostream> using namespace std; queue<AbstractRequest*> Dispatcher::requests; queue<Worker*> Dispatcher::workers; mutex Dispatcher::requestsMutex; mutex Dispatcher::workersMutex; vector<Worker*> Dispatcher::allWorkers; vector<thread*> Dispatcher::threads; bool Dispatcher::init(int workers) { thread* t = 0; Worker* w = 0; for (int i = 0; i < workers; ++i) { w = new Worker; allWorkers.push_back(w); t = new thread(&Worker::run, w); threads.push_back(t); } return true; } After setting up the static class members, the init() function is defined. It starts the specified number of worker threads, keeping a reference to each worker and thread instance in their respective vector data structures. bool Dispatcher::stop() { for (int i = 0; i < allWorkers.size(); ++i) { allWorkers[i]->stop(); } cout << "Stopped workers.\n"; for (int j = 0; j < threads.size(); ++j) { threads[j]->join(); cout << "Joined threads.\n"; } return true; } In the stop() function each worker instance has its stop() function called.
This will cause each worker thread to terminate, as we saw earlier in the Worker class description. Finally, we wait for each thread to join (that is, finish), prior to returning. void Dispatcher::addRequest(AbstractRequest* request) { workersMutex.lock(); if (!workers.empty()) { Worker* worker = workers.front(); worker->setRequest(request); condition_variable* cv; worker->getCondition(cv); cv->notify_one(); workers.pop(); workersMutex.unlock(); } else { workersMutex.unlock(); requestsMutex.lock(); requests.push(request); requestsMutex.unlock(); } } The addRequest() function is where things get interesting. In this one function, a new request is added. What happens next to it depends on whether a worker thread is waiting for a new request or not. If no worker thread is waiting (the worker queue is empty), the request is added to the request queue. The use of mutexes ensures that the access to these queues occurs safely, as the worker threads will simultaneously try to access both queues as well. An important gotcha to note here is the possibility of a deadlock. That is, a situation where each of two threads holds the lock on a resource, with each thread waiting for the other to release its lock before releasing its own. Every situation where more than one mutex is used in a single scope holds this potential. In this function the potential for deadlock lies in the releasing of the lock on the workers mutex and when the lock on the requests mutex is obtained. In the case that this function holds the workers mutex and tries to obtain the requests lock (when no worker thread is available), there is a chance that another thread holds the requests mutex (looking for new requests to handle), while simultaneously trying to obtain the workers mutex (finding no requests and adding itself to the workers queue). The solution here is simple: release a mutex before obtaining the next one. In the situation where one feels that more than one mutex lock has to be held, it is paramount to examine and test one's code for potential deadlocks. In this particular situation the workers mutex lock is explicitly released when it is no longer needed, or before the requests mutex lock is obtained, preventing a deadlock. Another important aspect of this particular section of code is the way it signals a worker thread. As one can see in the first section of the if/else block, when the workers queue is not empty, a worker is fetched from the queue, has the request set on it and then has its condition variable referenced and signaled, or notified. Internally the condition variable uses the mutex we handed it before in the Worker class definition to guarantee only atomic access to it. When the notify_one() function (generally called signal() in other APIs) is called on the condition variable, it will notify the first thread in the queue of threads waiting for the condition variable to return and continue. In the Worker class' run() function we would be waiting for this notification event. Upon receiving it, the worker thread would continue and process the new request. The worker reference will then be removed from the queue until it adds itself again once it is done processing the request.
bool Dispatcher::addWorker(Worker* worker) { bool wait = true; requestsMutex.lock(); if (!requests.empty()) { AbstractRequest* request = requests.front(); worker->setRequest(request); requests.pop(); wait = false; requestsMutex.unlock(); } else { requestsMutex.unlock(); workersMutex.lock(); workers.push(worker); workersMutex.unlock(); } return wait; } With this function a worker thread will add itself to the queue once it is done processing a request. It is similar to the earlier function in that the incoming worker is first actively matched with any request which may be waiting in the request queue. If none are available, the worker is added to the worker queue. Important to note here is that we return a boolean value which indicates whether the calling thread should wait for a new request, or whether it has already received a new request while trying to add itself to the queue. While this code is less complex than that of the previous function, it still holds the same potential deadlock issue due to the handling of two mutexes within the same scope. Here, too, we first release the mutex we hold before obtaining the next one.

Makefile
The Makefile for this dispatcher example is very basic again, gathering all C++ source files in the current folder and compiling them into a binary using g++: GCC := g++ OUTPUT := dispatcher_demo SOURCES := $(wildcard *.cpp) CCFLAGS := -std=c++11 -g3 all: $(OUTPUT) $(OUTPUT): $(GCC) -o $(OUTPUT) $(CCFLAGS) $(SOURCES) clean: rm $(OUTPUT) .PHONY: all

Output
After compiling the application, running it produces the following output for the fifty total requests: $ ./dispatcher_demo.exe Initialised. Starting processing request 1... Starting processing request 2... Finished request 1 Starting processing request 3... Finished request 3 Starting processing request 6... Finished request 6 Starting processing request 8... Finished request 8 Starting processing request 9... Finished request 9 Finished request 2 Starting processing request 11... Finished request 11 Starting processing request 12... Finished request 12 Starting processing request 13... Finished request 13 Starting processing request 14... Finished request 14 Starting processing request 7... Starting processing request 10... Starting processing request 15... Finished request 7 Finished request 15 Finished request 10 Starting processing request 16... Finished request 16 Starting processing request 17... Starting processing request 18... Starting processing request 0… At this point we can already clearly see that, even with each request taking almost no time to process, the requests are clearly being executed in parallel. The first request (request 0) only starts being processed after the 16th request, while the second request already finishes after the ninth request, long before this. The factors which determine which thread, and thus which request, is processed first depend on the OS scheduler and hardware-based scheduling. This clearly shows just how few assumptions one can make about how a multithreaded application will be executed, even on a single platform. Starting processing request 5... Finished request 5 Starting processing request 20... Finished request 18 Finished request 20 Starting processing request 21... Starting processing request 4... Finished request 21 Finished request 4 Here the fourth and fifth requests also finish in a rather delayed fashion. Starting processing request 23... Starting processing request 24... Starting processing request 22...
Finished request 24 Finished request 23 Finished request 22 Starting processing request 26... Starting processing request 25... Starting processing request 28... Finished request 26 Starting processing request 27... Finished request 28 Finished request 27 Starting processing request 29... Starting processing request 30... Finished request 30 Finished request 29 Finished request 17 Finished request 25 Starting processing request 19... Finished request 0 At this point the first request finally finishes. This may indicate that the initialization time for the first request will always delay it relative to the successive requests. Running the application multiple times can confirm this. If the order of processing is relevant, it's important to ensure that this randomness does not negatively affect one's application. Starting processing request 33... Starting processing request 35... Finished request 33 Finished request 35 Starting processing request 37... Starting processing request 38... Finished request 37 Finished request 38 Starting processing request 39... Starting processing request 40... Starting processing request 36... Starting processing request 31... Finished request 40 Finished request 39 Starting processing request 32... Starting processing request 41... Finished request 32 Finished request 41 Starting processing request 42... Finished request 31 Starting processing request 44... Finished request 36 Finished request 42 Starting processing request 45... Finished request 44 Starting processing request 47... Starting processing request 48... Finished request 48 Starting processing request 43... Finished request 47 Finished request 43 Finished request 19 Starting processing request 34... Finished request 34 Starting processing request 46... Starting processing request 49... Finished request 46 Finished request 49 Finished request 45 Request 19 also became fairly delayed, showing once again just how unpredictable a multithreaded application can be. If we were processing a large data set in parallel here, with chunks of data in each request, we might have to pause at some points to account for these delays, as otherwise our output cache might grow too large. As doing so would negatively affect an application's performance, one might have to look at low-level optimizations, as well as the scheduling of threads on specific processor cores, in order to prevent this from happening. Stopped workers. Joined threads. Joined threads. Joined threads. Joined threads. Joined threads. Joined threads. Joined threads. Joined threads. Joined threads. Joined threads. Clean-up done. All ten worker threads which were launched in the beginning terminate here as we call the stop() function of the Dispatcher.

Sharing data
In this article's example we saw how to share information between threads in addition to synchronizing threads. This took the form of the requests we passed from the main thread into the dispatcher, from which each request gets passed on to a different thread. The essential idea behind sharing data between threads is that the data to be shared exists somewhere in a way which is accessible to two or more threads. After this we have to ensure that only one thread can modify the data, and that the data does not get modified while it's being read. Generally we would use mutexes or similar to ensure this.

Using R/W-locks
Readers-writer locks are a possible optimization here, because they allow multiple threads to read simultaneously from a single data source.
If one has an application in which multiple worker threads read the same information repeatedly, it would be more efficient to use read-write locks than basic mutexes, because the attempts to read the data will not block the other threads. A read-write lock can thus be used as a more advanced version of a mutex, namely as one which adapts its behavior to the type of access. Internally it builds on mutexes (or semaphores) and condition variables.

Using shared pointers
First available via the Boost library and introduced natively with C++11, shared pointers are an abstraction of memory management using reference counting for heap-allocated instances. They are partially thread-safe, in that multiple shared pointer instances can be created and the reference count is updated safely, but the referenced object itself is not thread-safe. Depending on the application this may suffice, however. To make them properly thread-safe one can use atomics.

Summary
In this article we looked at how to pass data between threads in a safe manner as part of a fairly complex scheduler implementation. We also looked at the resulting asynchronous processing of said scheduler and considered some potential alternatives and optimizations for passing data between threads. At this point one should be able to safely pass data between threads, as well as synchronize the access to other shared resources. In the next article we will be looking at the native C++ threading API and its primitives. Resources for Article: Further resources on this subject: Multithreading with Qt [article] Understanding the Dependencies of a C++ Application [article] Boost.Asio C++ Network Programming [article]


Microsoft releases TypeScript 3.7 with much-awaited features like Optional Chaining, Assertion functions and more

Savia Lobo
06 Nov 2019
3 min read
Yesterday, Microsoft announced the release of TypeScript 3.7 with new tooling features, optional chaining, nullish coalescing, assertion functions, and much more. This release also includes breaking changes: a few changes in the DOM, where the types in lib.dom.d.ts have been updated, and the removal of the typeArguments property from the TypeReference interface. Also, TypeScript 3.7 emits get/set accessors in .d.ts files, which can cause breaking changes for consumers on older versions of TypeScript like 3.5 and prior. TypeScript 3.6 users will not be impacted, as that version was future-proofed for this feature. Let us have a look at other new features in TypeScript 3.7.

What's new in TypeScript 3.7?

Optional Chaining
TypeScript 3.7 implements Optional Chaining, one of the most highly-demanded ECMAScript features, originally filed as a suggestion five years ago. Optional chaining lets one write code that immediately stops running some expressions if it runs into a null or undefined. The star of the show in optional chaining is the new ?. operator for optional property accesses. Optional chaining also includes two other operations: optional element access, which acts similarly to optional property accesses but allows access to non-identifier properties (e.g. arbitrary strings, numbers, and symbols); and optional calls, which allow one to conditionally call expressions if they're not null or undefined.

Assertion Functions
Assertion functions are a specific set of functions that throw an error if something unexpected happens. Assertions in JavaScript are often used to guard against improper types being passed in. Unfortunately, in TypeScript these checks could never be properly encoded. For loosely-typed code this meant TypeScript was checking less, and for slightly more conservative code it often forced users to use type assertions. Another alternative was to rewrite the code so that the language could analyze it, but this was not convenient. To solve this, TypeScript 3.7 introduces a new concept called "assertion signatures" which models these assertion functions. The first type of assertion signature ensures that whatever condition is being checked must be true for the remainder of the containing scope. The other type of assertion signature doesn't check for a condition, but instead tells TypeScript that a specific variable or property has a different type.

Build-Free Editing with Project References
In TypeScript 3.7, when opening a project with dependencies, TypeScript will automatically use the source .ts/.tsx files instead. This means projects using project references will now see an improved editing experience where semantic operations are up-to-date.

Website and Playground Updates
The TypeScript playground now includes new features like quick fixes for errors, dark/high-contrast mode, and automatic type acquisition so you can import other packages. Each feature is explained through interactive code snippets under the "what's new" menu.

Many users and developers are excited to try out TypeScript 3.7. https://twitter.com/kmsaldana1/status/1191768934648729600 https://twitter.com/mgechev/status/1191769805952438272 To know more about other new features in TypeScript 3.7, read the official release notes.
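A brief hedged sketch of the two headline features (the data shapes and function names below are invented for the example, not taken from the release notes), showing the ?. operator and an assertion signature:

interface User {
  name: string;
  address?: { city?: string };
}

// Optional chaining: stops evaluating and yields undefined
// when address or city is null/undefined.
function cityOf(user: User): string | undefined {
  return user.address?.city;
}

// Assertion function: after this call, TypeScript narrows
// the argument to string for the rest of the scope.
function assertIsString(value: unknown): asserts value is string {
  if (typeof value !== "string") {
    throw new Error("Expected a string");
  }
}

function shout(value: unknown): string {
  assertIsString(value);
  return value.toUpperCase(); // value is now known to be a string
}

console.log(cityOf({ name: "Ada" }), shout("hello"));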
Announcing Feathers 4, a framework for real-time apps and REST APIs with JavaScript or TypeScript Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more


Have Microservices killed the monolithic architecture? Maybe not!

Aaron Lazar
04 Jun 2018
6 min read
Microservices have been growing in popularity for the past few years, since 2014 to be precise. Honestly speaking, they weren't that popular until around 2016; take a look at the steep rise in the curve. The outbreak has happened over the past few years, and there are quite a few factors contributing to their growth, like the cloud, distributed architectures, etc. Source: Google Trends Microservices allow for a clearer and more refined architecture, with services built to work in isolation, without affecting the resilience and robustness of the application in any way. But does that mean that the Monolith is dead and only Microservices reign? Let's find out, shall we? Those of you who participated in this year's survey, I thank you for taking the time out to share such valuable information. For those of you who don't know what the survey is all about, it's a thing that we do every year, where thousands of developers, architects, managers, and admins share their insights with us, and we share our findings with the community. This year's survey was as informative as the last, if not more! We had developers tell us so much about what they're doing, where they see technology heading, and what tools and techniques they use to stay relevant at what they do. So we took the opportunity and asked our respondents a question about the topic under discussion. Source: WWE.com

Revelations
If I asked a developer in 2018 what they thought the response would be, they'd instantly say that a majority would be for microservices. Source: Packtpub Skill Up Survey 2018 If you were the one who guessed the answer was going to be Yes, give yourself a firm pat on the back! It's great to see that 1,603 people are throwing their hands up in the air and building microservices. On the other hand, it's possible that it's purely their manager's decision (see how this forms a barrier to achieving business goals). Anyway, I was particularly concerned about the remaining 314 people who said 'No' (those who skipped answering, now is your chance to say something in the comments section below!).

Why no Microservices?
I thought I'd analyse the possibilities as to why one wouldn't want to use the microservices pattern in their application architecture. It's not like developers are migrating from monoliths to microservices just because everyone else is doing it. Like any other architectural decision, there are several factors that need to be taken into consideration before making the switch. So here's what I thought were some reasons why developers are sticking to monoliths. #1 One troll vs many elves: Complex times Well, imagine you could be attacked by one troll or a hundred house elves. Which situation would you choose to be in, if avoiding both isn't an option? I don't know about you, but I'd choose the troll any day! Keeping the troll's size aside, I'd be better off knowing I had one large enemy in front of me, rather than being surrounded by a hundred miniature ones. The same goes for microservices. More services means more complexity, more issues that could crop up. For developers, more services means that they would need to run or connect to all of them on their machine. Although there are tools that help solve this problem, you have to admit that it's a task to run all services together as a whole application. On the other hand, Ops professionals are tasked with monitoring and keeping all these services up and running.
#2 We lack the expertise Let alone having Developer Rockstars or Admin Ninjas (oops, I shouldn't be using those words now, find out why), if your organisation lacks experienced professionals, you've got a serious problem. What if there's an organisation that has been having issues developing/managing a monolith itself? There's no guarantee that it will be able to manage a microservices-based application more effectively. It's a matter of the organisation having enough hands-on skills needed to perform these tasks. These skills are tough to acquire and it's not simple for organisations to find the right talent. #3 Tower of Babel: Communication gaps In a monolith, communication happens within the application itself and the network channels exist internally. However, this isn't the case for a microservices architecture, as inter-service communication is necessary to keep everything running in tandem. This results in multiple points of failure, complicating things. To minimise failure, each service has a certain number of retries when trying to establish communication with another. When scaled up, these retries add load on the database, with communication formats having to follow strict rules to keep complexity from creeping back in. It's a vicious circle! #4 Rebuilding a monolith When you build an application based on the microservices architecture, you may benefit a great deal from robustness and reliability. However, microservices together form a large, complicated system, which can be managed by orchestration platforms like Kubernetes. If individual teams are managing clusters of these services, it's quite likely that orchestration, deployment, and management of such a system will be a pain. #5 Burning in dependency hell Microservices are notorious for inviting developers to build services in various languages and then to glue them together. While this is an advantage to a certain extent, it complicates dependency management in the entire application. Moreover, dependencies get even more complicated when versions of tools don't receive instantaneous support as they are updated. You and your team can go crazy keeping track of versions and dependencies that need to be managed to maintain smooth functioning of your application. So while the microservice architecture is hot, it is not always the best option, and teams can actually end up making things worse if they choose to make the change unprepared. Yes, the cloud does benefit much more when applications are deployed as services, rather than as a monolith, but the renowned/infamous "lift and shift" method still exists and works when needed. Ultimately, if you think past the hype, the monolith is not really dead yet and is in fact still being deployed and run in several organisations. Finally, I want to stress that it's critical that developers and architects make a well-informed decision, keeping in mind all the above factors, before they choose an architecture. Like they say, "With great power comes great responsibility"; that's exactly what great architecture is all about, rather than just jumping on the bandwagon. Building Scalable Microservices Why microservices and DevOps are a match made in heaven What is a multi layered software architecture?


Python governance vote results are here: The steering council model is the winner

Prasad Ramesh
18 Dec 2018
3 min read
The election to select the governance model for Python, following the stepping down of Guido van Rossum as the BDFL earlier this year, has ended, and PEP 8016 was selected as the winner. PEP 8016 is the steering council model, which focuses on providing a minimal and solid foundation for governance decisions. The vote has chosen a governance PEP that will be implemented on the Python project.

The winner: PEP 8016, the steering council model
Authored by Nathaniel J. Smith and Donald Stufft, this proposal involves a model for Python governance based on a steering council. The council has vast authority, which they intend to use as rarely as possible; instead, they plan to use this power to establish standard processes. The steering council committee consists of five people. A general philosophy is followed: it's better to split up large changes into a series of small changes to be reviewed independently. As opposed to trying to do everything in one PEP, the focus is on providing a minimal and solid foundation for future governance decisions. This PEP was accepted on December 17, 2018.

Goals of the steering council model
The main goals of this proposal are: Sticking to the basics, aka 'be boring'. The authors don't think Python is a good place to experiment with new and untested governance models. Hence, this proposal sticks to mature, well-known processes that have been tested previously. A high-level approach in which the council involves itself as little as possible, as is common in large, successful open source projects. The low-level details are directly derived from Django's governance. Being simple enough for minimum viable governance. The proposal attempts to slim things down to the minimum required, just enough to make it workable. The trimming includes the council, the core team, and the process for changing documentation. A goal is to 'be comprehensive'. The things that need to be defined are covered well for future use. Having a clear set of rules will also help minimize confusion. To 'be flexible and light-weight'. The authors are aware that finding the best processes for working together will take time and experimentation. Hence, they keep the document as minimal as possible, for maximal flexibility to adjust things later. The need for heavy-weight processes like whole-project votes is also minimized. The council will work towards maintaining the quality and stability of the Python language and the CPython interpreter, making the contribution process easy, maintaining relations with the core team, establishing a decision-making process for PEPs, and so on. They have powers to make decisions on PEPs, enforce the project code of conduct, etc. To know more about the election to the committee, visit the Python website. NumPy drops Python 2 support. Now you need Python 3.5 or later. NYU and AWS introduce Deep Graph Library (DGL), a python package to build neural network graphs Python 3.7.2rc1 and 3.6.8rc1 released


Installing and Setting up JavaFX for NetBeans and Eclipse IDE

Packt
17 Sep 2010
7 min read
(For more resources on JavaFX, see here.)

Introduction
Today, in the age of Web 2.0, AJAX, and the iPhone, users have come to expect their applications to provide a dynamic and engaging user interface that delivers rich graphical content, audio, and video, all wrapped in GUI controls with animated cinematic-like interactions. They want their applications to be connected to the web of information and social networks available on the Internet. Developers, on the other hand, have become accustomed to tools such as AJAX/HTML5 toolkits, Flex/Flash, Google Web Toolkit, Eclipse/NetBeans RCP, and others that allow them to build and deploy rich and web-connected client applications quickly. They expect their development languages to be expressive (either through syntax or specialized APIs), with features that liberate them from the tyranny of verbosity and empower them with the ability to express their intents declaratively.

The Java proposition
During the early days of the Web, the Java platform was the first to introduce rich content and interactivity in the browser using the applet technology (predating JavaScript and even Flash). Not too long after applets appeared, Swing was introduced as the unifying framework to create feature-rich applications for the desktop and the browser. Over the years, Swing matured into an amazingly robust GUI technology used to create rich desktop applications. However powerful Swing is, its massive API stack lacks the lightweight higher abstractions that application and content developers have been using in other development environments. Furthermore, the applet plugin technology was (as admitted by Sun) neglected and failed to compete against similar technologies such as Flash for browser-hosted rich applications.

Enter JavaFX
JavaFX is Sun's (now part of Oracle) answer to the next generation of rich, web-enabled, deeply interactive applications. JavaFX is a complete platform that includes a new language, development tools, build tools, deployment tools, and new runtimes to target desktop, browser, mobile, and entertainment devices such as televisions. While JavaFX is itself built on the Java platform, that is where the commonalities end. The new JavaFX scripting language is designed as a lightweight, expressive, and dynamic language to create web-connected, engaging, visually appealing, and content-rich applications. The JavaFX platform will appeal to both technical designers and developers alike. Designers will find JavaFX Script to be a simple, yet expressive language, perfectly suited for the integration of graphical assets when creating visually-rich client applications. Application developers, on the other hand, will find its lightweight, dynamic type inference system and script-like feel a productivity booster, allowing them to express GUI layout, object relationships, and powerful two-way data bindings all using a declarative and easy syntax. Since JavaFX runs on the Java Platform, developers are able to reuse existing Java libraries directly from within JavaFX, tapping into the vast community of existing Java developers, vendors, and libraries. This is an introductory article to JavaFX. Use its recipes to get started with the platform. You will find instructions on how to install the SDK and directions on how to set up your IDE.

Installing the JavaFX SDK
The JavaFX software development kit (SDK) is a set of core tools needed to compile, run, and deploy JavaFX applications.
If you feel at home at the command line, then you can start writing code with your favorite text editor and interact with the SDK tools directly. However, if you want to see code-completion hints after each dot you type, then you can always use an IDE such as NetBeans or Eclipse to get you started with JavaFX (see other recipes on IDEs). This section outlines the necessary steps to set up the JavaFX SDK successfully on your computer. These instructions apply to JavaFX SDK version 1.2.x; future versions may vary slightly. Getting ready Before you can start building JavaFX applications, you must ensure that your development environment meets the minimum requirements. As of this writing, the following are the minimum requirements to run the current released version of JavaFX runtime 1.2. Minimum system requirements How to do it... The first step for installing the SDK on your machine is to download it from http://javafx.com/downloads/. Select the appropriate SDK version as shown in the next screenshot. Once you have downloaded the SDK for your corresponding system, follow these instructions for installation on Windows, Mac, Ubuntu, or OpenSolaris. Installation on Windows Find and double-click on the newly downloaded installation package (.exe file) to start. Follow the directions from the installer wizard to continue with your installation. Make sure to select the location for your installation. The installer will run a series of validations on your system before installation starts. If the installer finds no previously installed SDK (or the incorrect version), it will download an SDK that meets the minimum requirements (which lengthens your installation). Installation on Mac OS Prior to installation, ensure that your Mac OS meets the minimum requirements. Find and double-click on the newly downloaded installation package (.dmg file) to start. Follow the directions from the installer wizard to continue your installation. The Mac OS installer will place the installed files at the following location: /Library/Frameworks/JavaFX.framework/Versions/1.2. Installation on Ubuntu Linux and OpenSolaris Prior to installation, ensure that your Ubuntu or OpenSolaris environment meets the minimum requirements. Locate the newly downloaded installation package to start installation. For Linux, the file will end with *-linux-i586.sh. For OpenSolaris, the installation file will end with *-solaris-i586.sh. Move the file to the directory where you want to install the content of the SDK. Make the file executable (chmod 755) and run it. This will extract the content of the SDK in the current directory. The installation will create a new directory, javafx-sdk1.2, which is your JavaFX home location ($JAVAFX_HOME). Now add the JavaFX binaries to your system's $PATH variable (export PATH=$PATH:$JAVAFX_HOME/bin). When your installation steps are completed, open a command prompt and validate your installation by checking the version of the SDK. $> javafx -version javafx 1.2.3_b36 You should get the current version number for your installed JavaFX SDK displayed. How it works... Version 1.2.x of the SDK comes with several tools and other resources to help developers get started with JavaFX development right away. The major (and more interesting) directories in the SDK include: Setting up JavaFX for the NetBeans IDE The previous recipe shows you how to get started with JavaFX using the SDK directly.
However if you are more of a syntax-highlight, code-completion, click-to-build person, you will be delighted to know that the NetBeans IDE fully supports JavaFX development. JavaFX has first-class support within NetBeans, with functionalities similar to those found in Java development including: Syntax highlighting Code completion Error detection Code block formatting and folding In-editor API documentation Visual preview panel Debugging Application profiling Continuous background build And more... This recipe shows how to set up the NetBeans IDE for JavaFX development. You will learn how to configure NetBeans to create, build, and deploy your JavaFX projects. Getting ready Before you can start building JavaFX applications in the NetBeans IDE, you must ensure that your development environment meets the minimum requirements for JavaFX and NetBeans (see previous recipe Installing the JavaFX SDK for minimum requirements). Version 1.2 of the JavaFX SDK requires NetBeans version 6.5.1 (or higher) to work properly. How to do it... As a new NetBeans user (or first-time installer), you can download NetBeans and JavaFX bundled and ready to use. The bundle contains the NetBeans IDE and all other required JavaFX SDK dependencies to start development immediately. No additional downloads are required with this option. To get started with the bundled NetBeans, go to http://javafx.com/downloads/ and download the NetBeans + JavaFX bundle as shown in the next screenshot (versions will vary slightly as newer software become available).

That '70s language: AWK programming

Pavan Ramchandani
17 May 2018
9 min read
AWK is an interpreted programming language designed for text processing and report generation. It is typically used for data manipulation, such as searching for items within data, performing arithmetic operations, and restructuring raw data for generating reports in most Unix-like operating systems. Today, we will explore the AWK philosophy and the different types of AWK that exist, starting from its original implementation in 1977 at AT&T's Laboratories, Inc. We will also look at the various implementation areas of AWK in data science today. Using AWK programs, one can handle repetitive text-editing problems with very simple and short programs. It is a pattern-action language; it searches for patterns in a given input and, when a match is found, it performs the corresponding action. The pattern can be made of strings, regular expressions, comparison operations on numbers, fields, variables, and so on. It reads the input files and splits each input line of the file into fields automatically. AWK has most of the well-designed features that every programming language should contain. Its syntax particularly resembles that of the C programming language. It is named after its original three authors: Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan. AWK is a very powerful, elegant, and simple language that every person dealing with text processing should be familiar with. This article is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming. This book will introduce you to the AWK programming language and get you working hands-on with practical implementations of AWK.

Types of AWK
The AWK language was originally implemented as an AWK utility on Unix. Today, most Linux distributions provide the GNU implementation of AWK (GAWK), and a symlink for AWK is created from the original GAWK binary. The AWK utility can be categorized into the following three types, depending upon the type of interpreter it uses for executing AWK programs: AWK: This is the original AWK interpreter available from AT&T Laboratories. However, it is not used much nowadays and hence it might not be well-maintained. Its limitation is that it splits a line into a maximum of 99 fields. It was updated and replaced in the mid-1980s with an enhanced version called New AWK (NAWK). NAWK: This is AT&T's latest development on the AWK interpreter. It is well-maintained by one of the original authors of AWK - Dr. Brian W. Kernighan. GAWK: This is the GNU project's implementation of the AWK programming language. All GNU/Linux distributions are shipped with GAWK by default and hence it is the most popular version of AWK. The GAWK interpreter is fully compatible with AWK and NAWK. Beyond these, we also have other, less popular, AWK interpreters and translators, mentioned as follows. These variants are useful in operations when you want to translate your AWK program to C, C++, or Perl: MAWK: Michael Brennan's interpreter for AWK. TAWK: Thompson Automation interpreter/compiler/Microsoft Windows DLL for AWK. MKSAWK: Mortice Kern Systems interpreter/compiler for AWK. AWKCC: An AWK translator to C (might not be well-maintained). AWKC++: Brian Kernighan's AWK translator to C++ (experimental). It can be downloaded from: https://9p.io/cm/cs/who/bwk/awkc++.ps. AWK2C: An AWK translator to C. It uses GNU AWK libraries extensively. A2P: An AWK translator to Perl. It comes with Perl. AWKA: Yet another AWK translator to C (comes with the library), based on MAWK. It can be downloaded from: http://awka.sourceforge.net/download.html.
When and where to use AWK

AWK is simpler than any other utility for text processing and is available by default on Unix-like operating systems. However, some people might say Perl is a superior choice for text processing, as AWK is functionally a subset of Perl; the learning curve for Perl, though, is steeper than that of AWK, and AWK programs are smaller and hence quicker to execute. Anybody who knows the Linux command line can start writing AWK programs in no time. Here are a few use cases of AWK:

Text processing
Producing formatted text reports/labels
Performing arithmetic operations on fields of a file
Performing string operations on different fields of a file

Programs written in AWK are smaller than they would be in other higher-level languages for similar text processing operations. AWK programs are interpreted on a GNU/Linux Terminal and thus avoid the separate compiling and debugging phases of software development in other languages.

Getting started with installation

This section describes how to set up the AWK environment on your GNU/Linux system, and we'll also discuss the workflow of AWK. Then, we'll look at different methods for executing AWK programs.

Installation on Linux

Generally, AWK is installed by default on most GNU/Linux distributions. Using the which command, you can check whether it is installed on your system or not. In case AWK is not installed on your system, you can do so in one of two ways:

Using the package manager of the corresponding GNU/Linux system
Compiling from the source code

Let's take a look at each method in detail in the following sections.

Using the package manager

Different flavors of GNU/Linux distributions have different package-management utilities. If you are using a Debian-based GNU/Linux distribution, such as Ubuntu, Mint, or Debian, then you can install AWK using the Advanced Package Tool (APT) package manager, as follows:

[ shiwang@linux ~ ] $ sudo apt-get update -y
[ shiwang@linux ~ ] $ sudo apt-get install gawk -y

Similarly, to install AWK on an RPM-based GNU/Linux distribution, such as Fedora, CentOS, or RHEL, you can use the Yellowdog Updater, Modified (YUM) package manager, as follows:

[ root@linux ~ ] # yum update -y
[ root@linux ~ ] # yum install gawk -y

For installation of AWK on openSUSE, you can use the zypper (zypper command line) package-management utility, as follows:

[ root@linux ~ ] # zypper update -y
[ root@linux ~ ] # zypper install gawk -y

Once the installation is finished, make sure AWK is accessible through the command line. We can check that using the which command, which will return the absolute path of AWK on our system:

[ root@linux ~ ] # which awk
/usr/bin/awk

You can also use awk --version to find the AWK version on our system:

[ root@linux ~ ] # awk --version

Compiling from the source code

Like every other open source utility, the GNU AWK source code is freely available for download as part of the GNU project. Previously, you saw how to install AWK using the package manager; now, you will see how to install AWK by compiling its source code on a GNU/Linux distribution. The following steps are applicable to most GNU/Linux software:

Download the source code from a GNU project FTP site.
Here, we will use the wget command-line utility to download it; however, you are free to choose any other program you are comfortable with, such as curl:

[ shiwang@linux ~ ] $ wget http://ftp.gnu.org/gnu/gawk/gawk-4.1.3.tar.xz

Extract the downloaded source code:

[ shiwang@linux ~ ] $ tar xvf gawk-4.1.3.tar.xz

Change your working directory and execute the configure script to configure GAWK as per the working environment of your system:

[ shiwang@linux ~ ] $ cd gawk-4.1.3 && ./configure

Once the configure command completes its execution successfully, it will generate the makefile. Now, compile the source code by executing the make command:

[ shiwang@linux ~ ] $ make

Type make install to install the programs and any data files and documentation. When installing into a prefix owned by root, it is recommended that the package be configured and built as a regular user, and that only the make install phase be executed with root privileges:

[ shiwang@linux ~ ] $ sudo make install

Upon successful execution of these five steps, you have compiled and installed AWK on your GNU/Linux distribution. You can verify this by executing the which awk command in the Terminal or awk --version:

[ root@linux ~ ] # which awk
/usr/bin/awk

Now you have a working AWK/GAWK installation and we are ready to begin AWK programming, but before that, the next section describes the workflow of the AWK interpreter.

If you are running macOS X, AWK (and not GAWK) is installed by default. For GAWK installation on macOS X, please refer to MacPorts for GAWK.

Workflow of AWK

Having a basic knowledge of the AWK interpreter's workflow will help you to better understand AWK and will result in more efficient AWK program development. Hence, before getting your hands dirty with AWK programming, you need to understand its internals. The AWK workflow can be summarized as shown in the following figure.

Let's take a look at each operation:

READ OPERATION: AWK reads a line from the input stream (file, pipe, or stdin) and stores it in memory. It works on text input, which can be a file, the standard input stream, or the output of a pipe, and splits it into records and fields:

Records: An AWK record is a single, continuous data input that AWK works on. Records are bounded by a record separator, whose value is stored in the RS variable. The default value of RS is a newline character, so the lines of input are considered records by the AWK interpreter. Records are read continuously until the end of the input is reached. Figure 1.2 shows how input data is broken into records and then further into fields.

Fields: Each record can further be broken down into individual chunks called fields. Like records, fields are bounded. The default field separator is any amount of whitespace, including tab and space characters. So by default, lines of input are further broken down into individual words separated by whitespace. You can refer to the fields of a record by a field number, beginning with 1. The last field in each record can be accessed by its number or with the NF special variable, which contains the number of fields in the current record, as shown in Figure 1.3.

EXECUTE OPERATION: All AWK commands are applied sequentially to the input (records and fields). By default, AWK executes commands on each record/line. This behavior of AWK can be restricted by the use of patterns.

REPEAT OPERATION: The process of read and execute is repeated until the end of the file is reached.

The following flowchart depicts this read-execute-repeat workflow.
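The record and field variables mentioned above are easiest to see in action with a small program. The following sketch is illustrative rather than taken from the book; it assumes a colon-separated file such as /etc/passwd and uses only the standard FS, NR, and NF variables:

BEGIN { FS = ":" }                          # use ":" as the field separator instead of whitespace
      { print NR, $1, "has", NF, "fields" } # record number, first field, and field count per record

Saved as fields.awk, it could be run with awk -f fields.awk /etc/passwd, printing one line of output for every record processed during the read-execute-repeat cycle.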
We introduced you to the AWK programming language and got ourselves a quick primer to get started with application development. If you found this post useful, do check out the book Learning AWK Programming to learn more about the intricacies of the AWK programming language for text processing.

The oldest programming languages in use today
What is the difference between functional and object oriented programming?
Systems programming with Go in UNIX and Linux

Getting Started with Salesforce Lightning Experience

Packt
02 Mar 2017
8 min read
In this article by Rakesh Gupta, author of the book Mastering Salesforce CRM Administration, we will start with an overview of the Salesforce Lightning Experience and its benefits, which takes the discussion forward to the various business use cases where it can boost the sales representatives' productivity. We will also discuss the different Sales Cloud and Service Cloud editions offered by Salesforce. (For more resources related to this topic, see here.)

Getting started with Lightning Experience

Lightning Experience is a new-generation, productive user interface designed to help your sales team close more deals and sell quicker and smarter. The upswing in mobile usage is influencing the way people work. Sales representatives are now using mobile to research potential customers, get the details of nearby customer offices, socially connect with their customers, and even more. That's why Salesforce synced the desktop Lightning Experience with mobile Salesforce1.

Salesforce Lightning Editions

With its Summer '16 release, Salesforce announced the Lightning Editions of Sales Cloud and Service Cloud. The Lightning Editions are a completely reimagined packaging of Sales Cloud and Service Cloud, which offer additional functionality to their customers and increased productivity with a relatively small increase in cost.

Sales Cloud Lightning Editions

Sales Cloud is a product designed to automate your sales process. By implementing it, an organization can boost its sales process. It includes Campaign, Lead, Account, Contact, Opportunity, Report, Dashboard, and many other features as well. Salesforce offers various Sales Cloud editions, and as per business needs, an organization can buy any of these different editions, which are shown in the following image:

Let's take a closer look at the three Sales Cloud Lightning Editions:

Lightning Professional: This edition is for small and medium enterprises (SMEs). It is designed for business needs where full-featured CRM functionality is required. It provides the CRM functionality for marketing, sales, and service automation. Professional Edition is a perfect fit for small- to mid-sized businesses. After the Summer '16 release, in this edition, you can create a limited number of processes, record types, roles, profiles, and permission sets. For each Professional Edition license, organizations have to pay USD 75 per month.
Lightning Enterprise: This edition is for businesses with large and complex business requirements. It includes all the features available in the Professional Edition, plus it provides advanced customization capabilities to automate business processes and web service API access for integration with other systems. Enterprise Edition also includes processes, workflows, approval processes, profiles, page layouts, and custom app development. In addition, organizations also get the Salesforce Identity feature with this edition. For each Enterprise Edition license, organizations have to pay USD 150 per month.
Lightning Unlimited: This edition includes all Salesforce.com features for an entire enterprise. It provides all the features of Enterprise Edition and a new level of platform flexibility for managing and sharing all of their information on demand. The key features of Salesforce.com Unlimited Edition (in addition to Enterprise features) are premier support, full mobile access, and increased storage limits. It also includes Work.com, Service Cloud, knowledge base, live agent chat, multiple sandboxes, and unlimited custom app development.
While purchasing Salesforce.com licenses, organizations have to negotiate with Salesforce to get the maximum number of sandboxes. To know more about these license types, please visit the Salesforce website at https://www.salesforce.com/sales-cloud/pricing/. Service Cloud Lightning Editions Service Cloud helps your organization to streamline the customer service process. Users can access it anytime, anywhere, and from any device. It will help your organization to close a case faster. Service agents can connect with customers through the agent console, meaning agents can interact with customers through multiple channels. Service Cloud includes case management, computer telephony integration (CTI), Service Cloud console, knowledge base, Salesforce communities, Salesforce Private AppExchange, premier+ success plan, report, and dashboards, with many other analytics features. The various Service Cloud Lightning Editions are shown in the following image: Let’s take a closer look at the three Service Cloud Lightning Edition: Lightning Professional: This edition is for SMEs. It provides CRM functionality for customer support through various channels. It is a perfect fit for small- to mid-sized businesses. It includes features, such as case management, CTI integration, mobile access, solution management, content library, reports, and analytics, along with Sales features such as opportunity management and forecasting. After the Summer'16 release, in this edition, you can create a limited number of processes, record types, roles, profiles, and permission sets. For each Professional Edition license, organizations have to pay USD 75 per month. Lightning Enterprise: This edition is for businesses with large and complex business requirements. It includes all the features available in the Professional edition, plus it provides advanced customization capabilities to automate business processes and web service API access for integration with other systems. It also includes Service console, Service contract and entitlement management, workflow and approval process, web chat, offline access, and knowledge base. Organizations get Salesforce Identity feature with this edition. For each Enterprise Edition license, organizations have to pay USD 150 per month. Lightning Unlimited: This edition includes all Salesforce.com features for an entire enterprise. It provides all the features of Enterprise Edition and a new level of platform flexibility for managing and sharing all of their information on demand. The key features of Salesforce.com Unlimited edition (in addition to the Enterprise features) are premier support, full mobile access, unlimited custom apps, and increased storage limits. It also includes Work.com, Service Cloud, knowledge base, live agent chat, multiple sandboxes, and unlimited custom app development. While purchasing the licenses, organizations have to negotiate with Salesforce to get the maximum number of sandboxes. To know more about these license types, please visit the Salesforce website at https://www.salesforce.com/service-cloud/pricing/. Creating a Salesforce developer account To get started with the given topics in this, it is recommended to use a Salesforce developer account. Using Salesforce production instance is not essential for practicing. If you currently do not have your developer account, you can create a new Salesforce developer account. 
The Salesforce developer account is completely free and can be used to practice newly learned concepts, but you cannot use it for commercial purposes. To create a Salesforce developer account, follow these steps:

Visit the website http://developer.force.com/.
Click on the Sign Up button. It will open a sign-up page; fill it out to create an account for yourself. The sign-up page will look like the following screenshot:

Once you register for the developer account, Salesforce.com will send the login details to the e-mail ID you provided during registration. By following the instructions in the e-mail, you are ready to get started with Salesforce.

Enabling the Lightning Experience for users

Once you are ready to roll out the Lightning Experience for your users, navigate to the Lightning Setup page, which is available in Setup, by clicking Lightning Experience. The slider button at the bottom of the Lightning Setup page, shown in the following screenshot, enables Lightning Experience for your organization. Flip that switch, and Lightning Experience will be enabled for your Salesforce organization. The Lightning Experience is now enabled for all standard profiles by default.

Granting permission to users through Profile

Depending on the number of users for a rollout, you have to decide how to enable the Lightning Experience for them. If you are planning to do a mass rollout, it is better to update Profiles.

Business scenario: Helina Jolly is working as a system administrator at Universal Containers. She has received a requirement to enable Lightning Experience for a custom profile, Training User. First of all, create a custom profile for the license type Salesforce and give it the name Training User. To enable the Lightning Experience for a custom profile, follow these instructions:

In the Lightning Experience user interface, click on the page-level action menu | ADMINISTRATION | Users | Profiles, and then select the Training User profile, as shown in the following screenshot:
Then, navigate to the System Permission section, and select the Lightning Experience User checkbox.

Granting permission to users through permission sets

If you want to enable the Lightning Experience for a small group of users, or if you are not sure whether you will keep the Lightning Experience on for a group of users, consider using permission sets. Permission sets are mainly a collection of settings and permissions that give users access to numerous tools and functions within Salesforce. By creating a permission set, you can grant the Lightning Experience User permission to the users in your organization.

Switching between Lightning Experience and Salesforce Classic

If you have enabled Lightning Experience for your users, they can use the switcher to switch back and forth between Lightning Experience and Salesforce Classic. The switcher is very smart. Every time a user switches, it remembers that user experience as their new default preference. So, if a user switches to Lightning Experience, it is now their default user experience until they switch back to Salesforce Classic. If you want to restrict your users from switching back to Salesforce Classic, you have to develop an Apex trigger or a process with Flow (a brief illustrative sketch of such a trigger appears after the resource links at the end of this article). When the UserPreferencesLightningExperiencePreferred field on the User object is true, it redirects the user to the Lightning Experience interface.

Summary

In this article, we covered an overview of the Salesforce Lightning Experience. We also covered the various Salesforce editions available in the market.
We also went through standard and custom objects. Resources for Article: Further resources on this subject: Configuration in Salesforce CRM [article] Salesforce CRM Functions [article] Introduction to vtiger CRM [article]
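Returning to the earlier note about restricting users from switching back to Salesforce Classic: the following is a minimal, illustrative Apex sketch only, not code from the book. It assumes you simply want to re-assert the UserPreferencesLightningExperiencePreferred flag mentioned above whenever a user record is updated; the trigger name is made up, and a real org would add tests and more selective conditions:

trigger KeepLightningPreference on User (before update) {
    // Illustrative only: re-assert the Lightning preference so that a switch
    // back to Salesforce Classic does not persist for these users.
    for (User u : Trigger.new) {
        if (u.UserPreferencesLightningExperiencePreferred == false) {
            u.UserPreferencesLightningExperiencePreferred = true;
        }
    }
}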

Why moving from a monolithic architecture to microservices is so hard, Gitlab’s Jason Plum breaks it down [KubeCon+CNC Talk]

Amrata Joshi
19 Dec 2018
12 min read
Last week, at the KubeCon+CloudNativeCon North America 2018, Jason Plum, Sr. software engineer, distribution at GitLab spoke about GitLab, Omnibus, and the concept of monolith and its downsides. He spent the last year working on the cloud native helm charts and breaking out a complicated pile of code. This article highlights few insights from Jason Plum’s talk on Monolith to Microservice: Pitchforks Not Included at the KubeCon + CloudNativeCon. Key takeaways “You could not have seen the future that you live in today, learn from what you've got in the past, learn what's available now and work your way to it.” - Jason Plum GitLab’s beginnings as the monolithic project provided the means for focused acceleration and innovation. The need to scale better and faster than the traditional models caused to reflect on our choices, as we needed to grow beyond the current architecture to keep up. New ways of doing things require new ways of looking at them. Be open minded, and remember your correct choices in the past could not see the future you live in. “So the real question people don't realize is what is GitLab?”- Jason Plum Gitlab is the first single application to have the entire DevOps lifecycle in a single Interface. Omnibus - The journey from a package to a monolith “We had a group of people working on a single product to binding that and then we took that, we bundled that. And we shipped it and we shipped it and we shipped it and we shipped it and all the twenties every month for the entire lifespan of this company we have done that, that's not been easy. Being a monolith made that something that was simple to do at scale.”- Jason Plum In the beginning it was simple as Ruby on Rails was on a single codebase and users had to deploy it from source. Just one gigantic code was used but that's not the case these days. Ruby on Rails is still used for the primary application but now a shim proxy called workhorse is used that takes the heavy lifting away from Ruby. It ensures the users and their API’s are are responsive. The team at GitLab started packaging this because doing everything from source was difficult. They created the Omnibus package which eventually became the gigantic monolith. Monoliths make sense because... Adding features is simple It’s easy as everything is one bundle Clear focus for Minimum Viable Product (MVP) Advantages of Omnibus Full-stack bundle provides all components necessary to use every feature of GitLab. Simple to install. Components can be individually enabled/disabled. East to distribute. Highly controlled, version locked components. Guaranteed configuration stability. The downsides of monoliths “The problem is this thing is massive” - Jason Plum The Omnibus package can work on any platform, any cloud and under any distribution. But the question is how many of us would want to manage fleets of VMs? This package has grown so much that it is 1.5 gigabytes and unpacked. It has all the features and is still usable. If a user downloads 500 megabytes as an installation package then it unpacks almost a gigabyte and a half. This package contains everything that is required to run the SaaS but the problem is that this package is massive. “The trick is Git itself is the reason that moving to cloud native was hard.” - Jason Plum While using Git, the users run a couple of commands, they push them and deploy the app. But at the core of that command is how everything is handled and how everything is put together. Git works with snapshots of the entire file. 
The number of files include, every file the user has and every version the user had. It also involves all the indexes and references and some optimizations. But the problem is the more the files, the harder it gets. “Has anybody ever checked out the Linux tree? You check out that tree, get your coffee, come back check out, any branch I don't care what it is and then dip that against current master. How many files just got read on the file system?” - Jason Plum When you come back you realize that all the files that are marked as different and between the two of them when you do diff, that information is not stored, it's not greeting and it is not even cutting it out. It is running differently on all of those files. Imagine how bad that gets when you have 10 million lines of code in a repository that's 15 years old ?  That’s expensive in terms of performance.  - Jason Plum Traditional methods - A big problem “Now let's actually go and make a branch make some changes and commit them right. Now you push them up to your fork and now you go into add if you on an M R. Now it's my job to do the thing that was already hard on your laptop, right? Okay cool, that's one of you, how about 10,000 people a second right do you see where this is going? Suddenly it's harder but why is this the problem?” - Jason Plum The answer is traditional methods, as they are quite slow. If we have hundreds of things in the fleet, accessing tens of machines that are massive and it still won’t work because the traditional methods are a problem. Is NFS a solution to this problem? NFS (Network File System) works well when there are just 10 or 100 people. But if a user is asked to manage an NFS server for 5,000 people, one might rather choose pitchfork. NFS is capable but it can’t work at such a scale. The Git team now has a mount that has to be on every single node, as the API code and web code and other processes which needs to be functional enough to read the files. The team has previously used Garrett, Lib Git to read the files on the file system. Every time, one reads the file, the whole file used to get pulled. This gave rise to another problem, disk i/o problems. Since, everybody tries to read the disparate set of files, the traffic increases. “Okay so we have definitely found a scaling limit now we can only push the traditional methods of up and out so far before we realize that that's just not going to work because we don't have big enough pipes, end of line. So now we've got all of this and we've just got more of them and more of them and more of them. And all of a sudden we need to add 15 nodes to the fleet and another 15 nodes to the fleet and another 15 nodes to the fleet to keep up with sudden user demand. With every single time we have to double something the choke points do not grow - they get tighter and tighter” - Jason Plum The team decided to take a second look at the problem and started working on a project called Gitaly. They took the API calls that the users would make to live Git. So the Git mechanics was sent over a GRPC and then Gitaly was put on the actual file servers. Further the users were asked to call for a diff on whatever they want and then Gitaly was asked for the response. There is no need of NFS now. “I can send a 1k packet get a 4k response instead of NFS and reading 10,000 files. 
We centralized everything across and this gives us the ability to actually meet throughput because that pipe that's not getting any bigger suddenly has 1/10 of the traffic going through it.” - Jason Plum This leaves more space for users to easily get to the file servers and further removes the need of NFS mounts for everything. Incase one node is lost then half of the fleet is not lost in an instant. How is Gitaly useful? With Gitaly the throughput requirement significantly reduced. The service nodes no more need disk access. It provides optimization for specific problems. How to solve Git’s performance related issue? For better optimization and performance it is important to treat it like a service or like a database. The file system is still in use and all of the accesses to the files are on the node where we have the best performance and best caching and there is no issue with regards to the network. “To take the monolith and rip a chunk out make it something else and literally prop the thing up, but how long are we going to be able to do this?” - Jason Plum If a user plans to upload something then he/she has to use a file system and which means that NFS hasn't gone away. Do we really need to have NFS because somebody uploaded a cat picture? Come on guys we can do better than that right?- Jason Plum The next solution was to take everything as a traditional file that does not get and move into object store as an option. This matters because there is no need to have a file system locally. The files can be handed over to a service that works well. And it could run on Prem in a cloud and can be handled by any number of men and service providers. Pets cattle is a popular term by CERN which means anything that can be replaced easily is cattle and anything that you have to care and feed for on a regular basis is a pet. The pet could be the stateful information, for example, database. The problem can be better explained with configuring the Omnibus at scale. If there are  hundreds of the VM’s and they are getting installed, further which the entire package is getting installed. So now there are 20 gigabytes per VM. The package needs to be downloaded for all the VM’s which means almost 500 megabytes. All the individual components can be configured out of the Omnibus. But even the load gets spreaded, it will still remain this big. And each of the nodes will at least take two minutes to come up from. So to speed up this process, the massive stack needs to be broken down into chunks and containers so they can be treated as individualized services. Also, there is no need of NFS as the components are no longer bound to the NFS disk. And this process would now take just five seconds instead of two minutes. A problem called legacy debt, a shared file system expectation which was a bugger. If there are separate containers and there is no shared disk then it could again give rise to a problem. “I can't do a shared disk because if we do shared disk through rewrite many. What's the major provider that will do that for us on every platform, anybody remember another three-letter problem.” - Jason Plum Then there came an interesting problem called workhorse, a smart proxy that talks to the UNIX sockets and not TCP. Though this problem got fixed. 
Time constraints - another problem

“We can't break existing users and we can't have hiccups we have to think about everything ahead of time plan well and execute.” - Jason Plum

Time constraints are a serious problem for a project's developers, development resources, milestones, roadmaps, and deliverables. New features keep coming into the project, and the project keeps functioning in the background, but the existing users can't be kept waiting.

Is it possible to define individual component requirements?

“Do you know how much CPU you need when idle versus when there's 10 people versus literally some guy clicking around and if files because he's one to look at what the kernel would like in 2 6 2 ?” - Jason Plum

Monitoring helps to understand the component requirements. Metrics and performance data are a few of the key elements for getting the exact component requirements. Other parameters like network, throughput, load balancing, services, and so on also play an important role. But the problem is how to deal with throughput? How to balance the services? How to ensure that those services are always up? Then another question comes up regarding the providers and load balancers, as not everyone wants to use the same load balancers or the same services. The system must support the load balancers of all the major cloud providers, which is difficult.

Issues with scaling

“Maybe 50 percent for the thing that needs a lot of memory is a bad idea. I thought 50 percent was okay because when I ran a QA test against it, it didn't ever use more than 50 percent of one CPU. Apparently when I ran three more it now used 115 percent and I had 16 pounds and it fell over again.” - Jason Plum

It's important to know which things need to be scaled horizontally and which ones need to be scaled vertically. Whether to go automated or manual is also a crucial question. It is equally important to understand which things should be configurable and how to tweak them, as the use cases may vary from project to project. So, one should know how to go about a test and how to document a test.

Issues with resilience

“What happens to the application when a node, a whole node disappears off the cluster? Do you know how that behaves?” - Jason Plum

It is important to understand which things shouldn't be on the same nodes. But the problem is how to recover. These things are not known, and by the time one understands the problem and the solution, it is too late. We need new ways of examining these issues and of planning the solutions.

Jason's insightful talk on Monolith to Microservice gives a perfect end to KubeCon + CloudNativeCon and is a must-watch for everyone.

Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNative
RedHat contributes etcd, a distributed key-value store project, to the Cloud Native Computing Foundation at KubeCon + CloudNativeCon
Oracle introduces Oracle Cloud Native Framework at KubeCon+CloudNativeCon 2018

PostGIS extension: pgRouting for calculating driving distance [Tutorial]

Pravin Dhandre
19 Jul 2018
5 min read
pgRouting is an extension of PostGIS and the PostgreSQL geospatial database. It adds routing and other network analysis functionality. In this tutorial, we will learn to work with the pgRouting tool to estimate the driving distance from all nearby nodes, which can be very useful in supply chain, logistics, and transportation-based applications. This tutorial is an excerpt from a book written by Mayra Zurbaran, Pedro Wightman, Paolo Corti, Stephen Mather, Thomas Kraft and Bborie Park, titled PostGIS Cookbook - Second Edition.

Driving distance is useful when user sheds are needed that give realistic driving distance estimates, for example, for all customers within five miles driving, biking, or walking distance. These estimates can be contrasted with buffering techniques, which assume no barrier to travelling and are useful for revealing the underlying structures of our transportation networks relative to individual locations (a short buffer-based comparison appears at the end of this tutorial).

Driving distance (pgr_drivingDistance) is a query that calculates all nodes within the specified driving distance of a starting node. This is an optional function compiled with pgRouting; so if you compile pgRouting yourself, make sure that you enable it and include the CGAL library, an optional dependency for pgr_drivingDistance.

We will start by loading a test dataset. You can get some really basic sample data from https://docs.pgrouting.org/latest/en/sampledata.html. In the following example, we will look at all users within a distance of three units from our starting point, that is, a proposed bike shop at node 2:

SELECT * FROM pgr_drivingDistance(
  'SELECT id, source, target, cost FROM chp06.edge_table',
  2, 3
);

The preceding command gives the following output:

As usual, we just get a list from the pgr_drivingDistance table that, in this case, comprises sequence, node, edge cost, and aggregate cost. pgRouting, like PostGIS, gives us low-level functionality; we need to reconstruct the geometries we need from that low-level functionality. We can use the node ID to extract the geometries of all of our nodes by executing the following script:

WITH DD AS (
  SELECT * FROM pgr_drivingDistance(
    'SELECT id, source, target, cost FROM chp06.edge_table',
    2, 3
  )
)
SELECT ST_AsText(the_geom)
FROM chp06.edge_table_vertices_pgr w, DD d
WHERE w.id = d.node;

The preceding command gives the following output:

But the output seen is just a cluster of points. Normally, when we think of driving distance, we visualize a polygon. Fortunately, we have the pgr_alphaShape function that provides that functionality. This function expects id, x, and y values for input, so we will first change our previous query to convert to x and y from the geometries in edge_table_vertices_pgr:

WITH DD AS (
  SELECT * FROM pgr_drivingDistance(
    'SELECT id, source, target, cost FROM chp06.edge_table',
    2, 3
  )
)
SELECT id::integer, ST_X(the_geom)::float AS x, ST_Y(the_geom)::float AS y
FROM chp06.edge_table_vertices_pgr w, DD d
WHERE w.id = d.node;

The output is as follows:

Now we can wrap the preceding script up in the alphashape function:

WITH alphashape AS (
  SELECT pgr_alphaShape('
    WITH DD AS (
      SELECT * FROM pgr_drivingDistance(
        ''SELECT id, source, target, cost FROM chp06.edge_table'',
        2, 3
      )
    ),
    dd_points AS (
      SELECT id::integer, ST_X(the_geom)::float AS x, ST_Y(the_geom)::float AS y
      FROM chp06.edge_table_vertices_pgr w, DD d
      WHERE w.id = d.node
    )
    SELECT * FROM dd_points
  ')
),

So first, we will get our cluster of points.
As we did earlier, we will explicitly convert the text to geometric points:

alphapoints AS (
  SELECT ST_MakePoint((pgr_alphashape).x, (pgr_alphashape).y)
  FROM alphashape
),

Now that we have points, we can create a line by connecting them:

alphaline AS (
  SELECT ST_Makeline(ST_MakePoint)
  FROM alphapoints
)
SELECT ST_MakePolygon(ST_AddPoint(ST_Makeline, ST_StartPoint(ST_Makeline)))
FROM alphaline;

Finally, we construct the line as a polygon using ST_MakePolygon. This requires adding the start point by executing ST_StartPoint in order to properly close the polygon. The complete code is as follows:

WITH alphashape AS (
  SELECT pgr_alphaShape('
    WITH DD AS (
      SELECT * FROM pgr_drivingDistance(
        ''SELECT id, source, target, cost FROM chp06.edge_table'',
        2, 3
      )
    ),
    dd_points AS (
      SELECT id::integer, ST_X(the_geom)::float AS x, ST_Y(the_geom)::float AS y
      FROM chp06.edge_table_vertices_pgr w, DD d
      WHERE w.id = d.node
    )
    SELECT * FROM dd_points
  ')
),
alphapoints AS (
  SELECT ST_MakePoint((pgr_alphashape).x, (pgr_alphashape).y)
  FROM alphashape
),
alphaline AS (
  SELECT ST_Makeline(ST_MakePoint)
  FROM alphapoints
)
SELECT ST_MakePolygon(
  ST_AddPoint(ST_Makeline, ST_StartPoint(ST_Makeline))
)
FROM alphaline;

Our first driving distance calculation can be better understood in the context of the following diagram, where we can reach nodes 9, 11, and 13 from node 2 with a driving distance of 3:

With this, you can calculate the most optimistic distance route across different nodes in your transportation network. Want to explore more with PostGIS? Check out PostGIS Cookbook - Second Edition to get access to a complete range of PostGIS techniques and related extensions for better analytics on your spatial information.

Top 7 libraries for geospatial analysis
Using R to implement Kriging - A Spatial Interpolation technique for Geostatistics data
Learning R for Geospatial Analysis
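Returning to the buffering techniques mentioned at the start of this tutorial, a quick way to see how different the two estimates are is to build a plain buffer of radius 3 around the same starting node and compare it with the alpha-shape polygon above. This is an illustrative aside, not part of the book excerpt; it only assumes the same chp06.edge_table_vertices_pgr table used earlier:

-- Illustrative comparison: a network-blind buffer around node 2
SELECT ST_AsText(ST_Buffer(the_geom, 3))
FROM chp06.edge_table_vertices_pgr
WHERE id = 2;

Because ST_Buffer ignores the network entirely, the circle it returns will generally cover a larger and differently shaped area than the driving-distance polygon, which is exactly the contrast described earlier.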

Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed

Savia Lobo
28 May 2019
4 min read
On May 17, a team of WebAssembly enthusiasts introduced InNative, an AOT (Ahead-Of-Time) compiler for WebAssembly using LLVM with a customizable level of sandboxing for Windows/Linux. It helps run WebAssembly Outside the Sandbox at 95% native speed. The team also announced an initial release of the inNative Runtime v0.1.0 for Windows and Linux, today. https://twitter.com/inNative_sdk/status/1133098611514830850 With the help of InNative, users can grab a precompiled SDK from GitHub, or build from source. If users turn off all the isolation, the LLVM optimizer can almost reach native speeds and nearly recreate the same optimized assembly that a fully optimized C++ compiler would give, while leveraging all the features of the host CPU. Given below are some benchmarks, adapted from these C++ benchmarks: Source: InNative This average benchmark has speed in microseconds and is compiled using GCC -O3 --march=native on WSL. “We usually see 75% native speed with sandboxing and 95% without. The C++ benchmark is actually run twice - we use the second run, after the cache has had time to warm up. Turning on fastmath for both inNative and GCC makes both go faster, but the relative speed stays the same”, the official website reads. “The only reason we haven’t already gotten to 99% native speed is because WebAssembly’s 32-bit integer indexes break LLVM’s vectorization due to pointer aliasing”, the WebAssembly researcher mentions. Once fixed-width SIMD instructions are added, native WebAssembly will close the gap entirely, as this vectorization analysis will have happened before the WebAssembly compilation step. Some features of InNative InNative has the same advantage as that of JIT compilers have, which is that it can always take full advantage of the native processor architecture. It can perform expensive brute force optimizations like a traditional AOT compiler, by caching its compilation result. By compiling on the target machine once, one can get the best of both, Just-In-Time and Ahead-Of-Time. It also allows webassembly modules to interface directly with the operating system. inNative uses its own unofficial extension to allow it to pass WebAssembly pointers into C functions as this kind of C interop is definitely not supported by the standard yet. However, there is a proposal for the same. inNative also lets the users write C libraries that expose themselves as WebAssembly modules, which would make it possible to build an interop library in C++. Once WebIDL bindings are standardized, it will be a lot easier to compile WebAssembly that binds to C APIs. This opens up a world of tightly integrated WebAssembly plugins for any language that supports calling standard C interfaces, integrated directly into the program. inNative lays the groundwork needed for us and it doesn’t need to be platform-independent, only architecture-independent. “We could break the stranglehold of i386 on the software industry and free developers to experiment with novel CPU architectures without having to worry about whether our favorite language compiles to it. A WebAssembly application built against POSIX could run on any CPU architecture that implements a POSIX compatible kernel!”, the official blog announced. A user on Hacker News commented, “The differentiator for InNative seems to be the ability to bypass the sandbox altogether as well as additional native interop with the OS. Looks promising!” Another user on Reddit, “This is really exciting! 
I've been wondering why we ship x86 and ARM assembly for years now, when we could more efficiently ship an LLVM-esque assembly that compiles on first run for the native arch. This could be the solution!” To know more about InNative in detail, head over to its official blog post. React Native VS Xamarin: Which is the better cross-platform mobile development framework? Tor Browser 8.5, the first stable version for Android, is now available on Google Play Store! Introducing SwiftWasm, a tool for compiling Swift to WebAssembly

Cluster Basics and Installation On CentOS 7

Packt
01 Feb 2016
8 min read
In this article by Gabriel A. Canepa, author of the book CentOS High Performance, we will review the basic principles of clustering and show you, step by step, how to set up two CentOS 7 servers as nodes to later use them as members of a cluster. (For more resources related to this topic, see here.) As part of this process, we will install CentOS 7 from scratch in a brand new server as our first cluster member, along with the necessary packages, and finally, configure key-based authentication for SSH access from one node to the other. Clustering fundamentals In computing, a cluster consists of a group of computers (which are referred to as nodes or members) that work together so that the set is seen as a single system from the outside. One typical cluster setup involves assigning a different task to each node, thus achieving a higher performance than if several tasks were performed by a single member on its own. Another classic use of clustering is helping to ensure high availability by providing failover capabilities to the set, where one node may automatically replace a failed member to minimize the downtime of one or several critical services. In either case, the concept of clustering implies not only taking advantage of the computing functionality of each member alone, but also maximizing it by complementing it with the others. As we just mentioned, HA (High-availability) clusters aim to eliminate system downtime by failing services from one node to another in case one of them experiences an issue that renders it inoperative. As opposed to switchover, which requires human intervention, a failover procedure is performed automatically by the cluster without any downtime. In other words, this operation is transparent to end users and clients from outside the cluster. On the other hand, HP (High-performance) clusters use their nodes to perform operations in parallel in order to enhance the performance of one or more applications. High-performance clusters are typically seen in scenarios involving applications that use large collections of data. Why CentOS? Just as the saying goes, Every journey begins with a small step, we will begin our own journey toward clustering by setting up the separate nodes that will make up our system. Our choice of operating system is Linux and CentOS, version 7, as the distribution, that being the latest available release of CentOS as of today. The binary compatibility with Red Hat Enterprise Linux © (which is one of the most well-used distributions in enterprise and scientific environments) along with its well-proven stability are the reasons behind this decision. CentOS 7 along with its previous versions of the distribution are available for download, free of charge, from the project's website at http://www.centos.org/. In addition, specific details about the release can always be consulted in the CentOS wiki, http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7. Among the distinguishing features of CentOS 7, I would like to name the following: It includes systemd as the central system management and configuration utility It uses XFS as the default filesystem It only supports the x86_64 architecture Downloading CentOS To download CentOS, go to http://www.centos.org/download/ and click on one of the three options outlined in the following figure: Download options for CentOS 7 These options are detailed as follows: DVD ISO (~4 GB) is an .iso file that can be burned into regular DVD optical media and includes the common tools. 
Download this file if you have immediate access to a reliable Internet connection that you can use to download other packages and utilities.

Everything ISO (~7 GB) is an .iso file with the complete set of packages that are made available in the base repository of CentOS 7. Download this file if you do not have access to a reliable Internet connection or if your plan contemplates the possibility of installing or populating a local or network mirror.

The alternative downloads link will take you to a public directory within an official nearby CentOS mirror, where the previous options are available as well as others, including different choices of desktop versions (GNOME or KDE) and the minimal .iso file (~570 MB), which contains the bare-bones packages of the distribution.

As the minimal install is sufficient for our purpose at hand (we can install other needed packages using yum later), that is the recommended .iso file to download:

CentOS-7.X-YYMM-x86_64-Minimal.iso

Here, X indicates the current update number of CentOS 7, and YYMM represents the year and month, both in two-digit notation, when the source code this version is based on was released. For example, the name:

CentOS-7.0-1406-x86_64-Minimal.iso

tells us the source code this release is based on dates from June 2014.

Independently of our preferred download method, we will need this .iso file in order to begin with the installation. In addition, feel free to burn it to optical media or a USB drive.

Setting up CentOS 7 nodes

If you do not have dedicated hardware that you can use to set up the nodes of your cluster, you can still create one using virtual machines over some virtualization software, such as Oracle VirtualBox or VMware, for example. The following setup is going to be performed on a VirtualBox VM with 1 GB of RAM and 30 GB of disk space. We will use the default partitioning schema over LVM as suggested by the installation process.

Installing CentOS 7

The splash screen shown in the following screenshot is the first step in the installation process. Highlight Install CentOS 7 using the up and down arrows and press Enter:

Splash screen before starting the installation of CentOS 7

Select English (or your preferred installation language) and click on Continue, as shown in the following screenshot:

Selecting the language for the installation of CentOS 7

In the following screenshot, you can choose a keyboard layout, set the current date and time, choose a partitioning method, connect the main network interface, and assign a unique hostname for the node. We will name the current node node01 and leave the rest of the settings at their defaults (we will configure the extra network card later). Then, click on Begin installation:

Configure keyboard layout, date and time, network and hostname, and partitioning schema

While the installation continues in the background, we will be prompted to set the password for the root account and create an administrative user for the node. Once these steps have been confirmed, the corresponding warnings no longer appear, as shown in the following screenshot:

Setting the password for root and creating an administrative user account

When the process is completed, click on Finish configuration and the installation will finish configuring the system and devices. When the system is ready to boot on its own, you will be prompted to do so. Remove the installation media and click on Reboot. Now, we can proceed with setting up our network interfaces.
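Before doing so, a quick sanity check never hurts. The commands below are an illustrative aside (not from the book); they simply confirm that the freshly installed node reports the expected release and hostname before we start editing network scripts:

cat /etc/centos-release   # should report a CentOS Linux 7 release
hostnamectl status        # confirms the hostname we assigned (node01)
ip link show              # lists the network interfaces we are about to configure

If anything looks off here, it is easier to fix it now than after the cluster services are layered on top.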
Setting up the network infrastructure

Our rather basic network infrastructure consists of 2 CentOS 7 boxes, with the node01 [192.168.0.2] and node02 [192.168.0.3] host names, respectively, and a gateway router called simply gateway [192.168.0.1]. In CentOS, network cards are configured using scripts in the /etc/sysconfig/network-scripts directory. This is the minimum content that is needed in /etc/sysconfig/network-scripts/ifcfg-enp0s3 for our purposes:

HWADDR="08:00:27:C8:C2:BE"
TYPE="Ethernet"
BOOTPROTO="static"
NAME="enp0s3"
ONBOOT="yes"
IPADDR="192.168.0.2"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
PEERDNS="yes"
DNS1="8.8.8.8"
DNS2="8.8.4.4"

Note that the UUID and HWADDR values will be different in your case. In addition, be aware that cluster machines need to be assigned a static IP address; never leave that up to DHCP! In the preceding configuration file, we used Google's DNS, but if you wish, feel free to use another DNS.

When you're done making changes, save the file and restart the network service in order to apply them:

systemctl restart network.service # Restart the network service

You can verify that the previous changes have taken effect (as shown in the Restarting the network service and verifying settings figure) with the following two commands:

systemctl status network.service # Display the status of the network service
ip addr | grep 'inet' # Display the IP addresses

Restarting the network service and verifying settings

You can disregard all error messages related to the loopback interface, as shown in the preceding screenshot. However, you will need to examine carefully any error messages related to the enp0s3 interface, if any, and get them resolved in order to proceed further. The second interface will be called enp0sX, where X is typically 8. You can verify this with the following command (shown in the following figure):

ip link show

Displaying NIC information

As for the configuration file of enp0s8, you can safely create it by copying the contents of ifcfg-enp0s3. Do not forget, however, to change the hardware (MAC) address as returned by the information on the NIC, and leave the IP address field blank for now:

ip link show enp0s8
cp /etc/sysconfig/network-scripts/ifcfg-enp0s3 /etc/sysconfig/network-scripts/ifcfg-enp0s8

Then, restart the network service. Note that you will also need to set up at least a basic DNS resolution method. Considering that we will set up a cluster with 2 nodes only, we will use /etc/hosts for this purpose. Edit /etc/hosts with the following content:

192.168.0.2 node01
192.168.0.3 node02
192.168.0.1 gateway

Summary

In this article, we reviewed how to install the operating system and listed the necessary software components to implement the basic cluster functionality.

Resources for Article:
Further resources on this subject:
CentOS 7's new firewalld service [article]
Mastering CentOS 7 Linux Server [article]
Resource Manager on CentOS 6 [article]
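One step promised at the beginning of this article, key-based SSH authentication between the nodes, is not shown in the excerpt above. As a hedged sketch of how it is typically done on CentOS 7 (run as your administrative user on node01; the empty passphrase is only appropriate for a lab setup like the one described here):

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa   # generate a key pair
ssh-copy-id node02                                  # copy the public key to node02
ssh node02 hostname                                 # should now log in without a password

Running the same two commands from node02 toward node01 completes passwordless access in both directions.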

About Java Virtual Machine – JVM Languages

Packt
15 Mar 2017
13 min read
In this article by Vincent van der Leun, the author of the book, Introduction to JVM Languages, you will learn the history of the JVM and five important languages that run on the JVM. (For more resources related to this topic, see here.) While many other programming languages have come in and gone out of the spotlight, Java always managed to return to impressive spots, either near to, and lately even on, the top of the list of the most used languages in the world. It didn't take language designers long to realize that they as well could run their languages on the JVM—the virtual machine that powers Java applications—and take advantage of its performance, features, and extensive class library. In this article, we will take a look at common JVM use cases and various JVM languages. The JVM was designed from the ground up to run anywhere. Its initial goal was to run on set-top boxes, but when Sun Microsystems found out the market was not ready in the mid '90s, they decided to bring the platform to desktop computers as well. To make all those use cases possible, Sun invented their own binary executable format and called it Java bytecode. To run programs compiled to Java bytecode, a Java Virtual Machine implementation must be installed on the system. The most popular JVM implementations nowadays are Oracle's free but partially proprietary implementation and the fully open source OpenJDK project (Oracle's Java runtime is largely based on OpenJDK). This article covers the following subjects: Popular JVM use cases Java language Scala language Clojure language Kotlin language Groovy The Java platform as published by Google on Android phones and tablets is not covered in this article. One of the reasons is that the Java version used on Android is still based on the Java 6 SE platform from 2006. However, some of the languages covered in this article can be used with Android. Kotlin, in particular, is a very popular choice for modern Android development. Popular use cases Since the JVM platform was designed with a lot of different use cases in mind, it will be no surprise that the JVM can be a very viable choice for very different scenarios. We will briefly look at the following use cases: Web applications Big data Internet of Things (IoT) Web applications With its focus on performance, the JVM is a very popular choice for web applications. When built correctly, applications can scale really well if needed across many different servers. The JVM is a well-understood platform, meaning that it is predictable and many tools are available to debug and profile problematic applications. Because of its open nature, the monitoring of JVM internals is also very well possible. For web applications that have to serve thousands of users concurrently, this is an important advantage. The JVM already plays a huge role in the cloud. Popular examples of companies that use the JVM for core parts of their cloud-based services include Twitter (famously using Scala), Amazon, Spotify, and Netflix. But the actual list is much larger. Big data Big data is a hot topic. When data is regarded too big for traditional databases to be analyzed, one can set up multiple clusters of servers that will process the data. Analyzing the data in this context can, for example, be searching for something specific, looking for patterns, and calculating statistics. 
This data could have been obtained from data collected from web servers (that, for example, logged visitor's clicks), output obtained from external sensors at a manufacturer plant, legacy servers that have been producing log files over many years, and so forth. Data sizes can vary wildly as well, but often, will take up multiple terabytes in total. Two popular technologies in the big data arena are: Apache Hadoop (provides storage of data and takes care of data distribution to other servers) Apache Spark (uses Hadoop to stream data and makes it possible to analyze the incoming data) Both Hadoop and Spark are for the most part written in Java. While both offer interfaces for a lot of programming languages and platforms, it will not be a surprise that the JVM is among them. The functional programming paradigm focuses on creating code that can run safely on multiple CPU cores, so languages that are fully specialized in this style, such as Scala or Clojure, are very appropriate candidates to be used with either Spark or Hadoop. Internet of Things - IoT Portable devices that feature internet connectivity are very common these days. Since Java was created with the idea of running on embedded devices from the beginning, the JVM is, yet again, at an advantage here. For memory constrained systems, Oracle offers Java Micro Edition Embedded platform. It is meant for commercial IoT devices that do not require a standard graphical or console-based user interface. For devices that can spare more memory, the Java SE Embedded edition is available. The Java SE Embedded version is very close to the Java Standard Edition discussed in this article. When running a full Linux environment, it can be used to provide desktop GUIs for full user interaction. Java SE Embedded is installed by default on Raspbian, the standard Linux distribution of the popular Raspberry Pi low-cost, credit card-sized computers. Both Java ME Embedded and Java SE Embedded can access the General Purpose input/output (GPIO) pins on the Raspberry Pi, which means that sensors and other peripherals connected to these ports can be accessed by Java code. Java Java is the language that started it all. Source code written in Java is generally easy to read and comprehend. It started out as a relatively simple language to learn. As more and more features were added to the language over the years, its complexity increased somewhat. The good news is that beginners don't have to worry about the more advanced topics too much, until they are ready to learn them. Programmers that want to choose a different JVM language from Java can still benefit from learning the Java syntax, especially once they start using libraries or frameworks that provide Javadocs as API documentation. Javadocs is a tool that generates HTML documentation based on special comments in the source code. Many libraries and frameworks provide the HTML documents generated by Javadocs as part of their documentation. While Java is not considered a pure Object Orientated Programming (OOP) language because of its support for primitive types, it is still a serious OOP language. Java is known for its verbosity, it has strict requirements for its syntax. 
A typical Java class looks like this:

package com.example;

import java.util.Date;

public class JavaDemo {
    private Date dueDate = new Date();

    public void setDueDate(Date dueDate) {
        this.dueDate = dueDate;
    }

    public Date getDueDate() {
        return this.dueDate;
    }
}

A real-world example would implement some other important additional methods that were omitted here for readability. Note that when declaring the dueDate variable, the Date class name has to be specified twice: first when declaring the variable's type, and a second time when instantiating an object of the class.

Scala

Scala is a rather unique language. It has strong support for functional programming, while also being a pure object-oriented programming language at the same time. While a lot more can be said about functional programming, in a nutshell, functional programming is about writing code in such a way that existing variables are not modified while the program is running. Values are specified as function parameters, and output is generated based on those parameters. Functions are required to return the same output whenever they are called with the same parameters. A class is not supposed to hold internal state that can change over time. When data changes, a new copy of the object must be returned, and all existing copies of the data must be left alone. When following the rules of functional programming, which requires a specific mindset from programmers, the code is safe to execute on multiple threads on different CPU cores simultaneously.

The Scala installation offers two ways of running Scala code. It includes an interactive shell, where code can be entered directly and run right away; this program can also be used to run Scala source code directly, without compiling it manually first. Also offered is scalac, a traditional compiler that compiles Scala source code to Java bytecode, producing files with the .class extension.

Scala comes with its own Scala Standard Library. It complements the Java Class Library that is bundled with the Java Runtime Environment (JRE) and installed as part of the Java Development Kit (JDK). It contains classes that are optimized to work with Scala's language features. Among many other things, it implements its own collection classes, while still offering compatibility with Java's collections.

Scala's equivalent of the code shown in the Java section would be something like the following:

package com.example

import java.util.Date

class ScalaDemo(var dueDate: Date) {
}

Scala will generate the getter and setter methods automatically. Note that this class does not follow the rules of functional programming, as the dueDate variable is mutable (it can be changed at any time). It would be better to define the class like this:

class ScalaDemo(val dueDate: Date) {
}

By defining dueDate with the val keyword instead of the var keyword, the variable has become immutable. Now Scala only generates a getter method, and dueDate can only be set when creating an instance of the ScalaDemo class. It will never change during the lifetime of the object.

Clojure

Clojure is a language that is rather different from the other languages covered in this article. It is largely inspired by the Lisp programming language, which originally dates from the late 1950s. Lisp stayed relevant by keeping up with technology and the times; Common Lisp and Scheme are arguably the two most popular Lisp dialects in use today, and Clojure is influenced by both. Unlike Java and Scala, Clojure is a dynamic language.
Variables do not have fixed types, and no type checking is performed by the compiler. When a variable is passed to a function that it is not compatible with, an exception will be thrown at run time. Also noteworthy is that Clojure, unlike all the other languages in this article, is not an object-oriented language. It still offers interoperability with Java and the JVM: it can create instances of Java objects, and it can also generate class files so that other languages on the JVM can run bytecode compiled by Clojure. Instead of demonstrating how to generate a class in Clojure, let's write a function in Clojure that consumes a JavaDemo instance and prints its dueDate:

(defn consume-javademo-instance [d]
  (println (.getDueDate d)))

This looks rather different from the other source code in this article. Code in Clojure is written by adding code to a list. Each opening parenthesis and its corresponding closing parenthesis in the preceding code starts and ends a new list. The first entry in the list is the function that will be called, while the other entries of that list are its parameters. By nesting the lists, complex evaluations can be written. The defn macro defines a new function that will be called consume-javademo-instance. It takes one parameter, called d, which should be the JavaDemo instance. The list that follows is the body of the function, which prints the value returned by the getDueDate method of the JavaDemo instance passed in the variable d.

Kotlin

Like Java and Scala, Kotlin is a statically typed language. Kotlin is mainly focused on object-oriented programming, but supports procedural programming as well, so the usage of classes and objects is not required. Kotlin's syntax is not compatible with Java's; the code in Kotlin is much less verbose. It still offers very strong interoperability with Java and the JVM platform. The Kotlin equivalent of the Java code would be as follows:

import java.util.Date

data class KotlinDemo(var dueDate: Date)

One of the more noticeable features of Kotlin is its type system, especially its handling of null references. In many programming languages, a reference type variable can hold a null reference, which means that the reference literally points to nothing. When accessing members of such a null reference on the JVM, the dreaded NullPointerException is thrown. When declaring variables in the normal way, Kotlin does not allow references to be assigned null. If you want a variable that can be null, you'll have to add a question mark (?) to its definition:

var thisDateCanBeNull: Date? = Date()

When you now access the variable, you'll have to let the compiler know that you are aware that it can be null:

if (thisDateCanBeNull != null)
    println("${thisDateCanBeNull.toString()}")

Without the if check, the code would refuse to compile.

Groovy

Groovy was an early alternative language for the JVM. It offers, to a large degree, Java syntax compatibility, but code in Groovy can be much more compact because many source code elements that are required in Java are optional in Groovy. Like Clojure and mainstream languages such as Python, Groovy is a dynamic language, although with a twist: while types do not have to be specified when defining variables, Groovy still offers optional static compilation of classes.
Since statically compiled code usually performs better than dynamic code, this can be used when performance is important for a particular class, although you give up some convenience when switching to static compilation. Another difference from Java is that Groovy supports operator overloading. Because Groovy is a dynamic language, it offers some tricks that would be very hard to implement in Java. It also comes with a huge library of support classes, including many wrapper classes that make working with the Java Class Library a much more enjoyable experience.

A JavaDemo equivalent in Groovy would look as follows:

import groovy.transform.Canonical

@Canonical
class GroovyDemo {
    Date dueDate
}

The @Canonical annotation is not strictly necessary, but it is recommended because it automatically generates some support methods that are used often and required in many use cases. Even without it, Groovy will automatically generate the getter and setter methods that we had to define manually in Java.

Summary

We started by looking at the history of the Java Virtual Machine and studied some important use cases of the Java Virtual Machine: web applications, big data, and the Internet of Things (IoT). We then looked at five important languages that run on the JVM: Java (a very readable, but also very verbose, statically typed language), Scala (both a strong functional and OOP programming language), Clojure (a non-OOP functional programming language inspired by Lisp and Haskell), Kotlin (a statically typed language that protects the programmer from the very common NullPointerException errors), and Groovy (a dynamic language with static compiler support that offers a ton of features).

Resources for Article:

Further resources on this subject:

Using Spring JMX within Java Applications [article]
Tuning Solr JVM and Container [article]
So, what is Play? [article]
Python 3: When to Use Object-oriented Programming

Packt
12 Aug 2010
11 min read
(For more resources on Python 3, see here.)

Treat objects as objects

This may seem obvious, but you should generally give separate objects in your problem domain a special class in your code. The process is generally to identify objects in the problem and then model their data and behaviors. Identifying objects is a very important task in object-oriented analysis and programming. But it isn't always as easy as counting the nouns in a short paragraph, as we've been doing. Remember, objects are things that have both data and behavior. If we are working with only data, we are often better off storing it in a list, set, dictionary, or some other Python data structure. On the other hand, if we are working with only behavior, with no stored data, a simple function is more suitable. An object, however, has both data and behavior.

Most Python programmers use built-in data structures unless (or until) there is an obvious need to define a class. This is a good thing; there is no reason to add an extra level of abstraction if it doesn't help organize our code. Sometimes, though, the "obvious" need is not so obvious.

A Python programmer often starts by storing data in a few variables. As our program expands, we will later find that we are passing the same set of related variables to different functions. This is the time to think about grouping both variables and functions into a class. If we are designing a program to model polygons in two-dimensional space, we might start with each polygon being represented as a list of points. The points would be modeled as two-tuples (x, y) describing where that point is located. This is all data, stored in two nested data structures (specifically, a list of tuples):

square = [(1,1), (1,2), (2,2), (2,1)]

Now, if we want to calculate the distance around the perimeter of the polygon, we simply need to sum the distances between consecutive points, but to do that, we need a function to calculate the distance between two points. Here are two such functions:

import math

def distance(p1, p2):
    return math.sqrt((p1[0]-p2[0])**2 + (p1[1]-p2[1])**2)

def perimeter(polygon):
    perimeter = 0
    points = polygon + [polygon[0]]
    for i in range(len(polygon)):
        perimeter += distance(points[i], points[i+1])
    return perimeter

Now, as object-oriented programmers, we clearly recognize that a polygon class could encapsulate the list of points (data) and the perimeter function (behavior). Further, a point class might encapsulate the x and y coordinates and the distance method. But should we do this? For the previous code, maybe, maybe not. We've been studying object-oriented principles long enough that we can now write the object-oriented version in record time:

import math

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def distance(self, p2):
        return math.sqrt((self.x-p2.x)**2 + (self.y-p2.y)**2)

class Polygon:
    def __init__(self):
        self.vertices = []

    def add_point(self, point):
        self.vertices.append(point)

    def perimeter(self):
        perimeter = 0
        points = self.vertices + [self.vertices[0]]
        for i in range(len(self.vertices)):
            perimeter += points[i].distance(points[i+1])
        return perimeter

Now, to understand the difference a little better, let's compare the two APIs in use.
Here's how to calculate the perimeter of a square using the object-oriented code:

>>> square = Polygon()
>>> square.add_point(Point(1,1))
>>> square.add_point(Point(1,2))
>>> square.add_point(Point(2,2))
>>> square.add_point(Point(2,1))
>>> square.perimeter()
4.0

That's fairly succinct and easy to read, you might think, but let's compare it to the function-based code:

>>> square = [(1,1), (1,2), (2,2), (2,1)]
>>> perimeter(square)
4.0

Hmm, maybe the object-oriented API isn't so compact! On the other hand, I'd argue that it is easier to read than the function example: how do we know what the list of tuples is supposed to represent in the second version? How do we remember what kind of object (a list of two-tuples? That's not intuitive!) we're supposed to pass into the perimeter function? We would need a lot of external documentation to explain how these functions should be used.

In contrast, the object-oriented code is relatively self-documenting; we just have to look at the list of methods and their parameters to know what the object does and how to use it. By the time we wrote all the documentation for the functional version, it would probably be longer than the object-oriented code. Besides, code length is a horrible indicator of code complexity. Some programmers (thankfully, not many of them are Python coders) get hung up on complicated "one-liners" that do incredible amounts of work in a single line of code. One line of code that even the original author isn't able to read the next day, that is. Always focus on making your code easier to read and easier to use, not shorter.

As a quick exercise, can you think of any ways to make the object-oriented Polygon as easy to use as the functional implementation? Pause a moment and think about it. Really, all we have to do is alter our Polygon API so that it can be constructed with multiple points. Let's give it an initializer that accepts a list of Point objects. In fact, let's allow it to accept tuples too, and we can construct the Point objects ourselves, if needed:

def __init__(self, points=[]):
    self.vertices = []
    for point in points:
        if isinstance(point, tuple):
            point = Point(*point)
        self.vertices.append(point)

This example simply goes through the list and ensures that any tuples are converted to points. If the object is not a tuple, we leave it as is, assuming that it is either a Point already, or an unknown duck-typed object that can act like a Point.
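To see the friendlier constructor in action, here is a short, hypothetical interactive session; it assumes the Point and Polygon classes defined earlier, with Polygon's __init__ replaced by the version above:

>>> # build the same square in a single call, passing plain tuples
>>> square = Polygon([(1,1), (1,2), (2,2), (2,1)])
>>> square.perimeter()
4.0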
As we can see, it's not always easy to identify when an object should really be represented as a self-defined class. If we have new functions that accept a polygon argument, such as area(polygon) or point_in_polygon(polygon, x, y), the benefits of the object-oriented code become increasingly obvious. Likewise, if we add other attributes to the polygon, such as color or texture, it makes more and more sense to encapsulate that data into a class.

The distinction is a design decision, but in general, the more complicated a set of data is, the more likely it is to have functions specific to that data, and the more useful it is to use a class with attributes and methods instead. When making this decision, it also pays to consider how the class will be used. If we're only trying to calculate the perimeter of one polygon in the context of a much greater problem, using a function will probably be quickest to code and easiest to use "one time only". On the other hand, if our program needs to manipulate numerous polygons in a wide variety of ways (calculate the perimeter, area, intersection with other polygons, and more), we have most certainly identified an object; one that needs to be extremely versatile.

Pay additional attention to the interaction between objects. Look for inheritance relationships; inheritance is impossible to model elegantly without classes, so make sure to use them. Composition can, technically, be modeled using only data structures; for example, we can have a list of dictionaries holding tuple values, but it is often less complicated to create an object, especially if there is behavior associated with the data. Don't rush to use an object just because you can use an object, but never neglect to create a class when you need to use a class.

Using properties to add behavior to class data

Python is very good at blurring distinctions; it doesn't exactly help us to "think outside the box". Rather, it teaches us that the box is in our own head; "there is no box". Before we get into the details, let's discuss some bad object-oriented theory. Many object-oriented languages (Java is the most guilty) teach us to never access attributes directly. They teach us to write attribute access like this:

class Color:
    def __init__(self, rgb_value, name):
        self._rgb_value = rgb_value
        self._name = name

    def set_name(self, name):
        self._name = name

    def get_name(self):
        return self._name

The variables are prefixed with an underscore to suggest that they are private (in other languages it would actually force them to be private). Then the get and set methods provide access to each variable. This class would be used in practice as follows:

>>> c = Color("#ff0000", "bright red")
>>> c.get_name()
'bright red'
>>> c.set_name("red")
>>> c.get_name()
'red'

This is not nearly as readable as the direct access version that Python favors:

class Color:
    def __init__(self, rgb_value, name):
        self.rgb_value = rgb_value
        self.name = name

c = Color("#ff0000", "bright red")
print(c.name)
c.name = "red"

So why would anyone recommend the method-based syntax? Their reasoning is that someday we may want to add extra code when a value is set or retrieved. For example, we could decide to cache a value and return the cached value, or we might want to validate that the value is a suitable input. In code, we could decide to change the set_name() method as follows:

def set_name(self, name):
    if not name:
        raise Exception("Invalid Name")
    self._name = name

Now, in Java and similar languages, if we had written our original code to do direct attribute access and then later changed it to a method like the one above, we'd have a problem: anyone who had written code that accessed the attribute directly would now have to access the method; if they don't change the access style, their code will be broken. The mantra in these languages is that we should never make public members private. This doesn't make much sense in Python, since there isn't any concept of private members!

Indeed, the situation in Python is much better. We can use the Python property keyword to make methods look like a class attribute. If we originally wrote our code to use direct member access, we can later add methods to get and set the name without changing the interface.
Let's see how it looks:

class Color:
    def __init__(self, rgb_value, name):
        self.rgb_value = rgb_value
        self._name = name

    def _set_name(self, name):
        if not name:
            raise Exception("Invalid Name")
        self._name = name

    def _get_name(self):
        return self._name

    name = property(_get_name, _set_name)

If we had started with the earlier non-method-based class, which set the name attribute directly, we could later change the code to look like the above. We first change the name attribute into a (semi-)private _name attribute. Then we add two more (semi-)private methods to get and set that variable, doing our validation when we set it.

Finally, we have the property declaration at the bottom. This is the magic. It creates a new attribute on the Color class called name, which now replaces the previous name attribute. It sets this attribute to be a property, which calls the two methods we just created whenever the property is accessed or changed. This new version of the Color class can be used exactly the same way as the previous version, yet it now does validation when we set the name:

>>> c = Color("#0000ff", "bright red")
>>> print(c.name)
bright red
>>> c.name = "red"
>>> print(c.name)
red
>>> c.name = ""
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "setting_name_property.py", line 8, in _set_name
    raise Exception("Invalid Name")
Exception: Invalid Name

So if we'd previously written code to access the name attribute, and then changed it to use our property object, the previous code would still work, unless it was sending an empty property value, which is the behavior we wanted to forbid in the first place. Success!

Bear in mind that even with the name property, the previous code is not 100% safe. People can still access the _name attribute directly and set it to an empty string if they want to. But if they access a variable we've explicitly marked with an underscore to suggest it is private, they're the ones who have to deal with the consequences, not us.
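The same pattern is often written with the @property decorator, which is simply another way of spelling the property() call shown above. As a minimal sketch (not taken from the original text), the validating Color class could also be written like this:

class Color:
    def __init__(self, rgb_value, name):
        self.rgb_value = rgb_value
        self._name = name

    @property
    def name(self):
        # getter: just return the stored value
        return self._name

    @name.setter
    def name(self, value):
        # setter: validate before storing, as _set_name did above
        if not value:
            raise Exception("Invalid Name")
        self._name = value

Either spelling behaves identically from the caller's point of view; the decorator form is simply the more common idiom in modern Python code.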
Introduction to BlueStacks

Packt
05 Sep 2013
4 min read
(For more resources related to this topic, see here.)

So, what is BlueStacks?

BlueStacks is a suite of tools designed to let you run Android apps easily on a Windows or Mac computer. The following screenshot shows how it looks:

At the time of writing, there are two elements to the BlueStacks suite, which are listed as follows:

App Player: This is the engine, which runs the Android apps
Cloud Connect: This is a synchronization tool

As the BlueStacks tools can be freely downloaded, anyone with a computer running Windows or Mac OS X can download them and start experimenting with their capabilities. This article will walk you through the process of running BlueStacks on a computer and show you some of the ways in which you can make the most of this emerging technology.

There are other ways to run an emulation of Android on your computer. You can, for instance, run a virtual machine or install the Android Software Development Kit (SDK). These assume a degree of technical understanding that isn't necessarily required with BlueStacks, making BlueStacks the quickest and easiest way of running apps on your computer. BlueStacks is particularly interesting for users of Windows 8 tablets, as it opens up a whole library of mature software designed for a touch interface. This is particularly useful for those wanting to use the many free or cheap Android apps on their laptop or tablet.

It is worth noting that, at the time of writing this article, these tools are beta releases, so it is important that you take the time to report any bugs you find to the developers through their website. The ongoing development and success of the software depends upon this feedback and results in a better product. If you become reliant on a particular feature, it is a good idea to let the developers know that too; this can help influence which features are kept and improved upon as the product matures.

App Player

BlueStacks App Player allows a Windows or Mac user to run Android apps on their desktop or laptop. It does this by running an emulated version of Android within a window that you can interact with using your keyboard and mouse. The App Player can be downloaded and installed for free from the BlueStacks website, http://www.bluestacks.com. Currently, there are two main versions available for different operating systems, listed as follows:

Mac OS X
Windows XP, Vista, 7, and 8

Once you have installed the software, an Android emulator runs on your machine. This is a light version of Android that can access app stores so that you can download and run free and paid apps and content. Most apps are compatible with App Player; however, there are some that are not (for technical reasons) and some that have been prevented from running by the app developers.

Whatever else you are running on your computer, the more computing power you can make available to the App Player, the better. Otherwise, you might experience slow-loading apps or, worse still, ones that do not function properly. To increase your chances of success, first try running App Player without running any other applications (for example, Word).

Cloud Connect

Cloud Connect provides a means to synchronize the apps running on an existing phone or tablet with the App Player. This means that you do not have to manually install lots of apps. Instead, you install an app on your device and sign up so that your App Player has exactly the same apps as your device.
Summary

Thus we learned the basics of BlueStacks and took a brief look at its App Player and Cloud Connect features.

Resources for Article:

Further resources on this subject:

So, what is Spring for Android? [Article]
Animating Properties and Tweening Pages in Android 3-0 [Article]
New Connectivity APIs – Android Beam [Article]