
How-To Tutorials

Using Python Automation to interact with network devices [Tutorial]

Melisha Dsouza
21 Feb 2019
15 min read
In this tutorial, we will learn new ways to interact with network devices using Python. We will understand how to configure network devices using configuration templates, and write modular code to ensure high reusability when performing repetitive tasks. We will also see the benefits of parallel processing of tasks and the efficiency that can be gained through multithreading. This Python tutorial has been taken from the second edition of Practical Network Automation.

Interacting with network devices

Python is widely used to perform network automation. With its wide set of libraries (such as Netmiko and Paramiko), there are endless possibilities for interacting with network devices from different vendors. We will be using Netmiko, one of the most widely used libraries for network interactions. Python provides a well-documented reference for each of its modules, and the documentation for this one can be found at pypi.org.

For installation, open a command line in the folder where python.exe is installed. That location contains a subfolder called Scripts, which holds the two installers we can use: easy_install.exe and pip.exe. The library can therefore be installed in either of two ways.

The syntax of easy_install is as follows:

easy_install <name of module>

For example, to install Netmiko, the following command is run:

easy_install netmiko

The syntax of pip install is as follows:

pip install <name of module>

For example:

pip install netmiko

Here's an example of a simple script to log in to the router (an example IP is 192.168.255.249, with a username and password of cisco) and show the version:

from netmiko import ConnectHandler

device = ConnectHandler(device_type='cisco_ios', ip='192.168.255.249',
                        username='cisco', password='cisco')
output = device.send_command("show version")
print (output)
device.disconnect()

The output of executing this code against a router is the device's show version output. As we can see in the sample code, we call the ConnectHandler function from the Netmiko library, which takes four inputs (platform type, IP address of the device, username, and password). Netmiko works with a variety of vendors.
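Because ConnectHandler is driven entirely by these keyword arguments, a convenient pattern is to keep them in a dictionary and unpack it when connecting. The short sketch below reuses the example credentials from above; the device_type value is one of the platform abbreviations listed next.

from netmiko import ConnectHandler

# All connection details in one place; these are the example values used above
cisco_router = {
    'device_type': 'cisco_ios',
    'ip': '192.168.255.249',
    'username': 'cisco',
    'password': 'cisco',
}

device = ConnectHandler(**cisco_router)  # unpack the dictionary as keyword arguments
print(device.send_command("show version"))
device.disconnect()

This keeps credentials and platform details in a single structure that can later be loaded from an inventory file instead of being hardcoded.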
Some of the supported platform types and their abbreviations to be called in Netmiko are as follows: a10: A10SSH, accedian: AccedianSSH, alcatel_aos: AlcatelAosSSH, alcatel_sros: AlcatelSrosSSH, arista_eos: AristaSSH, aruba_os: ArubaSSH, avaya_ers: AvayaErsSSH, avaya_vsp: AvayaVspSSH, brocade_fastiron: BrocadeFastironSSH, brocade_netiron: BrocadeNetironSSH, brocade_nos: BrocadeNosSSH, brocade_vdx: BrocadeNosSSH, brocade_vyos: VyOSSSH, checkpoint_gaia: CheckPointGaiaSSH, ciena_saos: CienaSaosSSH, cisco_asa: CiscoAsaSSH, cisco_ios: CiscoIosBase, cisco_nxos: CiscoNxosSSH, cisco_s300: CiscoS300SSH, cisco_tp: CiscoTpTcCeSSH, cisco_wlc: CiscoWlcSSH, cisco_xe: CiscoIosBase, cisco_xr: CiscoXrSSH, dell_force10: DellForce10SSH, dell_powerconnect: DellPowerConnectSSH, eltex: EltexSSH, enterasys: EnterasysSSH, extreme: ExtremeSSH, extreme_wing: ExtremeWingSSH, f5_ltm: F5LtmSSH, fortinet: FortinetSSH, generic_termserver: TerminalServerSSH, hp_comware: HPComwareSSH, hp_procurve: HPProcurveSSH, huawei: HuaweiSSH, juniper: JuniperSSH, juniper_junos: JuniperSSH, linux: LinuxSSH, mellanox_ssh: MellanoxSSH, mrv_optiswitch: MrvOptiswitchSSH, ovs_linux: OvsLinuxSSH, paloalto_panos: PaloAltoPanosSSH, pluribus: PluribusSSH, quanta_mesh: QuantaMeshSSH, ubiquiti_edge: UbiquitiEdgeSSH, vyatta_vyos: VyOSSSH, vyos: VyOSSSH

Depending upon the platform type selected, Netmiko knows which prompt to expect and the correct way to SSH into the specific device. Once the connection is made, we can send commands to the device using the send_command method. The return value, stored in the output variable, is the string output of the command that we sent to the device. The last line, which uses the disconnect function, ensures that the connection is terminated cleanly once we are done with our task.

For configuration (for example, providing a description to the FastEthernet 0/0 router interface), we use Netmiko as shown in the following example:

from netmiko import ConnectHandler

print ("Before config push")
device = ConnectHandler(device_type='cisco_ios', ip='192.168.255.249',
                        username='cisco', password='cisco')
output = device.send_command("show running-config interface fastEthernet 0/0")
print (output)
configcmds = ["interface fastEthernet 0/0", "description my test"]
device.send_config_set(configcmds)
print ("After config push")
output = device.send_command("show running-config interface fastEthernet 0/0")
print (output)
device.disconnect()

As the output of the preceding code shows, we do not have to perform any additional steps for a config push: we simply list the commands in the same order in which we would enter them manually on the router, and pass that list as an argument to the send_config_set function. The output under Before config push is the plain running configuration of the FastEthernet 0/0 interface, whereas the output under After config push contains the description that we configured using the list of commands. In the same way, we can pass multiple commands to the router, and Netmiko will enter configuration mode, write those commands to the router, and exit config mode. If we want to save the configuration, we use the following command after the send_config_set call:

device.send_command("write memory")

This ensures that the router writes the newly pushed configuration to memory.
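When the change set grows beyond a couple of lines, it can be cleaner to keep the commands in a text file and push the whole file. The sketch below assumes Netmiko's send_config_from_file and save_config methods (available in recent Netmiko releases) and a hypothetical file called interface_desc.txt containing one configuration command per line.

from netmiko import ConnectHandler

device = ConnectHandler(device_type='cisco_ios', ip='192.168.255.249',
                        username='cisco', password='cisco')

# interface_desc.txt is a hypothetical file with one config command per line,
# for example "interface fastEthernet 0/0" followed by "description my test"
output = device.send_config_from_file(config_file='interface_desc.txt')
print(output)

# save_config() persists the running configuration (the equivalent of
# "write memory" on Cisco IOS platforms)
device.save_config()
device.disconnect()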
Additionally, for reference purposes across the book, we will be referring to the following GNS3 simulated network. In this topology, four routers are connected to an Ethernet switch, and the switch is connected to the local loopback interface of the computer, which provides SSH connectivity to all the routers. We can simulate any type of network device in GNS3 and create a topology based upon our specific requirements, which also helps in building complex simulations of any network for testing, troubleshooting, and configuration validation.

The IP address schema used is the following:

rtr1: 192.168.20.1
rtr2: 192.168.20.2
rtr3: 192.168.20.3
rtr4: 192.168.20.4
Loopback IP of computer: 192.168.20.5

The credentials used for accessing these devices are the following:

Username: test
Password: test

Let us start with the first step: pinging all the routers to confirm their reachability from the computer. The code is as follows:

import os

for n in range(1, 5):
    server_ip = "192.168.20.{0}".format(n)
    rep = os.system('ping ' + server_ip)
    if rep == 0:
        print ("server is up", server_ip)
    else:
        print ("server is down", server_ip)

As we can see in the preceding code, we use range to iterate over the IPs 192.168.20.1-192.168.20.4. The server_ip variable in the loop is passed to the ping command, which is executed by os.system, and the return code is stored in the rep variable: a value of 0 means the router can be reached, and a nonzero value means it is not reachable.

As a next step, to validate whether the routers can successfully respond over SSH, let us fetch the uptime with the show version | in uptime command. The code is as follows:

from netmiko import ConnectHandler

username = 'test'
password = "test"

for n in range(1, 5):
    ip = "192.168.20.{0}".format(n)
    device = ConnectHandler(device_type='cisco_ios', ip=ip,
                            username=username, password=password)
    output = device.send_command("show version | in uptime")
    print (output)
    device.disconnect()

Using Netmiko, we fetched the output of the command from each of the routers and printed the return value. A return value for all the devices confirms SSH reachability, whereas a failure would have raised an exception, causing the code to end abruptly for that particular router (a sketch of how to guard against this is shown just before the template code below).

Network device configuration using templates

With all the routers reachable and accessible through SSH, let us configure a base template that sends Syslog messages to a Syslog server and ensures that only informational logs are sent. After configuration, a validation needs to be performed to confirm that logs are being sent to the Syslog server. The logging server information is as follows:

Logging server IP: 192.168.20.5
Logging port: 514
Logging protocol: TCP

Additionally, a loopback interface (loopback 30) needs to be configured with the description "{rtr} loopback interface".
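Before moving on to the template itself, here is the exception-handling refinement mentioned above. It is only a sketch: it catches a broad Exception for simplicity, and the comment notes that Netmiko also ships more specific exception classes (such as timeout and authentication errors) whose exact import path varies between Netmiko versions.

from netmiko import ConnectHandler

for n in range(1, 5):
    ip = "192.168.20.{0}".format(n)
    try:
        device = ConnectHandler(device_type='cisco_ios', ip=ip,
                                username='test', password='test')
        print(device.send_command("show version | in uptime"))
        device.disconnect()
    except Exception as err:
        # Netmiko provides more specific exceptions (for example timeout and
        # authentication failures); catching broadly keeps the loop running
        # so one unreachable router does not stop the whole script.
        print("Could not connect to", ip, "-", err)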
The code lines for the template are as follows:

logging host 192.168.20.5 transport tcp port 514
logging trap 6
interface loopback 30
description "{rtr} loopback interface"

To validate that the Syslog server is reachable and that the logs sent are informational, use the show logging command. If the output of the command contains the following text, the corresponding condition is confirmed:

Trap logging: level informational: confirms that the logs are sent as informational
Encryption disabled, link up: confirms that the Syslog server is reachable

The code to create the configuration, push it to the routers, and perform the validation is as follows:

from netmiko import ConnectHandler

template = """logging host 192.168.20.5 transport tcp port 514
logging trap 6
interface loopback 30
description "{rtr} loopback interface\""""

username = 'test'
password = "test"

for n in range(1, 5):
    ip = "192.168.20.{0}".format(n)
    device = ConnectHandler(device_type='cisco_ios', ip=ip,
                            username=username, password=password)

    # step 1: fetch the hostname of the router for the template
    output = device.send_command("show run | in hostname")
    output = output.split(" ")
    hostname = output[1]
    generatedconfig = template.replace("{rtr}", hostname)

    # step 2: push the generated config to the router
    # (send_config_set expects a list, so split the generated config into lines)
    generatedconfig = generatedconfig.split("\n")
    device.send_config_set(generatedconfig)

    # step 3: perform validations
    print ("********")
    print ("Performing validation for :", hostname + "\n")
    output = device.send_command("show logging")
    if ("encryption disabled, link up" in output):
        print ("Syslog is configured and reachable")
    else:
        print ("Syslog is NOT configured and NOT reachable")
    if ("Trap logging: level informational" in output):
        print ("Logging set for informational logs")
    else:
        print ("Logging not set for informational logs")

    print ("\nLoopback interface status:")
    output = device.send_command("show interfaces description | in loopback interface")
    print (output)
    print ("************\n")

The output of running the preceding script shows the validation results for each router.

Another key aspect of creating network templates is understanding the type of infrastructure device to which the template needs to be applied. As we generate configurations from templates, there are times when we want to save the generated configurations to files instead of pushing them directly to devices. This is needed when we want to validate the configurations, or keep a historical repository of the configurations that are to be applied to the routers. Let us look at the same example, only this time the configuration will be saved in files instead of being written back directly to the routers. The code to generate the configuration and save it as a file is as follows:

from netmiko import ConnectHandler
import os

template = """logging host 192.168.20.5 transport tcp port 514
logging trap 6
interface loopback 30
description "{rtr} loopback interface\""""

username = 'test'
password = "test"

for n in range(1, 5):
    ip = "192.168.20.{0}".format(n)
    device = ConnectHandler(device_type='cisco_ios', ip=ip,
                            username=username, password=password)

    # step 1: fetch the hostname of the router for the template
    output = device.send_command("show run | in hostname")
    output = output.split(" ")
    hostname = output[1]
    generatedconfig = template.replace("{rtr}", hostname)

    # step 2: create a separate config file for each router, ready to be pushed
    configfile = open(hostname + "_syslog_config.txt", "w")
    configfile.write(generatedconfig)
    configfile.close()

# step 3 (validation): read the file generated for each router
# (created as routername_syslog_config.txt)
print ("Showing contents for generated config files....")
for file in os.listdir('./'):
    if file.endswith(".txt"):
        if ("syslog_config" in file):
            hostname = file.split("_")[0]
            fileconfig = open(file)
            print ("\nShowing contents of " + hostname)
            print (fileconfig.read())
            fileconfig.close()

The output of running the preceding script prints the generated configuration for each router. In a similar fashion to the previous example, the configuration is generated, but this time, instead of being pushed directly to the routers, it is stored in separate files whose names are based on the router names provided as input. In each case, a .txt file is created (for example, rtr1_syslog_config.txt is generated for the rtr1 router during execution of the script). As a final validation step, we read all the .txt files and print the generated configuration for every file whose name contains syslog_config.

There are times when we have a multi-vendor environment, and manually creating a customized configuration for each vendor is a difficult task. Let us see an example in which we leverage a library (PySNMP) to fetch details about the devices in the infrastructure using Simple Network Management Protocol (SNMP). For our test, we are using the SNMP community key mytest on the routers to fetch their model and version. The code to get the version and model of a router is as follows:

# snmp_python.py
from pysnmp.hlapi import *

for n in range(1, 3):
    server_ip = "192.168.20.{0}".format(n)
    errorIndication, errorStatus, errorIndex, varBinds = next(
        getCmd(SnmpEngine(),
               CommunityData('mytest', mpModel=0),
               UdpTransportTarget((server_ip, 161)),
               ContextData(),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
    )
    print ("\nFetching stats for...", server_ip)
    for varBind in varBinds:
        print (varBind[1])

As we can see, the SNMP query was performed on a couple of routers (192.168.20.1 and 192.168.20.2), using the standard Management Information Base (MIB) object sysDescr. The value returned by the routers for this MIB request is the make and model of the router and the OS version it is currently running. Using SNMP, we can fetch many vital statistics of the infrastructure and can generate configurations based upon the returned values. This ensures that we have standard configurations even in a multi-vendor environment. As a sample, let us use the SNMP approach to determine the number of interfaces that a particular router has and, based upon the return values, dynamically generate a configuration regardless of the number of interfaces available on the device.
The code to fetch the available interfaces of a router is as follows:

# snmp_python_interfacestats.py
from pysnmp.entity.rfc3413.oneliner import cmdgen

cmdGen = cmdgen.CommandGenerator()

for n in range(1, 3):
    server_ip = "192.168.20.{0}".format(n)
    print ("\nFetching stats for...", server_ip)
    errorIndication, errorStatus, errorIndex, varBindTable = cmdGen.bulkCmd(
        cmdgen.CommunityData('mytest'),
        cmdgen.UdpTransportTarget((server_ip, 161)),
        0, 25,
        '1.3.6.1.2.1.2.2.1.2'
    )
    for varBindTableRow in varBindTable:
        for name, val in varBindTableRow:
            print('%s = Interface Name: %s' % (name.prettyPrint(), val.prettyPrint()))

Using an SNMP bulk walk, we query the interfaces MIB on each router. The result of the query is a list that is parsed to fetch the SNMP MIB ID for each interface, along with its description.

Multithreading

A key focus area while performing operations on multiple devices is how quickly we can perform the actions. To put this into perspective: if each router takes around 10 seconds to log in, gather the output, and log out, and we have around 30 routers to collect this information from, we would need 10 * 30 = 300 seconds for the program to complete. If we need more advanced or complex calculations on each output, which might take up to a minute per device, it would take 30 minutes for just 30 routers. This becomes very inefficient as complexity and scale grow. To help with this, we need to add parallelism to our programs. Let us log in to each of the routers and fetch details in parallel (using multithreading):

# parallel_query.py
from netmiko import ConnectHandler
from datetime import datetime
from threading import Thread

startTime = datetime.now()
threads = []

def checkparallel(ip):
    device = ConnectHandler(device_type='cisco_ios', ip=ip,
                            username='test', password='test')
    output = device.send_command("show run | in hostname")
    output = output.split(" ")
    hostname = output[1]
    print ("\nHostname for IP %s is %s" % (ip, hostname))

for n in range(1, 5):
    ip = "192.168.20.{0}".format(n)
    t = Thread(target=checkparallel, args=(ip,))
    t.start()
    threads.append(t)

# wait for all threads to complete
for t in threads:
    t.join()

print ("\nTotal execution time:")
print(datetime.now() - startTime)

The same set of routers, queried in parallel, takes approximately 8 seconds to return the results.

Summary

In this tutorial, we learned how to interact with network devices through Python and got familiar with Netmiko, an extensively used Python library for network interactions. You also learned how to interact with multiple network devices using a simulated lab in GNS3 and got to know device interaction through SNMP. Additionally, we touched on multithreading, which is a key component of scalability, through various examples. To learn how to make your network robust by leveraging the power of Python, Ansible and other network automation tools, check out our book Practical Network Automation - Second Edition.

AWS announces more flexibility in its Certification Exams, drops its exam prerequisites
Top 10 IT certifications for cloud and networking professionals in 2018
What matters on an engineering resume? Hacker Rank report says skills, not certifications

Creating a simple modular application in Java 11 [Tutorial]

Prasad Ramesh
20 Feb 2019
11 min read
Modular programming enables one to organize code into independent, cohesive modules, which can be combined to achieve the desired functionality. This article is an excerpt from a book written by Nick Samoylov and Mohamed Sanaulla titled Java 11 Cookbook - Second Edition. In this book, you will learn how to implement object-oriented designs using classes and interfaces in Java 11. The complete code for the examples shown in this tutorial can be found on GitHub.

You may be wondering what this modularity is all about, and how to create a modular application in Java. In this article, we will try to clear up the confusion around creating modular applications in Java by walking you through a simple example. Our goal is to show you how to create a modular application; hence, we picked a simple example so as to focus on that goal. Our example is a simple advanced calculator, which checks whether a number is prime, calculates the sum of prime numbers, checks whether a number is even, and calculates the sum of even and odd numbers.

Getting ready

We will divide our application into two modules:

The math.util module, which contains the APIs for performing the mathematical calculations
The calculator module, which launches an advanced calculator

How to do it

Let's implement the APIs in the com.packt.math.MathUtil class, starting with the isPrime(Integer number) API:

public static Boolean isPrime(Integer number){
  if ( number == 1 ) {
    return false;
  }
  return IntStream.range(2, number).noneMatch(i -> number % i == 0);
}

Implement the sumOfFirstNPrimes(Integer count) API:

public static Integer sumOfFirstNPrimes(Integer count){
  return IntStream.iterate(1, i -> i+1)
                  .filter(j -> isPrime(j))
                  .limit(count).sum();
}

Let's write a function to check whether a number is even:

public static Boolean isEven(Integer number){
  return number % 2 == 0;
}

The negation of isEven tells us whether the number is odd. We can have functions to find the sum of the first N even numbers and the first N odd numbers, as shown here:

public static Integer sumOfFirstNEvens(Integer count){
  return IntStream.iterate(1, i -> i+1)
                  .filter(j -> isEven(j))
                  .limit(count).sum();
}

public static Integer sumOfFirstNOdds(Integer count){
  return IntStream.iterate(1, i -> i+1)
                  .filter(j -> !isEven(j))
                  .limit(count).sum();
}

We can see in the preceding APIs that the following operations are repeated:

An infinite sequence of numbers starting from 1
Filtering the numbers based on some condition
Limiting the stream of numbers to a given count
Finding the sum of the numbers thus obtained

Based on this observation, we can refactor the preceding APIs and extract these operations into a method, as follows:

Integer computeFirstNSum(Integer count, IntPredicate filter){
  return IntStream.iterate(1, i -> i+1)
                  .filter(filter)
                  .limit(count).sum();
}

Here, count is the limit of numbers we need to find the sum of, and filter is the condition for picking the numbers for summing. Let's rewrite the APIs based on the refactoring we just did:

public static Integer sumOfFirstNPrimes(Integer count){
  return computeFirstNSum(count, (i -> isPrime(i)));
}

public static Integer sumOfFirstNEvens(Integer count){
  return computeFirstNSum(count, (i -> isEven(i)));
}

public static Integer sumOfFirstNOdds(Integer count){
  return computeFirstNSum(count, (i -> !isEven(i)));
}

So far, we have seen a few APIs around mathematical computations. These APIs are part of our com.packt.math.MathUtil class.
The complete code for this class can be found at Chapter03/2_simple-modular-math-util/math.util/com/packt/math, in the codebase downloaded for this book.

Let's make this small utility class part of a module named math.util. The following are some conventions we use to create a module:

Place all the code related to the module under a directory named math.util and treat this as our module root directory.
In the root folder, insert a file named module-info.java.
Place the packages and the code files under the root directory.

What does module-info.java contain? The following:

The name of the module
The packages it exports, that is, the ones it makes available for other modules to use
The modules it depends on
The services it uses
The service for which it provides an implementation

Our math.util module doesn't depend on any other module (except, of course, the java.base module). However, it makes its API available to other modules (if it didn't, this module's existence would be questionable). Let's go ahead and put this statement into code:

module math.util{
  exports com.packt.math;
}

We are telling the Java compiler and runtime that our math.util module is exporting the code in the com.packt.math package to any module that depends on math.util. The code for this module can be found at Chapter03/2_simple-modular-math-util/math.util.

Now, let's create another module, calculator, that uses the math.util module. This module has a Calculator class whose job is to accept the user's choice of which mathematical operation to execute, and then the input required to execute the operation. The user can choose from five available mathematical operations:

Prime number check
Even number check
Sum of N primes
Sum of N evens
Sum of N odds

Let's see this in code:

private static Integer acceptChoice(Scanner reader){
  System.out.println("************Advanced Calculator************");
  System.out.println("1. Prime Number check");
  System.out.println("2. Even Number check");
  System.out.println("3. Sum of N Primes");
  System.out.println("4. Sum of N Evens");
  System.out.println("5. Sum of N Odds");
  System.out.println("6. Exit");
  System.out.println("Enter the number to choose operation");
  return reader.nextInt();
}

Then, for each of the choices, we accept the required input and invoke the corresponding MathUtil API, as follows (each case body is wrapped in braces so that the local variables it declares stay scoped to that case):

switch(choice){
  case 1: {
    System.out.println("Enter the number");
    Integer number = reader.nextInt();
    if (MathUtil.isPrime(number)){
      System.out.println("The number " + number + " is prime");
    } else {
      System.out.println("The number " + number + " is not prime");
    }
    break;
  }
  case 2: {
    System.out.println("Enter the number");
    Integer number = reader.nextInt();
    if (MathUtil.isEven(number)){
      System.out.println("The number " + number + " is even");
    }
    break;
  }
  case 3: {
    System.out.println("How many primes?");
    Integer count = reader.nextInt();
    System.out.println(String.format("Sum of %d primes is %d",
        count, MathUtil.sumOfFirstNPrimes(count)));
    break;
  }
  case 4: {
    System.out.println("How many evens?");
    Integer count = reader.nextInt();
    System.out.println(String.format("Sum of %d evens is %d",
        count, MathUtil.sumOfFirstNEvens(count)));
    break;
  }
  case 5: {
    System.out.println("How many odds?");
    Integer count = reader.nextInt();
    System.out.println(String.format("Sum of %d odds is %d",
        count, MathUtil.sumOfFirstNOdds(count)));
    break;
  }
}

The complete code for the Calculator class can be found at Chapter03/2_simple-modular-math-util/calculator/com/packt/calculator/Calculator.java.
Let's create the module definition for our calculator module in the same way we created it for the math.util module:

module calculator{
  requires math.util;
}

In the preceding module definition, we state that the calculator module depends on the math.util module by using the requires keyword. The code for this module can be found at Chapter03/2_simple-modular-math-util/calculator.

Let's compile the code:

javac -d mods --module-source-path . $(find . -name "*.java")

The preceding command has to be executed from Chapter03/2_simple-modular-math-util. The compiled code from both modules, math.util and calculator, will be placed in the mods directory. Just a single command, and everything, including the dependency between the modules, is taken care of by the compiler. We didn't require build tools such as ant to manage the compilation of modules. The --module-source-path option is the new command-line option for javac, specifying the location of our module source code.

Let's execute the preceding code:

java --module-path mods -m calculator/com.packt.calculator.Calculator

The --module-path option, similar to --classpath, is the new command-line option for java, specifying the location of the compiled modules. After running the preceding command, you will see the calculator in action.

Congratulations! With this, we have a simple modular application up and running. We have provided scripts to test the code on both Windows and Linux platforms. Please use run.bat for Windows and run.sh for Linux.

How it works

Now that you have been through the example, we will look at how to generalize it so that we can apply the same pattern in all our modules. We followed a particular convention to create the modules:

|application_root_directory
|--module1_root
|----module-info.java
|----com
|------packt
|--------sample
|----------MyClass.java
|--module2_root
|----module-info.java
|----com
|------packt
|--------test
|----------MyAnotherClass.java

We place the module-specific code within its own folder, with a corresponding module-info.java file at the root of that folder. This way, the code is organized well.

Let's look into what module-info.java can contain. From the Java language specification (http://cr.openjdk.java.net/~mr/jigsaw/spec/lang-vm.html), a module declaration is of the following form:

{Annotation} [open] module ModuleName { {ModuleStatement} }

Here's the syntax, explained:

{Annotation}: This is any annotation of the form @Annotation.
open: This keyword is optional. An open module makes all its components accessible at runtime via reflection. However, at compile time and runtime, only those components that are explicitly exported are accessible.
module: This is the keyword used to declare a module.
ModuleName: This is the name of the module, a valid Java identifier that may contain dots (.) between the identifier names, similar to math.util.
{ModuleStatement}: This is a collection of the permissible statements within a module definition. Let's expand this next.

A module statement is of the following form:

ModuleStatement:
  requires {RequiresModifier} ModuleName ;
  exports PackageName [to ModuleName {, ModuleName}] ;
  opens PackageName [to ModuleName {, ModuleName}] ;
  uses TypeName ;
  provides TypeName with TypeName {, TypeName} ;

The module statement is decoded here:

requires: This is used to declare a dependency on a module. {RequiresModifier} can be transitive, static, or both.
Transitive means that any module that depends on the given module also implicitly depends on the module that is required by the given module transitively. Static means that the module dependency is mandatory at compile time, but optional at runtime. Some examples are requires math.util, requires transitive math.util, and requires static math.util.

exports: This is used to make the given packages accessible to the dependent modules. Optionally, we can restrict the package's accessibility to specific modules by specifying the module name, such as exports com.packt.math to calculator.

opens: This is used to open a specific package. We saw earlier that we can open a module by specifying the open keyword in the module declaration. But that can be less restrictive. So, to make it more restrictive, we can open a specific package for reflective access at runtime by using the opens keyword, for example, opens com.packt.math.

uses: This is used to declare a dependency on a service interface that is accessible via java.util.ServiceLoader. The service interface can be in the current module or in any module that the current module depends on.

provides: This is used to declare a service interface and provide it with at least one implementation. The service interface can be declared in the current module or in any other dependent module. However, the service implementation must be provided in the same module; otherwise, a compile-time error will occur.

We will look at the uses and provides clauses in more detail in the Using services to create loose coupling between the consumer and provider modules recipe.

The module source of all modules can be compiled at once using the --module-source-path command-line option. This way, all the modules will be compiled and placed in their corresponding directories under the directory provided by the -d option. For example, javac -d mods --module-source-path . $(find . -name "*.java") compiles the code in the current directory into the mods directory.

Running the code is equally simple. We specify the path where all our modules are compiled using the command-line option --module-path. Then, we mention the module name along with the fully qualified main class name using the command-line option -m, for example, java --module-path mods -m calculator/com.packt.calculator.Calculator.

In this tutorial, we learned to create a simple modular Java application. To learn more Java 11 recipes, check out the book Java 11 Cookbook - Second Edition.

Brian Goetz on Java futures at FOSDEM 2019
7 things Java programmers need to watch for in 2019
Clojure 1.10 released with Prepl, improved error reporting and Java compatibility

Gaussian prototypical networks in meta-learning [Tutorial]

Prasad Ramesh
19 Feb 2019
7 min read
A Gaussian prototypical network is a variant of a prototypical network. A prototypical network learns the embeddings of the data points, builds a class prototype by taking the mean embedding of each class, and then uses the class prototypes to perform classification. This article is an excerpt from a book written by Sudharsan Ravichandiran titled Hands-On Meta-Learning with Python. In this book, you will learn the prototypical network along with its variants.

In a Gaussian prototypical network, along with generating embeddings for the data points, we add a confidence region around them, characterized by a Gaussian covariance matrix. Having a confidence region helps in characterizing the quality of individual data points and is useful in the case of noisy and less homogeneous data. So, in Gaussian prototypical networks, the output of the encoder will be the embeddings as well as the covariance matrix. Instead of using the full covariance matrix, we include either a radius or a diagonal component from the covariance matrix along with the embeddings:

Radius component: If we use the radius component of the covariance matrix, then the dimension of our covariance matrix would be 1, as the radius is just a single number.
Diagonal component: If we use the diagonal component of the covariance matrix, then the dimension of our covariance matrix would be the same as the embedding dimension.

Also, instead of using the covariance matrix directly, we use the inverse of the covariance matrix. We can convert the raw covariance matrix into the inverse covariance matrix using any of the following methods. Let Sraw be the raw covariance matrix and S be the inverse covariance matrix:

S = 1 + Softplus(Sraw)
S = 1 + sigmoid(Sraw)
S = 1 + 4 * sigmoid(Sraw)
S = offset + scale * softplus(Sraw/div), where offset and scale are trainable parameters

So, the encoder, along with generating the embedding for the input, also returns the covariance matrix. We use either the diagonal or the radius component of the covariance matrix, and instead of using the covariance matrix directly, we use the inverse covariance matrix.

But what is the use of having the covariance matrix along with the embeddings? As said earlier, it adds a confidence region around the data points and is very useful in the case of noisy data. Consider two classes, A and B, drawn in an embedding space: dark dots represent the embeddings of the data points, and the circles around the dark dots indicate their covariance matrices. A big dotted circle represents the overall covariance matrix for a class, and a star in the middle indicates the class prototype. Having this covariance matrix around the embeddings gives us a confidence region around each data point and around the class prototypes.

Let's better understand this by looking at the code. Let's say we have an image, X, and we want to generate embeddings for it. Let's represent the covariance matrix by sigma. First, we select which component of the covariance matrix we want to use, that is, the diagonal or the radius component. If we use the radius component, our covariance matrix dimension would be just one; if we opt for the diagonal component, the size of the covariance matrix would be the same as the embedding dimension:

if component == 'radius':
    covariance_matrix_dim = 1
else:
    covariance_matrix_dim = embedding_dim

Now, we define our encoder. Since our input is an image, we use a convolutional block as our encoder.
So, we define the size of the filters, the number of filters, and the pooling layer size:

filters = [3,3,3,3]
num_filters = [64,64,64, embedding_dim + covariance_matrix_dim]
pools = [2,2,2,2]

We initialize embeddings as our image, X:

previous_channels = 1
embeddings = X
weight = []
bias = []
conv_relu = []
conv = []
conv_pooled = []

Then, we perform the convolutional operations and get the embeddings:

for i in range(len(filters)):
    filter_size = filters[i]
    num_filter = num_filters[i]
    pool = pools[i]
    weight.append(tf.get_variable("weights_"+str(i), shape=[filter_size, filter_size, previous_channels, num_filter]))
    bias.append(tf.get_variable("bias_"+str(i), shape=[num_filter]))
    conv.append(tf.nn.conv2d(embeddings, weight[i], strides=[1,1,1,1], padding='SAME') + bias[i])
    conv_relu.append(tf.nn.relu(conv[i]))
    conv_pooled.append(tf.nn.max_pool(conv_relu[i], ksize=[1,pool,pool,1], strides=[1,pool,pool,1], padding="VALID"))
    previous_channels = num_filter
    embeddings = conv_pooled[i]

We take the output of the last convolutional layer as our embeddings and reshape the result so that it holds the embeddings as well as the covariance matrix:

X_encoded = tf.reshape(embeddings, [-1, embedding_dim + covariance_matrix_dim])

Now, we split the embeddings and the raw covariance matrix, as we need to convert the raw covariance matrix into the inverse covariance matrix:

embeddings, raw_covariance_matrix = tf.split(X_encoded, [embedding_dim, covariance_matrix_dim], 1)

Next, we calculate the inverse of the covariance matrix using any of the methods discussed:

if inverse_transform_type == "softplus":
    offset = 1.0
    scale = 1.0
    inv_covariance_matrix = offset + scale * tf.nn.softplus(raw_covariance_matrix)
elif inverse_transform_type == "sigmoid":
    offset = 1.0
    scale = 1.0
    inv_covariance_matrix = offset + scale * tf.sigmoid(raw_covariance_matrix)
elif inverse_transform_type == "sigmoid_2":
    offset = 1.0
    scale = 4.0
    inv_covariance_matrix = offset + scale * tf.sigmoid(raw_covariance_matrix)
elif inverse_transform_type == "other":
    init = tf.constant(1.0)
    scale = tf.get_variable("scale", initializer=init)
    div = tf.get_variable("div", initializer=init)
    offset = tf.get_variable("offset", initializer=init)
    inv_covariance_matrix = offset + scale * tf.nn.softplus(raw_covariance_matrix/div)

So far, we have seen that we calculate the covariance matrix along with the embeddings of an input. What's next? How can we compute the class prototype? The class prototype is computed from the diagonal of the inverse covariance matrix and the class's embeddings: a weighted mean of the embeddings, with the inverse covariance values acting as weights, where the superscript c denotes the class. After computing the prototype for each of the classes, we learn the embedding of the query point. Let x' be the embedding of a query point. We then compute the distance between the query point embedding and each class prototype, weighted by the class's inverse covariance. Finally, we predict the class of a query point as the class whose prototype has the minimum distance to it.

The algorithm for Gaussian prototypical networks

Now, we will better understand the Gaussian prototypical network by going through it step by step:

Let's say we have a dataset, D = {(x1, y1), (x2, y2), ..., (xi, yi)}, where x is the feature and y is the label. Let's say we have a binary label, which means we have only two classes, 0 and 1. We will sample data points at random, without replacement, from each of the classes of our dataset, D, and create our support set, S.
Similarly, we sample data points at random per class and create the query set, Q.
We will pass the support set to our embedding function, f().
The embedding function will generate the embeddings for our support set, along with the covariance matrix.
We calculate the inverse of the covariance matrix.
We compute the prototype of each class in the support set as the inverse-covariance-weighted mean of that class's support embeddings, as described in the previous section.
After computing the prototype of each class in the support set, we learn the embeddings for the query set, Q. Let's say x' is the embedding of a query point.
We calculate the distance of the query point embedding to each class prototype, weighted by the class's inverse covariance.
After calculating the distances between the class prototypes and the query embedding, we predict the class of the query point as the class with the minimum distance. (A small standalone sketch of these computations is given after the summary below.)

In this tutorial, we learned about the Gaussian prototypical network, which uses embeddings and the covariance matrix to compute the class prototype. To learn more about meta-learning in Python, check out the book Hands-On Meta-Learning with Python.

What is Meta-Learning?
Introducing Open AI's Reptile: The latest scalable meta-learning Algorithm on the block
"Deep meta reinforcement learning will be the future of AI where we will be so close to achieving artificial general intelligence (AGI)", Sudharsan Ravichandiran
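As a closing illustration, here is a minimal NumPy sketch of the prototype and distance computations summarized in the algorithm steps above. It is a sketch rather than the book's code: the inverse-covariance-weighted mean and weighted distance are one common formulation of these steps, and all numbers are made up.

import numpy as np

def inverse_covariance(raw, offset=1.0, scale=1.0):
    # softplus keeps every component strictly positive, so it can act as a weight
    return offset + scale * np.log1p(np.exp(raw))

# made-up support set for one class: 4 samples with 3-dimensional embeddings
embeddings = np.random.randn(4, 3)     # encoder embeddings for the class
raw = np.random.randn(4, 3)            # raw (unconstrained) covariance outputs
s = inverse_covariance(raw)            # per-dimension inverse covariance values

# class prototype: inverse-covariance-weighted mean of the support embeddings
prototype = (s * embeddings).sum(axis=0) / s.sum(axis=0)

# distance from a query embedding to the prototype, weighted by the
# aggregated inverse covariance of the class
query = np.random.randn(3)
class_s = s.sum(axis=0)
distance = np.sum(class_s * (query - prototype) ** 2)

print("prototype:", prototype)
print("squared distance:", distance)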

How to create a native mobile app with React Native [Tutorial]

Bhagyashree R
19 Feb 2019
12 min read
React Native was developed by Facebook, along the lines of the React framework. Instead of rendering components to a browser's DOM, React Native (RN) invokes native APIs to create internal components that are handled through your JS code. There are some differences between the usual HTML elements and RN's components, but they are not too hard to overcome. With this tool, you are actually building a native app that looks and behaves exactly like any other native application, except that you use a single language, JS, for both Android and iOS development.

This article is taken from the book Modern JavaScript Web Development Cookbook by Federico Kereki. This book is a perfect blend of solutions for traditional JavaScript development and modern areas that developers have recently been exploring with JavaScript. This problem-solving guide teaches you popular problem-solving techniques for JavaScript on servers, browsers, mobile phones, and desktops. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we'll see how to install and use React Native to build a mobile application. We will also see how to add development tools like ESLint, Flow, and Prettier.

Setting up a RN application

There are three ways to set up a RN application: manually, which you won't want to do; with packages, using the react-native-cli command-line interface; or by using a package, create-react-native-app (or CRAN), which is very similar to create-react-app. We start by getting a command-line utility, which will include plenty of other packages:

npm install create-react-native-app -g

Afterward, we can create and run a simple project with just three commands:

create-react-native-app yourprojectname
cd yourprojectname
npm start

How it works...

When you run your app, it starts a server on your machine, at port 19000 or 19001, to which you will connect using the Expo application. You can download Expo from its official website; it is available for both Android and iOS. Install it by following the instructions onscreen.

Note that both the phone and your machine must be on the same local network, and your machine must allow connections to ports 19000 and 19001; you may have to modify your firewall for this to work. After you use the Scan QR Code option, there will be some synchronization, and soon you'll see your basic code running with no problems.

Furthermore, if you modify the App.js source code, the changes will be immediately reflected in your device, which means all is well! To make sure this happens, shake the phone to enable the debugging menu, and make sure that Live Reload and Hot Reloading are enabled. You'll also require Remote JS Debugging for later.

Adding development tools

Next, we need to add all the development tools required. We want to have ESLint for code checking, Prettier for formatting, and Flow for data types. CRAN takes care of including Babel and Jest, so we won't have to do anything for those two.

How to do it...

As opposed to React, where we need to add a special rewiring package in order to work with specific configurations, in RN we can just add some packages and configuration files, and we'll be ready to go.
Adding ESLint For ESLint, we'll have quite a list of packages we want: npm install --save-dev \ eslint eslint-config-recommended eslint-plugin-babel \ eslint-plugin-flowtype eslint-plugin-react eslint-plugin-react-native We'll require a separate .eslintrc file, as in the case with React. The appropriate contents include the following: { "parser": "babel-eslint", "parserOptions": { "ecmaVersion": 2017, "sourceType": "module", "ecmaFeatures": { "jsx": true } }, "env": { "node": true, "browser": true, "es6": true, "jest": true, "react-native/react-native": true }, "extends": [ "eslint:recommended", "plugin:flowtype/recommended", "plugin:react/recommended", "plugin:react-native/all" ], "plugins": ["babel", "flowtype", "react", "react-native"], "rules": { "no-console": "off", "no-var": "error", "prefer-const": "error", "flowtype/no-types-missing-file-annotation": 0 } } Adding Flow Having completed that, ESLint is set to recognize our code, but we have to configure Flow as well: npm install --save-dev flow flow-bin flow-coverage-report flow-typed We'll have to add a couple of lines to the scripts section of package.json: "scripts": { "start": "react-native-scripts start", . . . "flow": "flow", "addTypes": "flow-typed install" }, Then, we have to initialize the working directories of Flow: npm run flow init The contents of the .flowconfig file look like this: [ignore] .*/node_modules/.* [include] [libs] [lints] all=warn untyped-type-import=off unsafe-getters-setters=off [options] include_warnings=true [strict] Adding Prettier There's not much to installing Prettier, all we need is an npm command, plus the .prettierrc file. For the former, just use the following command: npm install --save-dev prettier For configuration, we can use the contents of this .prettierrc file: { "tabWidth": 4, "printWidth": 75 } How it works... Let's check that everything is OK. We'll start by looking at the App.js file that was created by CRAN, and we can immediately verify that the tools work—because a problem is detected! Have a look at the following screenshot: The rule that fails is a new one, from eslint-plugin-react-native: no-color-literals, because we are using constants in styling, which could prove to be a maintenance headache in the future. We can solve that by adding a variable, and we'll use a type declaration to make sure Flow is also running. The new code should be as follows: // Source file: App.original.fixed.js /* @flow */ import React from "react"; import { StyleSheet, Text, View } from "react-native"; export default class App extends React.Component<> { render() { return ( <View style={styles.container}> <Text>Open up App.js to start working on your app!</Text> <Text>Changes you make will automatically reload.</Text> <Text>Shake your phone to open the developer menu.</Text> </View> ); } } const white: string = "#fff"; const styles = StyleSheet.create({ container: { flex: 1, backgroundColor: white, alignItems: "center", justifyContent: "center" } }); Using native components Working with RN is very much like working with React—there are components, state, props, life cycle events, and so on—but there is a key difference: your own components won't be based on HTML but on specific RN ones. For instance, you won't be using <div> elements, but rather <View> ones, which will be then mapped by RN to a UIView for iOS, or to an Android.View for Android. Getting ready We will start with an example of countries and regions page, which you can find in the book's GitHub repository.  
Since we are using PropTypes, we'll need that package. Install it with the following command: npm install prop-types --save Then, we'll have to install some packages, starting with Redux and relatives. Actually, CRAN already includes redux and react-redux, so we don't need those, but redux-thunk isn't included.  We can install it using the following command: npm install react react-redux redux-thunk --save We'll also be using axios for async calls: npm install axios --save Our final step will be to run the server code (you can find it in the GitHub repo) so that our app will be able to do async calls. After downloading the server code from the GitHub repo, go to the directory, and just enter the following command: node out/restful_server.js. Let's now see how we can modify our code to make it appropriate for RN. How to do it... Since RN uses its own components, your HTML experience will be of little use. Here, we'll see some changes, but in order to derive the full benefits of all of RN's possibilities, you'll have to study its components on your own. Let's start with the <RegionsTable> component, which is rather simple: // Source file: src/regionsApp/regionsTable.component.js . . . render() { if (this.props.list.length === 0) { return ( <View> <Text>No regions.</Text> </View> ); } else { const ordered = [...this.props.list].sort( (a, b) => (a.regionName < b.regionName ? -1 : 1) ); return ( <View> {ordered.map(x => ( <View key={x.countryCode + "-" + x.regionCode}> <Text>{x.regionName}</Text> </View> ))} </View> ); } } Notice that there are no changes in the rest of the component, and all your React knowledge is still valid; you just have to adjust the output of your rendering method. Next, we'll change the <CountrySelect> component to use <Picker>, which is sort of similar, but we'll require some extra modifications. Let's take a look at our component, highlighting the parts where changes are needed: // Source file: src/regionsApp/countrySelect.component.js /* @flow */ import React from "react"; import PropTypes from "prop-types"; import { View, Text, Picker } from "react-native"; export class CountrySelect extends React.PureComponent<{ dispatch: ({}) => any }> { static propTypes = { loading: PropTypes.bool.isRequired, currentCountry: PropTypes.string.isRequired, list: PropTypes.arrayOf(PropTypes.object).isRequired, onSelect: PropTypes.func.isRequired, getCountries: PropTypes.func.isRequired }; componentDidMount() { if (this.props.list.length === 0) { this.props.getCountries(); } } onSelect = value => this.props.onSelect(value); render() { if (this.props.loading) { return ( <View> <Text>Loading countries...</Text> </View> ); } else { const sortedCountries = [...this.props.list].sort( (a, b) => (a.countryName < b.countryName ? -1 : 1) ); return ( <View> <Text>Country:</Text> <Picker onValueChange={this.onSelect} prompt="Country" selectedValue={this.props.currentCountry} > <Picker.Item key={"00"} label={"Select a country:"} value={""} /> {sortedCountries.map(x => ( <Picker.Item key={x.countryCode} label={x.countryName} value={x.countryCode} /> ))} </Picker> </View> ); } } } Lots of changes! Let's go through them in the order they occur: An unexpected change: if you want a <Picker> component to display its current value, you must set its selectedValue property; otherwise, even if the user selects a country, the change won't be seen onscreen. We'll have to provide an extra prop, currentCountry, which we'll get from the store, so we can use it as the selectedValue for our list. 
The fired event when the user selects a value is also different; the event handler will be called directly with the chosen value, instead of with an event from which to work with event.target.value. We have to replace the <select> element with <Picker>, and provide a prompt text prop that will be used when the expanded list is shown onscreen. We have to use <Item> elements for the individual options, noting that the label to be displayed is now a prop. Let's not forget the change when connecting the list of countries to the store; we'll only have to add an extra property to the getProps() function: // Source file: src/regionsApp/countrySelect.connected.js const getProps = state => ({ list: state.countries, currentCountry: state.currentCountry, loading: state.loadingCountries }); Now, all we need to do is see how the main app is set up. Our App.js code will be quite simple: // Source file: App.js /* @flow */ import React from "react"; import { Provider } from "react-redux"; import { store } from "./src/regionsApp/store"; import { Main } from "./src/regionsApp/main"; export default class App extends React.PureComponent<> { render() { return ( <Provider store={store}> <Main /> </Provider> ); } } This is pretty straightforward. The rest of the setup will be in the main.js file, which has some interesting details: // Source file: src/regionsApp/main.js /* @flow */ import React from "react"; import { View, StatusBar } from "react-native"; import { ConnectedCountrySelect, ConnectedRegionsTable } from "."; export class Main extends React.PureComponent<> { render() { return ( <View> <StatusBar hidden /> <ConnectedCountrySelect /> <ConnectedRegionsTable /> </View> ); } } Apart from the usage of <View> wherever we would previously have used <div> (a change to which you should already have gotten used to), there's an added detail: we don't want the status bar to show, so we use the <StatusBar> element, and make sure to hide it. How it works... Just for variety, instead of using my mobile phone, as I did earlier in this article, I decided to use an emulated device. After starting the application with npm start, I started my device, and soon got the following: If the user touches the <Picker> element, a popup will be displayed, listing the countries that were received from our Node server, as shown in the following screenshot: When the user actually taps on a country, the onValueChange event is fired, and after calling the server, the list of regions is displayed, as follows: Everything works, and is using native components; great! By the way, if you were not very sure about the selectedValue problem we described, just omit that prop, and when the user picks on a country, you'll get a bad result: This article walked you through the installation and set up the process of React Native and other development tools for developing the mobile version of a web app. If you found this post useful, do check out the book, Modern JavaScript Web Development Cookbook.  You will learn how to create native mobile applications for Android and iOS with React Native, build client-side web applications using React and Redux, and much more. React Native 0.59 RC0 is now out with React Hooks, and more The React Native team shares their open source roadmap, React Suite hits 3.4.0 How to create a desktop application with Electron [Tutorial]

Understand how to access the Dark Web with Tor Browser [Tutorial]

Savia Lobo
16 Feb 2019
8 min read
According to the Tor Project website: “Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security. The Tor network is a group of volunteer-operated servers that allows people to improve their privacy and security on the Internet. Tor's users employ this network by connecting through a series of virtual tunnels rather than making a direct connection, thus allowing both organizations and individuals to share information over public networks without compromising their privacy. Along the same line, Tor is an effective censorship circumvention tool, allowing its users to reach otherwise blocked destinations or content. Tor can also be used as a building block for software developers to create new communication tools with built-in privacy features.” This article is an excerpt taken from the book, Hands-On Dark Web Analysis written by Sion Retzkin. In this book, you will learn how to install operating systems and Tor Browser for privacy, security, and anonymity while accessing them. In this article, we will understand what Tor and the Tor browser is and how to install it in several ways. Tor (which is an acronym for The Onion Router, by the way) is a privacy-focused network that hides your traffic, by routing it through multiple random servers on the Tor network. So, instead of the packets that make up your communication with another party (person or organization), going from point A to B directly, using Tor, they will jump all over the place, between multiple servers, before reaching point B, hiding the trail. Additionally, the packets that make up the traffic (or communication) in the Tor network are wrapped in special layers, which only show the previous server or step that the packet came from, and the next step, hiding the entire route effectively. Tor Browser is a web browser, based on Firefox that was created for the purpose of accessing the Tor network, securely and privately. Even if you use Tor, this doesn't mean that you're secure. Why is that? Because Tor Browser has software vulnerabilities, the same as every other browser. It's also based on Firefox, so it inherits some of its vulnerabilities from there as well. You can minimize attack vectors by applying common security sense, and by employing various tools to try to limit or prevent malicious activity, related to infecting the Tor Browser or the host running it. Installing Tor on Linux Let's start with a classic installation, by accessing the Tor Project website, via a browser. The default browser that ships with Ubuntu is Firefox, which is what we'll use. Although you might think that this would be the best way to install Tor Browser, it's actually the least secure, since the Tor Project website is continuously targeted by hackers and might have any number of security or privacy issues on it. Instead of just downloading Tor Browser and immediately installing it (which is dangerous), you can either download the file and verify its hash (to verify that it is indeed the correct one), or you could install it through other methods, for example, via the Terminal, by using Linux commands, or from the Ubuntu Software Center. 
We'll start by going over the steps to download Tor Browser from the Tor Project website:

After booting your Linux installation, open your browser.
Enter the following address and navigate to it: https://www.torproject.org/download/download-easy.html.en#linux. Notice that the URL takes you directly to the Linux download section of the Tor Project website. I usually prefer this direct method, rather than starting with Google (or any other search engine), searching for Tor, and then accessing the Tor Project website, since, as you may know, Google collects information about users accessing it, and the whole idea of this book is to maintain our privacy and security. Also, always verify that you're accessing the Tor Project website via HTTPS.
Choose the correct architecture (32 or 64 bit), and click the Download link. You'll be able to choose what you want to do with the file: open it with Ubuntu's Archive Manager, or save it to a location on the disk.

Again, the quickest way to go would be to open the compressed file, but the more secure way is to download the file and verify its hash before doing anything else. The Tor Project provides GNU Privacy Guard (GPG) signature files with each version of Tor Browser. You will need to install GnuPG on your Linux OS, if it isn't there already, in order to be able to verify the hash of the browser package. To do so, just open the Terminal and type in the following:

sudo apt install gnupg

Enter your password when required, and the installation will commence. Most Linux installations already include gnupg.

After installing GnuPG, you need to import the key that signed the package. According to the Tor Project website, the Tor Browser import key is 0x4e2C6e8793298290. The Tor Project updates and changes the keys from time to time, so you can always navigate to https://www.torproject.org/docs/verifying-signatures.html.en to find the current import key if the one in the book doesn't work. The command to import the key is as follows:

gpg --keyserver pool.sks-keyservers.net --recv-keys 0x4e2C6e8793298290

This is followed by this:

gpg --fingerprint 0x4e2C6e8793298290

This will tell you whether the key fingerprint is correct.

Now, you need to download the .asc file, which is found on the Tor Browser Downloads page, next to the relevant package of the browser (it appears as sig, short for signature). You can find the Tor Browser download page here: https://www.torproject.org/projects/torbrowser.html

Now, you can verify the signature of the package, using the .asc file. To do so, enter the following command in the Terminal:

gpg --verify tor-browser-linux64-7.5.6_en-US.tar.xz.asc tor-browser-linux64-7.5.6_en-US.tar.xz

Note the 64 in the filename. If your OS is 32-bit, change the number to 32.

After verifying the hash (signature) of the Tor Browser package, you can install it. You can do so by either:

Double-clicking the Tor Browser package file (which will open up the Archive Manager program), clicking Extract, and choosing the location of your choice.
Right-clicking the file and choosing Extract here or Extract to, and choosing a location.

After extracting, perform the following steps:

Navigate to the location you defined.
Double-click on the Start-tor-browser.desktop file to launch Tor Browser.
Press Trust and Launch in the window that appears. Notice that the filename and icon change to Tor Browser. Press Connect and you will be connected to the Tor network, and will be able to browse it using Tor Browser.

Before we discuss using Tor Browser, let's talk about alternative ways to install it, for example, by using the Ubuntu Software application:

Start by clicking on the Ubuntu Software icon.
Search for Tor Browser, then click on the relevant result.
Click Install. After entering your password, the installation process will start. When it ends, click Launch to start Tor Browser.

Installing Tor Browser via the Terminal, from the downloaded package

Another way to install Tor is to use commands via the Terminal. There are several ways to do so, as follows:

First, download the required Tor Browser package from the website.
Verify the download, as we discussed before, and then keep the Terminal open.
Navigate to the location where you downloaded Tor, by entering the following command:

cd path/Tor_Browser_Directory

For example, note the following:

cd /downloads/tor-browser_en_US

Then, launch Tor Browser by running the following:

./start-tor-browser.desktop

Never launch Tor as root (or with the sudo command).

Installing the Tor Browser entirely via the Terminal

Next, we'll discuss how to install Tor entirely via the Terminal:

First, launch the Terminal, as before.
Then, execute the following command:

sudo apt install torbrowser-launcher

This command will install the Tor Browser launcher. We need root access to install an app, not to launch it. You can then run Tor Browser by executing the following command:

torbrowser-launcher

Thus, in this post, we talked about Tor, Tor Browser, how to install it in several ways, and how to use it. If you've enjoyed this post and want to know more about the concept of the Deep Web and the Dark Web and their significance in the security sector, head over to the book Hands-On Dark Web Analysis.

Tor Project gets its first official mobile browser for Android, the privacy-friendly Tor Browser
Tor Browser 8.0 powered by Firefox 60 ESR released
How to create a desktop application with Electron [Tutorial]


Highlights from Jack Dorsey’s live interview by Kara Swisher on Twitter: on lack of diversity, tech responsibility, physical safety and more

Natasha Mathur
14 Feb 2019
7 min read
Kara Swisher, Recode co-founder, interviewed Jack Dorsey, Twitter CEO, yesterday over Twitter. The interview (or 'Twitterview') was conducted in tweets using the hashtag #KaraJack. It started at 5 pm ET and lasted around 90 minutes. Let's have a look at the top highlights from the interview.

https://twitter.com/karaswisher/status/1095440667373899776

On fixing what is broken on social media, and physical safety

Swisher asked Dorsey why he isn't moving faster in his efforts to fix the disaster that has been caused so far on social media. To this, Dorsey replied that Twitter was trying to do "too much" in the past but that it has become better at prioritizing now. The number one focus now is a person's "physical safety", that is, the offline ramifications for Twitter users off the platform: "what people do offline with what they see online", says Dorsey. Some examples of 'offline ramifications' are "doxxing" (a harassment technique that reveals a person's personal information on the internet) and coordinated harassment campaigns.

Dorsey further added that replies, searches, trends, and mentions on Twitter are where most of the abuse happens and are the shared spaces people take advantage of. "We need to put our physical safety above all else. We don't have all the answers just yet. But that's the focus. I think it clarifies a lot of the work we need to do. Not all of it of course", said Dorsey.

On tech responsibility and improving the health of digital conversation on Twitter

When Swisher asked Dorsey what grade he would give to Silicon Valley and himself for embodying tech responsibility, he replied with a "C" for himself. He said that Twitter has made progress, but it's scattered and 'not felt enough'. He did not comment on what he thought of Silicon Valley's work in this area.

Swisher further highlighted that the goal of improving Twitter conversations has only remained empty talk so far. She asked Dorsey whether Twitter has made any actual progress in the last 18-24 months when it comes to addressing the issues regarding the "health of conversation" (which eventually plays into safety). Dorsey said these issues are the most important thing to fix right now and that it's a failure on Twitter's part to 'put the burden on victims'. He did not share a specific example of improvements made to the platform to further this goal.

When Swisher questioned him on how he intends to fix the issue, Dorsey mentioned that Twitter intends to be more proactive when it comes to enforcing healthy conversations, so that reporting and blocking become the last resort. He mentioned that Twitter takes action against all offenders who go against its policies, but that the system works reactively, responding only when someone reports the abuse. "If they don't report, we don't see it. Doesn't scale. Hence the need to focus on proactive", said Dorsey. Since Twitter is constantly evolving its policies to address the 'current issues', it is rooting these in fundamental human rights (UN) and is making physical safety the top priority alongside privacy.

On lack of diversity

https://twitter.com/jack/status/1095459084785004544

Swisher questioned Dorsey on his negligence towards addressing these issues. "I think it is because many of the people who made Twitter never ever felt unsafe," adds Swisher. Dorsey admits that the "lack of diversity" didn't help with the empathy of what people (especially women) experience on Twitter every day.
He further adds that Twitter should be reflective of the people that it's trying to serve, which is why it established a trust and safety council to get feedback. Swisher then asked him to provide three concrete examples of what Twitter has done to fix this. Dorsey mentioned that Twitter has:

Evolved its policies (for example, the misgendering policy).
Prioritized proactive enforcement by using machine learning to downrank bad actors, meaning they'll look at the probability of abuse from any one account. This is because if someone is abusing one account, they're probably doing the same on other accounts.
Given users more control in the product, such as muting of accounts with no profile picture.
Put more focus on coordinated behavior/gaming.

On Dorsey's dual CEO role

Swisher asked him why he insists on being the CEO of two publicly traded companies (Twitter and Square Inc.) that both require maximum effort at the same time. Dorsey said that his main focus is on building leadership in both and that it's not his ambition to be CEO of multiple companies "just for the sake of that". She further questioned him on whether he has any plans to hire someone as his "number 2". Dorsey said it's better to spread that kind of responsibility across several people, as it reduces dependencies and the company gets more options for future leadership. "I'm doing everything I can to help both. Effort doesn't come down to one person. It's a team", he said.

On Twitter breaks, Donald Trump, and Elon Musk

When initially asked how he feels about people not feeling good after being on Twitter for a while, Dorsey said he feels "terrible" and that it's depressing.

https://twitter.com/jack/status/1095457041844334593

"We made something with one intent. The world showed us how it wanted to use it. A lot has been great. A lot has been unexpected. A lot has been negative. We weren't fast enough to observe, learn, and improve", said Dorsey. He further added that he does not feel good about how Twitter tends to incentivize outrage, fast takes, short-term thinking, echo chambers, and fragmented conversations.

Swisher then questioned Dorsey on whether Twitter has ever intended to suspend Donald Trump and whether Twitter's business and engagement would suffer when Trump is no longer the president. Dorsey replied that Twitter is independent of any account or person, and that although the number of political conversations has increased on Twitter, that's just one experience. He further added that Twitter is ready for the 2020 elections and that it has partnered with government agencies to improve communication around threats.

https://twitter.com/jack/status/1095462610462433280

Moreover, on being asked about the most exciting, influential person on Twitter, Dorsey replied with Elon Musk. He said he likes how Elon is focused on solving existential problems and sharing his thinking openly. On being asked what he thought of how Alexandria Ocasio-Cortez is using Twitter, he replied that she is 'mastering the medium'.

Although Swisher managed to interview Dorsey over Twitter, the 'Twitterview' got quite confusing and soon went out of order. The conversations seemed all over the place and, as Kurt Wagner, tech journalist from Recode, puts it, "in order to find a permanent thread of the chat, you had to visit one of either Kara or Jack's pages and continually refresh". This made for a difficult experience overall and points towards the current flaws within the conversation system on Twitter.
Many users tweeted out their opinions on the interview:

https://twitter.com/RTKumaraSwamy/status/1095542363890446336
https://twitter.com/waltmossberg/status/1095454665305739264
https://twitter.com/kayvz/status/1095472789870436352
https://twitter.com/sukienniko/status/1095520835861864448
https://twitter.com/LauraGaviriaH/status/1095641232058011648

Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Twitter CEO, Jack Dorsey slammed by users after a photo of him holding 'smash Brahminical patriarchy' poster went viral
Jack Dorsey discusses the rumored 'edit tweet' button and tells users to stop caring about followers

Implementing a non-blocking cross-service communication with WebClient [Tutorial]

Amrata Joshi
13 Feb 2019
10 min read
The WebClient is the reactive replacement for the old RestTemplate. However, in WebClient, we have a functional API that fits better with the reactive approach and offers built-in mapping to Project Reactor types such as Flux or Mono. This article is an excerpt taken from the book Hands-On Reactive Programming in Spring 5, written by Oleh Dokuka and Igor Lozynskyi. The book covers the difference between a reactive system and reactive programming, the basics of reactive programming in Spring 5, and much more. In this article, you will understand the basics of non-blocking cross-service communication with WebClient, the reactive WebSocket API, the server-side WebSocket API, and much more.

WebClient.create("http://localhost/api")        // (1)
    .get()                                      // (2)
    .uri("/users/{id}", userId)                 // (3)
    .retrieve()                                 // (4)
    .bodyToMono(User.class)                     // (5)
    .map(...)                                   // (6)
    .subscribe();                               // (7)

In the preceding example, we create a WebClient instance using a factory method called create, shown at point (1). Here, the create method allows us to specify the base URI, which is used internally for all future HTTP calls. Then, in order to start building a call to a remote server, we may execute one of the WebClient methods that sounds like an HTTP method. In the previous example, we used WebClient#get, shown at point (2). Once we call the WebClient#get method, we operate on the request builder instance and can specify the relative path in the uri method, shown at point (3). In addition to the relative path, we can specify headers, cookies, and a request body. However, for simplicity, we have omitted those settings in this case and moved on to composing the request by calling the retrieve or exchange methods. In this example, we use the retrieve method, shown at point (4). This option is useful when we are only interested in retrieving the body and performing further processing. Once the request is set up, we may use one of the methods that help us with the conversion of the response body. Here, we use the bodyToMono method, which converts the incoming payload of the User to Mono, shown at point (5). Finally, we can build the processing flow of the incoming response using the Reactor API, and execute the remote call by calling the subscribe method.

WebClient follows the behavior described in the Reactive Streams specification. This means that only by calling the subscribe method will WebClient wire the connection and start sending the data to the remote server. Even though, in most cases, the most common response processing is body processing, there are some cases where we need to process the response status, headers, or cookies.
For example, let's build a call to our password checking service and process the response status in a custom way using the WebClient API:

class DefaultPasswordVerificationService                    // (1)
        implements PasswordVerificationService {

    final WebClient webClient;                              // (2)

    public DefaultPasswordVerificationService(
        WebClient.Builder webClientBuilder
    ) {
        this.webClient = webClientBuilder                   // (2.1)
            .baseUrl("http://localhost:8080")
            .build();
    }

    @Override                                               // (3)
    public Mono<Void> check(String raw, String encoded) {
        return webClient
            .post()                                         // (3.1)
            .uri("/check")
            .body(BodyInserters.fromPublisher(              // (3.2)
                Mono.just(new PasswordDTO(raw, encoded)),
                PasswordDTO.class
            ))
            .exchange()                                     // (3.3)
            .flatMap(response -> {                          // (3.4)
                if (response.statusCode().is2xxSuccessful()) {   // (3.5)
                    return Mono.empty();
                }
                else if (response.statusCode() == EXPECTATION_FAILED) {
                    return Mono.error(                      // (3.6)
                        new BadCredentialsException(...)
                    );
                }
                return Mono.error(new IllegalStateException());
            });
    }
}

The following numbered list describes the preceding code sample:

1. This is the implementation of the PasswordVerificationService interface.
2. This is the initialization of the WebClient instance. It is important to note that we use a WebClient instance per class here, so we do not have to initialize a new one on each execution of the check method. Such a technique reduces the need to initialize a new instance of WebClient and decreases the method's execution time. However, the default implementation of WebClient uses the Reactor-Netty HttpClient, which in default configurations shares a common pool of resources among all the HttpClient instances. Hence, the creation of a new HttpClient instance does not cost that much. Once the constructor of DefaultPasswordVerificationService is called, we start initializing webClient and use a fluent builder, shown at point (2.1), in order to set up the client.
3. This is the implementation of the check method. Here, we use the webClient instance in order to execute a post request, shown at point (3.1). In addition, we send the body, using the body method, and prepare to insert it using the BodyInserters#fromPublisher factory method, shown at point (3.2). We then execute the exchange method at point (3.3), which returns Mono<ClientResponse>. We may, therefore, process the response using the flatMap operator, shown at point (3.4). If the password is verified successfully, as shown at point (3.5), the check method returns Mono.empty. Alternatively, in the case of an EXPECTATION_FAILED (417) status code, we may return a Mono of BadCredentialsException, as shown at point (3.6).

As we can see from the previous example, in a case where it is necessary to process the status code, headers, cookies, and other internals of the common HTTP response, the most appropriate method is the exchange method, which returns ClientResponse. As mentioned, DefaultWebClient uses the Reactor-Netty HttpClient in order to provide asynchronous and non-blocking interaction with the remote server. However, DefaultWebClient is designed to be able to change the underlying HTTP client easily. For that purpose, there is a low-level reactive abstraction around the HTTP connection, which is called org.springframework.http.client.reactive.ClientHttpConnector. By default, DefaultWebClient is preconfigured to use ReactorClientHttpConnector, which is an implementation of the ClientHttpConnector interface.
Starting from Spring WebFlux 5.1, there is a JettyClientHttpConnector implementation, which uses the reactive HttpClient from Jetty. In order to change the underlying HTTP client engine, we may use the WebClient.Builder#clientConnector method and pass the desired instance, which might be either a custom implementation or an existing one. In addition to being a useful abstraction layer, ClientHttpConnector may be used in a raw format. For example, it may be used for downloading large files, on-the-fly processing, or just simple byte scanning. We will not go into details about ClientHttpConnector; we will leave this for curious readers to look into themselves.

Reactive WebSocket API

We have now covered most of the new features of the new WebFlux module. However, one of the crucial parts of the modern web is a streaming interaction model, where both the client and server can stream messages to each other. In this section, we will look at one of the most well-known protocols for duplex client-server communication, called WebSocket. Despite the fact that communication over the WebSocket protocol was introduced in the Spring Framework in early 2013 and designed for asynchronous message sending, the actual implementation still has some blocking operations. For instance, both writing data to I/O and reading data from I/O are still blocking operations, and therefore both impact the application's performance. Therefore, the WebFlux module has introduced an improved version of the infrastructure for WebSocket. WebFlux offers both client and server infrastructure. We are going to start by analyzing the server-side WebSocket and will then cover the client-side possibilities.

Server-side WebSocket API

WebFlux offers WebSocketHandler as the central interface for handling WebSocket connections. This interface has a method called handle, which accepts WebSocketSession. The WebSocketSession class represents a successful handshake between the client and server and provides access to information, including information about the handshake, session attributes, and the incoming stream of data. In order to learn how to deal with this information, let's consider the following example of responding to the sender with echo messages:

class EchoWebSocketHandler implements WebSocketHandler {    // (1)

    @Override
    public Mono<Void> handle(WebSocketSession session) {    // (2)
        return session                                      // (3)
            .receive()                                      // (4)
            .map(WebSocketMessage::getPayloadAsText)        // (5)
            .map(tm -> "Echo: " + tm)                       // (6)
            .map(session::textMessage)                      // (7)
            .as(session::send);                             // (8)
    }
}

As we can see from the previous example, the new WebSocket API is built on top of the reactive types from Project Reactor. Here, at point (1), we provide an implementation of the WebSocketHandler interface and override the handle method at point (2). Then, we use the WebSocketSession#receive method in order to build the processing flow of the incoming WebSocketMessage using the Flux API. WebSocketMessage is a wrapper around DataBuffer and provides additional functionality, such as translating the payload represented in bytes to text at point (5). Once the incoming message is extracted, we prepend the "Echo: " prefix to that text, shown at point (6), wrap the new text message in a WebSocketMessage, and send it back to the client using the WebSocketSession#send method. Here, the send method accepts Publisher<WebSocketMessage> and returns Mono<Void> as the result.
Therefore, using the as operator from the Reactor API, we may treat Flux as Mono<Void> and use session::send as a transformation function. Apart from the WebSocketHandler interface implementation, setting up the server-side WebSocket API requires configuring additional HandlerMapping and WebSocketHandlerAdapter instances. Consider the following code as an example of such a configuration:

@Configuration                                              // (1)
public class WebSocketConfiguration {

    @Bean                                                   // (2)
    public HandlerMapping handlerMapping() {
        SimpleUrlHandlerMapping mapping =
            new SimpleUrlHandlerMapping();                  // (2.1)
        mapping.setUrlMap(Collections.singletonMap(         // (2.2)
            "/ws/echo",
            new EchoWebSocketHandler()
        ));
        mapping.setOrder(-1);                               // (2.3)
        return mapping;
    }

    @Bean                                                   // (3)
    public HandlerAdapter handlerAdapter() {
        return new WebSocketHandlerAdapter();
    }
}

The preceding example can be described as follows:

1. This is the class that is annotated with @Configuration.
2. Here, we have the declaration and setup of the HandlerMapping bean. At point (2.1), we create SimpleUrlHandlerMapping, which allows setting up a path-based mapping, shown at point (2.2), to WebSocketHandler. In order for SimpleUrlHandlerMapping to be handled prior to other HandlerMapping instances, it should have a higher priority, which is what the setOrder call at point (2.3) provides.
3. This is the declaration of the HandlerAdapter bean, which is WebSocketHandlerAdapter. Here, WebSocketHandlerAdapter plays the most important role, since it upgrades the HTTP connection to a WebSocket one and then calls the WebSocketHandler#handle method.

Client-side WebSocket API

Unlike the WebSocket module (which is based on WebMVC), WebFlux provides us with client-side support too. In order to send a WebSocket connection request, we have the WebSocketClient class. WebSocketClient has two central methods to execute WebSocket connections, as shown in the following code sample:

public interface WebSocketClient {

    Mono<Void> execute(
        URI url,
        WebSocketHandler handler
    );

    Mono<Void> execute(
        URI url,
        HttpHeaders headers,
        WebSocketHandler handler
    );
}

As we can see, WebSocketClient uses the same WebSocketHandler interface in order to process messages from the server and send messages back. There are a few WebSocketClient implementations that are related to the server engine, such as the TomcatWebSocketClient implementation or the JettyWebSocketClient implementation. In the following example, we will look at ReactorNettyWebSocketClient:

WebSocketClient client = new ReactorNettyWebSocketClient();

client.execute(
    URI.create("http://localhost:8080/ws/echo"),
    session -> Flux
        .interval(Duration.ofMillis(100))
        .map(String::valueOf)
        .map(session::textMessage)
        .as(session::send)
);

The preceding example shows how we can use ReactorNettyWebSocketClient to wire a WebSocket connection and start sending periodic messages to the server.

To summarize, we learned the basics of non-blocking cross-service communication with WebClient, the reactive WebSocket API, the server-side WebSocket API, and much more. To know more about reactive systems and reactive programming, check out the book Hands-On Reactive Programming in Spring 5, written by Oleh Dokuka and Igor Lozynskyi.

Getting started with React Hooks by building a counter with useState and useEffect
Implementing Dependency Injection in Swift [Tutorial]
Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]


Getting started with React Hooks by building a counter with useState and useEffect

Guest Contributor
12 Feb 2019
7 min read
React 16 added waves of new features, improving the way we build web applications. The most impactful update is the new Hooks feature in version 16.8. Hooks allow us to write functional React components that manage state and side effects, making our code cleaner and providing the ability to easily share functionality. React is not removing class components, but they cause many problems and are a detriment to upcoming code optimizations. The vision for Hooks is that all new components will be written using the API, resulting in more scalable web applications with better code. This tutorial will walk you through Hooks step by step and teach the core hook functionality by building a counter app.

An overview of hooks

Hooks provide the ability to manage state and side effects in functional components while also providing a simple interface to control the component lifecycle. The 4 built-in hooks provided by React are useState, useEffect, useReducer, and useContext:

useState replaces the need for this.state used in class components.
useEffect manages side effects of the app by controlling the componentDidMount, componentDidUpdate, and componentWillUnmount lifecycle methods.
useContext allows us to subscribe to the React context.
useReducer is similar to useState but allows for more complex state updates.

The two main hook functions that you will use are useState and useEffect, which manage the standard React state and lifecycle. useReducer is used to manage more complex state, and useContext is a hook to pass values from the global React context to a component. With the core specification updating frequently, it's essential to find tutorials to learn React.

You can also build your own custom hooks, which can contain the primitive hooks exposed by React. You are able to extract component state into reusable functions that can be accessed by any component. Higher-order components and render props have traditionally been the way to share functionality, but these methods can lead to a bloated component tree with a confusing glob of nested React elements. Hooks offer a straightforward way to DRY out your code by simply importing the custom hook function into your component.

Building a counter with hooks

To build our counter, we will use Create React App to bootstrap the application. You can install the package globally or use npx from the command line:

npx create-react-app react-hooks-counter
cd react-hooks-counter

React Hooks is a brand new feature, so ensure you have v16.8.x installed. Inside your package.json, react and react-dom should both be at version 16.8.0 or above; if not, update them and reinstall using the yarn command.

The foundation of hooks is that they are utilized inside functional components. To start, let's convert the boilerplate file inside src/App.js to a functional component and remove the content. At the top of the file, we can import useState and useEffect from React:

import React, { useState, useEffect } from 'react';

The most straightforward hook is useState, since its purpose is to maintain a single value, so let's begin there. The function takes an initial value and returns an array, with the item at index 0 containing the state value, and the item at index 1 containing a function to update the value. We will initialize our count to 0 and name the return variables count and setCount:

const [count, setCount] = useState(0);

NOTE: The returned value of useState is an array.
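The original article illustrated this stage with a screenshot that is not reproduced here, so the following is a rough sketch of where the component is headed; it also previews the display and the first increment button described in the next paragraphs. It is my own approximation, written as TypeScript-compatible JSX, not the author's exact code:

import React, { useState } from 'react';

function App() {
  // useState(0) returns an array:
  //   index 0 -> the current state value (0 on the first render)
  //   index 1 -> the function that updates that value
  // Array destructuring (discussed next) names those two elements directly.
  const [count, setCount] = useState(0);

  // Display the count and a button that updates it via setCount.
  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment by 1</button>
    </div>
  );
}

export default App;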
To simplify the syntax, we use array destructuring to extract the elements at index 0 and index 1. Inside our rendered React component, we display the count and provide a button that increments the count by 1 using setCount. With a single function, we have eliminated the need to have a class component along with this.state and this.setState to manage our data. Every time you click the increment button, the count will increase by 1. Since we are using a hook, React recognizes this change in state and will re-render the DOM with the updated value.

To demonstrate the extensibility of the state updates, we will add buttons to increment the count by 2, 5, and 10 as well. We will also DRY out our code by storing these values in an array. We iterate over this array using the .map() function, which returns an array of React components; React treats these as sibling elements in the DOM. You are now able to increment the count by different values.

Now we will integrate the useEffect hook. This hook enables you to manage side effects and handle asynchronous events. The most notable and frequently used side effect is an API call. We will mimic the async nature of an API call using a setTimeout function. We will make a fake API request on the component's mount that initializes our count to a random integer between 1 and 10 after waiting 1 second. We will also have an additional useEffect that updates the document title (a side effect) with the current count, to show how it responds to a change in state.

The useEffect hook takes a function as an argument. useEffect replaces the componentDidMount, componentDidUpdate, and componentWillUnmount class methods. When the component mounts or its state updates, React will execute the callback function. If your callback function returns a function itself, React will execute that returned function during componentWillUnmount.

First, let's create our effect to update the document title. Inside the body of our function, we declare useEffect with a callback that sets document.title = 'Count = ' + count. When the count state updates, you should see your tab title updating simultaneously.

For the final step, we will create a mock API call that returns an integer to update the count state. We use setTimeout and a function that returns a Promise, because this simulates the time required to wait for an API request to return, and the promise lets us handle the response asynchronously. To mock an API, we create a mockApi function above our component. It returns a promise that resolves to a random integer between 1 and 10.

A common pattern is to make fetch requests in componentDidMount. To reproduce this in our functional component, we will add another useState to manage a hasFetched variable: const [hasFetched, setFetch] = useState(false). This is used to prevent the mockApi from being executed on subsequent updates. Our fetch logic will be an async function, so we will use async/await to handle the result. Inside our useEffect function, we first check whether the fetch has already happened via hasFetched. If it has not, we call mockApi, call setCount with the result to initialize our value, and then flip our hasFetched flag to true.

Visual indicators are essential for UX and provide feedback to your users about the application status. Since we are waiting for an initial count value, we want to hide our buttons and display "Loading…" text on the screen while hasFetched is false. A sketch pulling all of these pieces together follows.
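The article's code screenshots are not reproduced in this excerpt, so here is a hedged reconstruction of the finished component based on the description above. The helper names mockApi, hasFetched, and setFetch come from the text, but the exact implementation details (such as the increments array name) are my approximation rather than the author's final code, written as TypeScript-flavored React:

import React, { useState, useEffect } from 'react';

// Simulated API call: resolves to a random integer between 1 and 10 after ~1 second.
const mockApi = (): Promise<number> =>
  new Promise((resolve) => {
    setTimeout(() => resolve(Math.floor(Math.random() * 10) + 1), 1000);
  });

function App() {
  const [count, setCount] = useState(0);
  const [hasFetched, setFetch] = useState(false);

  // Side effect: keep the document title in sync with the count on every render.
  useEffect(() => {
    document.title = 'Count = ' + count;
  });

  // Side effect: initialize the count from the mock API exactly once.
  useEffect(() => {
    const fetchInitialCount = async () => {
      const initialCount = await mockApi();
      setCount(initialCount);
      setFetch(true);
    };
    if (!hasFetched) {
      fetchInitialCount();
    }
  });

  // The increment values are stored in an array and mapped to sibling buttons.
  const increments = [1, 2, 5, 10];

  return (
    <div>
      <h1>Count: {count}</h1>
      {!hasFetched ? (
        <p>Loading...</p>
      ) : (
        increments.map((value) => (
          <button key={value} onClick={() => setCount(count + value)}>
            Increment by {value}
          </button>
        ))
      )}
    </div>
  );
}

export default App;

Guarding the fetch with hasFetched mirrors the approach described in the text; passing an empty dependency array to useEffect is another common way to run an effect only on mount.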
This results in the following behavior: the loading text is displayed until the mock API resolves, after which the increment buttons appear and the document title tracks the count.

Wrapping Up

This article introduced hooks and showed how to implement useState and useEffect to simplify your class components into simple functional components. While this is a big win for React developers, the power of hooks is fully realized with the ability to combine them to create custom hooks. This allows you to extract logic and build modular functionality that can seamlessly be shared among React components without the overhead of HOCs or render props. You simply import your custom hook function, and any component can implement it. The only caveat is that all hook functions must follow the rules of hooks.

Author Bio

Trey Huffine is a JavaScript fanatic and a software engineer in Silicon Valley building products using React, Node, and Go. He is passionate about making the world a better place through code.

Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]
React 16.8 releases with the stable implementation of Hooks
PrimeReact 3.0.0 is now out with Babylon create-react-app template


5 blog posts that could make you a better Python programmer

Sam Wood
11 Feb 2019
2 min read
Python is one of the most important languages to master. It's top rated, fast growing, and in demand by businesses around the globe. There's a host of excellent insight across the web about how to become a better programmer with Python. Here are five blogs we think you need to read to upgrade your skills and knowledge.

1. A Brief History of Python

Did you know Python is actually older than Java, R, and JavaScript? If you want to be a better Python programmer, it pays to know your history. This quick blog post takes you through the language's journey from Christmas hobby project to its modern ascendancy with version 3.

2. Do you write Python Code or Pythonic Code?

Are you writing code in Python, or code for Python? When people talk about Pythonic code, they mean code that uses Python idioms well, that is, code that feels natural and displays fluency in the language. Are you writing code like you would write Java or C++? This 4-minute blog post gives quick tips on how to make your code Pythonic.

3. The Singleton Python Design Pattern in Depth

The singleton pattern is a powerful design pattern that allows you to create only one instance of data. You'd generally use it for things like the logging class and its subclasses, managing a connection to a database, or read-only singletons that store some global state. This in-depth blog post takes you through the three principal ways to implement singletons, for better Python code.

4. Why is Python so good for artificial intelligence and machine learning? 5 Experts Explain

Python is the breakout language of data, zooming ahead of rival R to be dominant in the field of artificial intelligence and machine learning. But what is it about the programming language that makes it so well suited for this fast-growing field? In this blog post, five artificial intelligence experts all weigh in on what they think makes Python perfect for AI and machine learning.

5. Top 7 Python Programming Books You Need To Read

That's right - we put a list in our list. But if you really want to become a better Python programmer, you'll want to get to grips with this stack of amazing Python books. Whether you're a complete beginner or more experienced, these seven Python titles are the perfect way to upgrade your knowledge.


Implementing Dependency Injection in Swift [Tutorial]

Bhagyashree R
11 Feb 2019
14 min read
In software development, it's always recommended to split the system into loosely coupled modules that can work as independently as possible. Dependency Injection (DI) is a pattern that helps to reach this goal, creating a maintainable and testable system. It is often confused with complex and over-configurable frameworks that permit us to add DI to our code; in reality, it is a simple pattern that can be added without too much effort.

This article is taken from the book Hands-On Design Patterns with Swift by Florent Vilmart, Giordano Scalzo, and Sergio De Simone. The book demonstrates how to apply design patterns and best practices in real-life situations, whether that's for new or already existing Swift projects. You'll begin with a quick refresher on Swift, the compiler, the standard library, and the foundation, followed by the Cocoa design patterns, to follow up with the creational, structural, and behavioral patterns as defined by the GoF. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we'll see what Dependency Injection is, where it comes from, and how it's defined, so that we can then discuss various methods to implement it with a clear understanding of its principles.

Dependency Injection, a primer

Dependency Injection is one of the most misunderstood concepts in computer programming, because its borders are quite blurry and they can overlap with other object-oriented programming concepts. Let's start with a formal definition given by Wikipedia:

"In software engineering, Dependency Injection is a software design pattern that implements inversion of control for resolving dependencies."

To be honest, this is not really clear: what is Inversion of Control? Why is it useful for resolving dependencies?

In procedural programming, each object interacts with all of its collaborators in a direct way and also instantiates them directly. In Inversion of Control, this flow is managed by a third party, usually a framework that calls the objects and receives notifications. An example of this is the implementation of a UI engine. In a UI engine, there are two parts: the Views part and the Models part. The Views part handles all the interaction with the users, such as tapping buttons and rendering labels, whereas the Models part is responsible for business logic. Usually, the application code goes in the Models part, and the connections with the Views are done via callbacks that are called by the engine when the user interacts with a button or a text field. The paradigm changes from an imperative style, where the algorithm is a sequence of actions (do this, then do that), to an event style (when the button is tapped, call the server). The control of the actions is thus inverted: instead of being the model that does things, the model now receives calls.

Inversion of Control is often called the Hollywood Principle. The essence of this principle is, "Don't call us, we'll call you," which is a response you might hear after auditioning for a role in Hollywood. In procedural programming, the flow of the program is determined by modules that are statically connected together: ContactsView talks to ContactsCoreData and ContactsProductionRemoteService, and each object instantiates its next collaborator. In Inversion of Control, ContactsView talks to a generic ContactsStore and a generic ContactsRemoteService whose concrete implementations can change depending on the context.
For example, the implementations can differ between production code and tests; an important role is therefore played by the entity that manages how to create and connect all the objects together.

After having defined the concept of IoC, let's give a simpler definition of DI, by James Shore:

"Dependency Injection" is a 25-dollar term for a 5-cent concept. [...] Dependency Injection means giving an object its instance variables. Really. That's it."

The first principle of the book Design Patterns by the Gang of Four is "Program to an interface, not an implementation", which means that the objects need to know each other only by their interface and not by their implementation. After having defined how all the classes in the software will collaborate with each other, this collaboration can be designed as a graph. The graph could be implemented by connecting together the actual implementations of the classes but, following the first principle mentioned previously, we can do it using the interfaces of the same objects: Dependency Injection is a way of building this graph by passing the concrete classes to the objects.

Four ways to use Dependency Injection

Dependency Injection is used ubiquitously in Cocoa too, and in the following examples, we'll see code snippets both from Cocoa and from typical client-side code. Let's take a look at the following four sections to learn how to use Dependency Injection.

Constructor Injection

The first way to do DI is to pass the collaborators in the constructor, where they are then saved in private properties. Let's take as an example an e-commerce app, whose Basket is handled both locally and remotely. The BasketClient class orchestrates the logic, saves locally in BasketStore, and synchronizes remotely with BasketService:

protocol BasketStore {
    func loadAllProduct() -> [Product]
    func add(product: Product)
    func delete(product: Product)
}

protocol BasketService {
    func fetchAllProduct(onSuccess: ([Product]) -> Void)
    func append(product: Product)
    func remove(product: Product)
}

struct Product {
    let id: String
    let name: String
    //...
}

Then, in the constructor of BasketClient, the concrete implementations of the protocols are passed:

class BasketClient {
    private let service: BasketService
    private let store: BasketStore

    init(service: BasketService, store: BasketStore) {
        self.service = service
        self.store = store
    }

    func add(product: Product) {
        store.add(product: product)
        service.append(product: product)
        calculateAppliedDiscount()
        //...
    }

    // ...

    private func calculateAppliedDiscount() {
        // ...
    }
}

In Cocoa and Cocoa Touch, the Apple foundation libraries, there are a few examples of this pattern. A notable example is NSPersistentStore in Core Data:

class NSPersistentStore: NSObject {
    init(persistentStoreCoordinator root: NSPersistentStoreCoordinator?,
         configurationName name: String?,
         URL url: NSURL,
         options: [NSObject: AnyObject]?)

    var persistentStoreCoordinator: NSPersistentStoreCoordinator? { get }
}

In the end, Dependency Injection as defined by James Shore is all here: define the collaborators with protocols and then pass them in the constructor. This is the best way to do DI. After construction, the object is fully formed and has a consistent state. Also, by just looking at the signature of init, the dependencies of this object are clear. Constructor Injection is not only the most effective approach, it is also the easiest. The only open question is who has to create the object graph: the parent object? The AppDelegate? We'll discuss that point in the book's Where to bind the dependencies section.
Property Injection

We have already agreed that Constructor Injection is the best way to do DI, so why bother finding other methods? Well, it is not always possible to define the constructor the way we want. A notable example is doing DI with ViewControllers that are defined in storyboards. Given that we have a BasketViewController that orchestrates the service and the store, we must pass them as properties:

class BasketViewController: UIViewController {
    var service: BasketService?
    var store: BasketStore?

    // ...
}

This pattern is less elegant than the previous one:

The ViewController isn't in the right state until all the properties are set.
Properties introduce mutability, and immutable classes are simpler and more efficient.
The properties must be defined as optional, leading to question marks everywhere.
They are set by an external object, so they must be writeable, and this could potentially permit something else to overwrite the value set at the beginning.
There is no way to enforce the validity of the setup at compile time.

However, something can be done:

The properties can be defined as implicitly unwrapped optionals and then required in viewDidLoad. This isn't a static check, but at least they are checked at the first sensible opportunity, which is when the view controller has been loaded.
A function that sets all the properties at once prevents us from partially defining the collaborator list.

The BasketViewController class must then be written as:

class BasketViewController: UIViewController {
    private var service: BasketService!
    private var store: BasketStore!

    func set(service: BasketService, store: BasketStore) {
        self.service = service
        self.store = store
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        precondition(service != nil, "BasketService required")
        precondition(store != nil, "BasketStore required")
        // ...
    }
}

Property Injection also permits us to have overridable properties with a default value. This can be useful in the case of testing. Let's consider a dependency on a wrapper around the time:

class CheckoutViewController: UIViewController {
    var time: Time = DefaultTime()
}

protocol Time {
    func now() -> Date
}

struct DefaultTime: Time {
    func now() -> Date {
        return Date()
    }
}

In the production code, we don't need to do anything, while in the testing code we can now inject a particular date instead of always returning the current time. This permits us to test how the software will behave in the future, or in the past.

A dependency defined in the same module or framework is Local. When it comes from another module or framework, it's Foreign. A Local dependency can be used as a default value, but a Foreign one cannot, otherwise it would introduce a strong dependency between the modules.

Method Injection

This pattern just passes a collaborator in the method:

class BasketClient {
    func add(product: Product, to store: BasketStore) {
        store.add(product: product)
        calculateAppliedDiscount()
        //...
    }

    // ...

    private func calculateAppliedDiscount() {
        // ...
    }
}

This is useful when the object has several collaborators, but most of them are just temporary and it isn't worth having the relationship set up for the whole life cycle of the object.

Ambient Context

The final pattern, Ambient Context, is similar to the Singleton.
We still have a single instance as a static variable, but the class has multiple subclasses with different behaviors, and the static variable is writeable with a static function:

class Analytics {
    static private(set) var instance: Analytics = NoAnalytics()

    static func setAnalytics(analytics: Analytics) {
        self.instance = analytics
    }

    func track(event: Event) {
        fatalError("Implement in a subclass")
    }
}

class NoAnalytics: Analytics {
    override func track(event: Event) {}
}

class GoogleAnalytics: Analytics {
    override func track(event: Event) {
        //...
    }
}

class AdobeAnalytics: Analytics {
    override func track(event: Event) {
        //...
    }
}

struct Event {
    //...
}

This pattern should be used only for universal dependencies representing cross-cutting concerns, such as analytics, logging, and times and dates. The pattern has some advantages: the dependencies are always accessible, and adopting them doesn't require changes to the API. It works well for cross-cutting concerns, but it doesn't fit other cases where the object isn't unique. Also, it makes the dependency implicit, and it represents a global mutable state that can sometimes lead to issues that are difficult to debug.

DI anti-patterns

When we try to implement a new technique, it is quite easy to lose control and implement it in the wrong way. Let's see, then, the most common anti-patterns in Dependency Injection.

Control Freak

The first one is pretty easy to spot: we are not using Injection at all. Instead of being injected, the dependency is instantiated inside the object that depends on it:

class FeaturedProductsController {
    private let restProductsService: ProductsService

    init() {
        self.restProductsService = RestProductsService(configuration: Configuration.loadFromBundleId())
    }
}

In this example, ProductsService could have been injected in the constructor, but it is instantiated there instead. Mark Seemann, in his book Dependency Injection in .NET (Chapter 5.1, DI anti-patterns), calls this Control Freak because it describes a class that will not relinquish its dependencies. Control Freak is the dominant DI anti-pattern, and it happens every time a class directly instantiates its dependencies instead of relying on Inversion of Control for that. In the case of the example, even though the rest of the class is programmed against an interface, there is no way of changing the actual implementation of ProductsService: the concrete class will always be RestProductsService. The only way to change it is to modify the code and compile it again, but with DI it should be possible to change the behavior at runtime.

Sometimes, someone tries to fix the Control Freak anti-pattern using the factory pattern, but the reality is that the only way to fix it is to apply Inversion of Control to the dependency and inject it in the constructor:

class FeaturedProductsController {
    private let productsService: ProductsService

    init(service: ProductsService) {
        self.productsService = service
    }
}

As already mentioned, Control Freak is the most common DI anti-pattern; pay particular attention so you don't slip into its trap.

Bastard Injection

Constructor overloads are fairly common in Swift codebases, but these could lead to the Bastard Injection anti-pattern.
A common scenario is when we have a constructor that lets us inject a Test Double, but also gives the parameter a default value:

class TodosService {
    let repository: TodosRepository

    init(repository: TodosRepository = SqlLiteTodosRepository()) {
        self.repository = repository
    }
}

The biggest problem here is when the default implementation is a Foreign dependency, which is a class defined in another module; this creates a strong relationship between the two modules, making it impossible to reuse the class without including the dependent module too. The reason someone is tempted to write a default implementation is pretty obvious, since it is an easy way to instantiate the class with just TodosService(), without the need for a Composition Root or something similar. However, this nullifies the benefits of DI, and it should be avoided by removing the default implementation and injecting the dependency.

Service Locator

The final anti-pattern that we will explore is the most dangerous one: the Service Locator. It's funny because this is often considered a good pattern and is widely used, even in the famous Spring framework. Originally, the Service Locator pattern was defined in Microsoft patterns & practices' Enterprise Library, as Mark Seemann writes in his book Dependency Injection in .NET (Chapter 5.4, Service Locator), but now he advocates strongly against it.

Service Locator is a common name for a service that we can query for different objects that were previously registered in it. As mentioned, it is a tricky one because it makes everything seem OK, but in fact it nullifies all the advantages of Dependency Injection:

let locator = ServiceLocator.instance
locator.register(
    SqlLiteTodosRepository(),
    forType: TodosRepository.self)

class TodosService {
    private let repository: TodosRepository

    init() {
        let locator = ServiceLocator.instance
        self.repository = locator.resolve(TodosRepository.self)
    }
}

Here we have a service locator as a singleton, with which we register the classes we want to resolve. Instead of injecting the class into the constructor, we just query the service for it. It looks like the Service Locator has all the advantages of Dependency Injection: it provides testability and extensibility, since we can use different implementations without changing the client, and it also enables parallel development and separates configuration from usage. But it has some major disadvantages.

With DI, the dependencies are explicit; it's enough to look at the signature of the constructor or the exposed properties to understand what the dependencies of a class are. With a Service Locator, these dependencies are implicit, and the only way to find them is to inspect the implementation, which breaks encapsulation. Also, all the classes depend on the Service Locator, and this makes the code tightly coupled with it. If we want to reuse a class, besides that class we also need to add the Service Locator to our project, which could be in a different module, forcing us to add the whole module as a dependency when we just wanted to use one class. A Service Locator can also give us the impression that we are not using DI at all, because all the dependencies are hidden inside the classes.

In this article, we covered the different flavors of Dependency Injection and examined how each can solve a particular set of problems in real-world scenarios. If you found this post useful, do check out the book Hands-On Design Patterns with Swift.
From learning about the most sought-after design patterns to comprehensive coverage of architectural patterns and code testing, this book is all you need to write clean, reusable code in Swift.

Implementing Dependency Injection in Google Guice [Tutorial]
Implementing Dependency Injection in Spring [Tutorial]
Dagger 2.17, a dependency injection framework for Java and Android, is now out!

Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]

Bhagyashree R
10 Feb 2019
10 min read
The basic idea behind Reactive Programming (RP) is that of asynchronous data streams, such as the stream of events that are generated by mouse clicks, or a piece of data coming through a network connection. Anything can be a stream; there are really no constraints. The only property that makes it sensible to model an entity as a stream is its ability to change at unpredictable times. The other half of the picture is the idea of observers, which you can think of as agents that subscribe to receive notifications of new events in a stream. In between, you have ways of transforming those streams, combining them, creating new streams, filtering them, and so on.

You could look at RP as a generalization of Key-Value Observing (KVO), a mechanism that has been present in the macOS and iOS SDKs since their inception. KVO enables objects to receive notifications about changes to other objects' properties to which they have subscribed as observers. An observer object can register by providing a keypath (hence the name) into the observed object.

This article is taken from the book Hands-On Design Patterns with Swift by Florent Vilmart, Giordano Scalzo, and Sergio De Simone. This book demonstrates how to apply design patterns and best practices in real-life situations, whether that's for new or already existing Swift projects. You'll begin with a quick refresher on Swift, the compiler, the standard library, and the foundation, followed by the Cocoa design patterns, to follow up with the creational, structural, and behavioral patterns as defined by the GoF. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we will give a brief introduction to one popular framework for RP in Swift, RxSwift, and its Cocoa counterpart, RxCocoa, which makes Cocoa ready for use with RP. RxSwift is not the only RP framework for Swift. Another popular one is ReactiveCocoa, but we think that, once you have understood the basic concepts behind one, it won't be hard to switch to the other.

Using RxSwift and RxCocoa in reactive programming

RxSwift aims to be fully compatible with Rx, Reactive Extensions for Microsoft .NET, a mature reactive programming framework that has been ported to many languages, including Java, Scala, JavaScript, and Clojure. Adopting RxSwift thus has the advantage that it will be quite natural for you to use the same approach and concepts in another language for which Rx is available, in case you need to.

If you want to play with RxSwift, the first step is creating an Xcode project and adding the RxSwift dependency. If you use the Swift Package Manager, just make sure your Package.swift file lists RxSwift among its dependencies.

If you use CocoaPods, add the following dependencies to your Podfile:

pod 'RxSwift', '~> 4.0'
pod 'RxCocoa', '~> 4.0'

Then, run this command:

pod install

Finally, if you use Carthage, add this to Cartfile:

github "ReactiveX/RxSwift" ~> 4.0

Then, run this command to finish:

carthage update

As you can see, we have also included RxCocoa as a dependency. RxCocoa is a framework that extends Cocoa to make it ready to be used with RxSwift. For example, RxCocoa will make many properties of your Cocoa objects observable without requiring you to add a single line of code. So, if you have a UI object whose position changes depending on some user action, you can observe its center property and react to its evolution.
Observables and observers Now that RxSwift is set up in our project, let's start with a few basic concepts before diving into some code: A stream in RxSwift is represented through Observable<ObservableType>, which is equivalent to Sequence, with the added capability of being able to receive new elements asynchronously. An observable stream in Rx can emit three different events: next, error, and complete. When an observer registers for a stream, the stream begins to emit next events, and it does so until an error or complete event is generated, in which case the stream stops emitting events. You subscribe to a stream by calling ObservableType.subscribe, which is equivalent to Sequence.makeIterator. However, you do not use that iterator directly, as you would, to iterate a sequence; rather, you provide a callback that will receive new events. When you are done with a stream, you should release it, along with all resources it allocated, by calling dispose. To make it easier not to forget releasing streams, RxSwift provides DisposeBag and takeUntil. Make sure that you use one of them in your production code. All of this can be translated into the following code snippet: let aDisposableBag = DisposeBag() let thisIsAnObservableStream = Observable.from([1, 2, 3, 4, 5, 6]) let subscription = thisIsAnObservableStream.subscribe( onNext: { print("Next value: \($0)") }, onError: { print("Error: \($0)") }, onCompleted: { print("Completed") }) // add the subscription to the disposable bag // when the bag is collected, the subscription is disposed subscription.disposed(by: aDisposableBag) // if you do not use a disposable bag, do not forget this! // subscription.dispose() Usually, your view controller is where you create your subscriptions, while, in our example thisIsAnObservableStream, observers and observables fit into your view model. In general, you should make all of your model properties observable, so your view controller can subscribe to those observables to update the UI when need be. In addition to being observable, some properties of your view model could also be observers. For example, you could have a UITextField or UISearchBar in your app UI and a property of your view model could observe its text property. Based on that value, you could display some relevant information, for example, the result of a query. When a property of your view model is at the same time an observable and an observer, RxSwift provides you with a different role for your entity—that of a Subject. There exist multiple categories of subjects, categorized based on their behavior, so you will see BehaviourSubject, PublishSubject, ReplaySubject, and Variable. They only differ in the way that they make past events available to their observers. Before looking at how these new concepts may be used in your program, we need to introduce two further concepts: transformations and schedulers. Transformations Transformations allow you to create new observable streams by combining, filtering, or transforming the events emitted by other observable streams. The available transformations include the following: map: This transforms each event in a stream into another value before any observer can observe that value. For example, you could map the text property of a UISearchBar into an URL to be used to query some remote service. flatMap: This transforms each event into another Observable. For example, you could map the text property of a UISearchBar into the result of an asynchronous query. 
scan: This is similar to the reduce Swift operator on sequences. It will accumulate each new event into a partial result based on all previously emitted events and emit that result. filter: This enables filtering of emitted events based on a condition to be verified. merge: This merges two streams of events by preserving their ordering. zip: This combines two streams of events by creating a new stream whose events are tuples made by the successive events from the two original streams. Schedulers Schedulers allow you to control to which queue RxSwift operators are dispatched. By default, all RxSwift operations are executed on the same queue where the subscription was made, but by using schedulers with observeOn and subscribeOn, you can alter that behavior. For example, you could subscribe to a stream whose events are emitted from a background queue, possibly the results of some lengthy tasks, and observe those events from the main thread to be able to update the UI based on those tasks' outcomes. Recalling our previous example, this is how we could use observeOn and subscribeOn as described: let aDisposableBag = DisposeBag() let thisIsAnObservableStream = Observable.from([1, 2, 3, 4, 5, 6]) .observeOn(MainScheduler.instance).map { n in print("This is performed on the main scheduler") } let subscription = thisIsAnObservableStream .subscribeOn(ConcurrentDispatchQueueScheduler(qos: .background)) .subscribe(onNext: { event in print("Handle \(event) on main thread? \(Thread.isMainThread)") }, onError: { print("Error: \($0). On main thread? \(Thread.isMainThread)") }, onCompleted: { print("Completed. On main thread? \(Thread.isMainThread)") }) subscription.disposed(by: aDisposableBag) Asynchronous networking – an example Now we can take a look at a slightly more compelling example, showing off the power of reactive programming. Let's get back to our previous example: a UISearchBar collects user input that a view controller observes, to update a table displaying the result of a remote query. This is a pretty standard UI design. Using RxCocoa, we can observe the text property of the search bar and map it into a URL. For example, if the user inputs a GitHub username, the URLRequest could retrieve a list of all their repositories. We then further transform the URLRequest into another observable using flatMap. The remoteStream function is defined in the following snippet, and simply returns an observable containing the result of the network query. Finally, we bind the stream returned by flatMap to our tableView, again using one of the methods provided by RxCocoa, to update its content based on the JSON data passed in record: searchController.searchBar.rx.text.asObservable() .map(makeURLRequest) .flatMap(remoteStream) .bind(to: tableView.rx.items(cellIdentifier: cellIdentifier)) { index, record, cell in cell.textLabel?.text = "" // update here the table cells } .disposed(by: disposeBag) This looks all pretty clear and linear. The only bit left out is the networking code. This is a pretty standard code, with the major difference that it returns an observable wrapping a URLSession.dataTask call. The following code shows the standard way to create an observable stream by calling observer.onNext and passing the result of the asynchronous task: func remoteStream<T: Codable>(_ request: URLRequest) -> Observable<T> { return Observable<T>.create { observer in let task = URLSession.shared.dataTask(with: request) { (data, response, error) in do { let records: T = try JSONDecoder().decode(T.self, from: data ?? 
Data())
                for record in records {
                    observer.onNext(record)
                }
            } catch let error {
                observer.onError(error)
            }
            observer.onCompleted()
        }
        task.resume()
        return Disposables.create {
            task.cancel()
        }
    }
}

As a final bit, we could consider the following variant: we want to store the UISearchBar text property value in our model, instead of simply retrieving the information associated with it from our remote service. To do so, we add a username property to our view model and recognize that it should, at the same time, be an observer of the UISearchBar text property as well as an observable, since it will be observed by the view controller to retrieve the associated information whenever it changes. This is the relevant code for our view model:

import Foundation
import RxSwift
import RxCocoa

class ViewModel {
    var username = Variable<String>("")

    init() {
        setup()
    }

    func setup() {
        // ...
    }
}

The view controller will need to be modified as in the following code block, where you can see that we bind the UISearchBar text property to our view model's username property; then, we observe the latter, as we did previously with the search bar:

searchController.searchBar.rx.observe(String.self, "text")
    .bind(to: viewModel.username)
    .disposed(by: disposeBag)

viewModel.username.asObservable()
    .map(makeURLRequest)
    .flatMap(remoteStream)
    .bind(to: tableView.rx.items(cellIdentifier: cellIdentifier)) { index, record, cell in
        cell.textLabel?.text = "" // update here the table cells
    }
    .disposed(by: disposeBag)

With this last example, our short introduction to RxSwift is complete. There is much more to be said, though; a whole book could be devoted to RxSwift/RxCocoa and how they can be used to write Swift apps!

If you found this post useful, do check out the book, Hands-On Design Patterns with Swift. This book provides a complete overview of how to implement classic design patterns in Swift. It will guide you to build Swift applications that are scalable, faster, and easier to maintain.

Reactive Extensions: Ways to create RxJS Observables [Tutorial]
What’s new in Vapor 3, the popular Swift based web framework
Exclusivity enforcement is now complete in Swift 5

Creating a basic Julia project for loading and saving data [Tutorial]

Prasad Ramesh
09 Feb 2019
11 min read
In this article, we take a look at the common Iris dataset using simple statistical methods. Then we create a simple Julia project to load and save data from the Iris dataset. This article is an excerpt from a book written by Adrian Salceanu titled Julia Programming Projects. In this book, you will develop and run a web app using Julia and the HTTP package among other things. To start, we'll load, the Iris flowers dataset, from the RDatasets package and we'll manipulate it using standard data analysis functions. Then we'll look more closely at the data by employing common visualization techniques. And finally, we'll see how to persist and (re)load our data. But, in order to do that, first, we need to take a look at some of the language's most important building blocks. Here are the external packages used in this tutorial and their specific versions: CSV@v0.4.3 DataFrames@v0.15.2 Feather@v0.5.1 Gadfly@v1.0.1 IJulia@v1.14.1 JSON@v0.20.0 RDatasets@v0.6.1 In order to install a specific version of a package you need to run: pkg> add PackageName@vX.Y.Z For example: pkg> add IJulia@v1.14.1 Alternatively, you can install all the used packages by downloading the Project.toml file using pkg> instantiate as follows: julia> download("https://raw.githubusercontent.com/PacktPublishing/Julia-Programming-Projects/master/Chapter02/Project.toml", "Project.toml") pkg> activate . pkg> instantiate Using simple statistics to better understand our data Now that it's clear how the data is structured and what is contained in the collection, we can get a better understanding by looking at some basic stats. To get us started, let's invoke the describe function: julia> describe(iris) The output is as follows: This function summarizes the columns of the iris DataFrame. If the columns contain numerical data (such as SepalLength), it will compute the minimum, median, mean, and maximum. The number of missing and unique values is also included. The last column reports the type of data stored in the row. A few other stats are available, including the 25th and the 75th percentile, and the first and the last values. We can ask for them by passing an extra stats argument, in the form of an array of symbols: julia> describe(iris, stats=[:q25, :q75, :first, :last]) The output is as follows: Any combination of stats labels is accepted. These are all the options—:mean, :std, :min, :q25, :median, :q75, :max, :eltype, :nunique, :first, :last, and :nmissing. In order to get all the stats, the special :all value is accepted: julia> describe(iris, stats=:all) The output is as follows: We can also compute these individually by using Julia's Statistics package. For example, to calculate the mean of the SepalLength column, we'll execute the following: julia> using Statistics julia> mean(iris[:SepalLength]) 5.843333333333334 In this example, we use iris[:SepalLength] to select the whole column. The result, not at all surprisingly, is the same as that returned by the corresponding describe() value. 
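As a quick illustration of how these building blocks combine, the following sketch groups the dataset by species and prints the mean sepal length of each group. It assumes that the groupby function from the DataFrames version listed above behaves as documented; the exact numbers will depend on the dataset you loaded.

julia> for sub in groupby(iris, :Species)
           println(sub[1, :Species], " => ", mean(sub[:SepalLength]))
       end

For the Iris data, setosa should report a clearly smaller mean sepal length than versicolor and virginica.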
In a similar way we can compute the median(): julia> median(iris[:SepalLength]) 5.8 And there's (a lot) more, such as, for instance, the standard deviation std(): julia> std(iris[:SepalLength]) 0.828066127977863 Or, we can use another function from the Statistics package, cor(), in a simple script to help us understand how the values are correlated: julia> for x in names(iris)[1:end-1] for y in names(iris)[1:end-1] println("$x \t $y \t $(cor(iris[x], iris[y]))") end println("-------------------------------------------") end Executing this snippet will produce the following output: SepalLength SepalLength 1.0 SepalLength SepalWidth -0.11756978413300191 SepalLength PetalLength 0.8717537758865831 SepalLength PetalWidth 0.8179411262715759 ------------------------------------------------------------ SepalWidth SepalLength -0.11756978413300191 SepalWidth SepalWidth 1.0 SepalWidth PetalLength -0.42844010433053953 SepalWidth PetalWidth -0.3661259325364388 ------------------------------------------------------------ PetalLength SepalLength 0.8717537758865831 PetalLength SepalWidth -0.42844010433053953 PetalLength PetalLength 1.0 PetalLength PetalWidth 0.9628654314027963 ------------------------------------------------------------ PetalWidth SepalLength 0.8179411262715759 PetalWidth SepalWidth -0.3661259325364388 PetalWidth PetalLength 0.9628654314027963 PetalWidth PetalWidth 1.0 ------------------------------------------------------------ The script iterates over each column of the dataset with the exception of Species (the last column, which is not numeric), and generates a basic correlation table. The table shows strong positive correlations between SepalLength and PetalLength (87.17%), SepalLength and PetalWidth (81.79%), and PetalLength and PetalWidth (96.28%). There is no strong correlation between SepalLength and SepalWidth. We can use the same script, but this time employ the cov() function to compute the covariance of the values in the dataset: julia> for x in names(iris)[1:end-1] for y in names(iris)[1:end-1] println("$x \t $y \t $(cov(iris[x], iris[y]))") end println("--------------------------------------------") end This code will generate the following output: SepalLength SepalLength 0.6856935123042507 SepalLength SepalWidth -0.04243400447427293 SepalLength PetalLength 1.2743154362416105 SepalLength PetalWidth 0.5162706935123043 ------------------------------------------------------- SepalWidth SepalLength -0.04243400447427293 SepalWidth SepalWidth 0.189979418344519 SepalWidth PetalLength -0.3296563758389262 SepalWidth PetalWidth -0.12163937360178968 ------------------------------------------------------- PetalLength SepalLength 1.2743154362416105 PetalLength SepalWidth -0.3296563758389262 PetalLength PetalLength 3.1162778523489933 PetalLength PetalWidth 1.2956093959731543 ------------------------------------------------------- PetalWidth SepalLength 0.5162706935123043 PetalWidth SepalWidth -0.12163937360178968 PetalWidth PetalLength 1.2956093959731543 PetalWidth PetalWidth 0.5810062639821031 ------------------------------------------------------- The output illustrates that SepalLength is positively related to PetalLength and PetalWidth, while being negatively related to SepalWidth. SepalWidth is negatively related to all the other values. 
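If you prefer to get these values in one shot rather than printing them pair by pair, you can convert the four numeric columns to a matrix and let cor() and cov() build the full matrices for you. This is a small sketch; it assumes that the convert call we will use later in this article also accepts an explicit element type:

julia> iris_matrix = convert(Matrix{Float64}, iris[:, 1:4])

julia> cor(iris_matrix)

julia> cov(iris_matrix)

The resulting 4x4 matrices hold the same values as the tables printed by the loops above, with rows and columns following the column order of the DataFrame.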
Moving on, if we want a random data sample, we can ask for it like this: julia> rand(iris[:SepalLength]) 7.4 Optionally, we can pass in the number of values to be sampled: julia> rand(iris[:SepalLength], 5) 5-element Array{Float64,1}: 6.9 5.8 6.7 5.0 5.6 We can convert one of the columns to an array using the following: julia> sepallength = Array(iris[:SepalLength]) 150-element Array{Float64,1}: 5.1 4.9 4.7 4.6 5.0 # ... output truncated ... Or we can convert the whole DataFrame to a matrix: julia> irisarr = convert(Array, iris[:,:]) 150×5 Array{Any,2}: 5.1 3.5 1.4 0.2 CategoricalString{UInt8} "setosa" 4.9 3.0 1.4 0.2 CategoricalString{UInt8} "setosa" 4.7 3.2 1.3 0.2 CategoricalString{UInt8} "setosa" 4.6 3.1 1.5 0.2 CategoricalString{UInt8} "setosa" 5.0 3.6 1.4 0.2 CategoricalString{ UInt8} "setosa" # ... output truncated ... Loading and saving our data Julia comes with excellent facilities for reading and storing data out of the box. Given its focus on data science and scientific computing, support for tabular-file formats (CSV, TSV) is first class. Let's extract some data from our initial dataset and use it to practice persistence and retrieval from various backends. We can reference a section of a DataFrame by defining its bounds through the corresponding columns and rows. For example, we can define a new DataFrame composed only of the PetalLength and PetalWidth columns and the first three rows: julia> iris[1:3, [:PetalLength, :PetalWidth]] 3×2 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ ├─────┼─────────────┼────────────┤ │ 1 │ 1.4 │ 0.2 │ │ 2 │ 1.4 │ 0.2 │ │ 3 │ 1.3 │ 0.2 │ The generic indexing notation is dataframe[rows, cols], where rows can be a number, a range, or an Array of boolean values where true indicates that the row should be included: julia> iris[trues(150), [:PetalLength, :PetalWidth]] This snippet will select all the 150 rows since trues(150) constructs an array of 150 elements that are all initialized as true. The same logic applies to cols, with the added benefit that they can also be accessed by name. Armed with this knowledge, let's take a sample from our original dataset. It will include some 10% of the initial data and only the PetalLength, PetalWidth, and Species columns: julia> test_data = iris[rand(150) .<= 0.1, [:PetalLength, :PetalWidth, :Species]] 10×3 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ Species │ ├─────┼─────────────┼────────────┼──────────────┤ │ 1 │ 1.1 │ 0.1 │ "setosa" │ │ 2 │ 1.9 │ 0.4 │ "setosa" │ │ 3 │ 4.6 │ 1.3 │ "versicolor" │ │ 4 │ 5.0 │ 1.7 │ "versicolor" │ │ 5 │ 3.7 │ 1.0 │ "versicolor" │ │ 6 │ 4.7 │ 1.5 │ "versicolor" │ │ 7 │ 4.6 │ 1.4 │ "versicolor" │ │ 8 │ 6.1 │ 2.5 │ "virginica" │ │ 9 │ 6.9 │ 2.3 │ "virginica" │ │ 10 │ 6.7 │ 2.0 │ "virginica" │ What just happened here? The secret in this piece of code is rand(150) .<= 0.1. It does a lot—first, it generates an array of random Float values between 0 and 1; then, it compares the array, element-wise, against 0.1 (which represents 10% of 1); and finally, the resultant Boolean array is used to filter out the corresponding rows from the dataset. It's really impressive how powerful and succinct Julia can be! In my case, the result is a DataFrame with the preceding 10 rows, but your data will be different since we're picking random rows (and it's quite possible you won't have exactly 10 rows either). Saving and loading using tabular file formats We can easily save this data to a file in a tabular file format (one of CSV, TSV, and others) using the CSV package. 
We'll have to add it first and then call the write method: pkg> add CSV julia> using CSV julia> CSV.write("test_data.csv", test_data) And, just as easily, we can read back the data from tabular file formats, with the corresponding CSV.read function: julia> td = CSV.read("test_data.csv") 10×3 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ Species │ ├─────┼─────────────┼────────────┼──────────────┤ │ 1 │ 1.1 │ 0.1 │ "setosa" │ │ 2 │ 1.9 │ 0.4 │ "setosa" │ │ 3 │ 4.6 │ 1.3 │ "versicolor" │ │ 4 │ 5.0 │ 1.7 │ "versicolor" │ │ 5 │ 3.7 │ 1.0 │ "versicolor" │ │ 6 │ 4.7 │ 1.5 │ "versicolor" │ │ 7 │ 4.6 │ 1.4 │ "versicolor" │ │ 8 │ 6.1 │ 2.5 │ "virginica" │ │ 9 │ 6.9 │ 2.3 │ "virginica" │ │ 10 │ 6.7 │ 2.0 │ "virginica" │ Just specifying the file extension is enough for Julia to understand how to handle the document (CSV, TSV), both when writing and reading. Working with Feather files Feather is a binary file format that was specially designed for storing data frames. It is fast, lightweight, and language-agnostic. The project was initially started in order to make it possible to exchange data frames between R and Python. Soon, other languages added support for it, including Julia. Support for Feather files does not come out of the box, but is made available through the homonymous package. Let's go ahead and add it and then bring it into scope: pkg> add Feather julia> using Feather Now, saving our DataFrame is just a matter of calling Feather.write: julia> Feather.write("test_data.feather", test_data) Next, let's try the reverse operation and load back our Feather file. We'll use the counterpart read function: julia> Feather.read("test_data.feather") 10×3 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ Species │ ├─────┼─────────────┼────────────┼──────────────┤ │ 1 │ 1.1 │ 0.1 │ "setosa" │ │ 2 │ 1.9 │ 0.4 │ "setosa" │ │ 3 │ 4.6 │ 1.3 │ "versicolor" │ │ 4 │ 5.0 │ 1.7 │ "versicolor" │ │ 5 │ 3.7 │ 1.0 │ "versicolor" │ │ 6 │ 4.7 │ 1.5 │ "versicolor" │ │ 7 │ 4.6 │ 1.4 │ "versicolor" │ │ 8 │ 6.1 │ 2.5 │ "virginica" │ │ 9 │ 6.9 │ 2.3 │ "virginica" │ │ 10 │ 6.7 │ 2.0 │ "virginica" │ Yeah, that's our sample data all right! In order to provide compatibility with other languages, the Feather format imposes some restrictions on the data types of the columns. You can read more about Feather in the package's official documentation at https://juliadata.github.io/Feather.jl/latest/index.html. Saving and loading with MongoDB Let's also take a look at using a NoSQL backend for persisting and retrieving our data. In order to follow through this part, you'll need a working MongoDB installation. You can download and install the correct version for your operating system from the official website, at https://www.mongodb.com/download-center?jmp=nav#community. I will use a Docker image which I installed and started up through Docker's Kitematic (available for download at https://github.com/docker/kitematic/releases). Next, we need to make sure to add the Mongo package. The package also has a dependency on LibBSON, which is automatically added. LibBSON is used for handling BSON, which stands for Binary JSON, a binary-encoded serialization of JSON-like documents. While we're at it, let's add the JSON package as well; we will need it. I'm sure you know how to do that by now—if not, here is a reminder: pkg> add Mongo, JSON At the time of writing, Mongo.jl support for Julia v1 was still a work in progress. This code was tested using Julia v0.6. Easy! 
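Before wiring up MongoDB, it helps to see in isolation the JSON round trip that we will rely on to turn a DataFrame into a Dict and back. This is a standalone sketch using only the JSON package; the toy dictionary is made up for illustration:

julia> import JSON

julia> json_str = JSON.json(Dict("PetalLength" => [1.4, 1.9, 4.6], "Species" => ["setosa", "setosa", "versicolor"]))

julia> JSON.parse(json_str)

JSON.json serializes the structure to a string, and JSON.parse rebuilds a Dict from it; this is exactly the trick we will apply to test_data in a moment.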
Let's let Julia know that we'll be using all these packages: julia> using Mongo, LibBSON, JSON We're now ready to connect to MongoDB: julia> client = MongoClient() Once successfully connected, we can reference a dataframes collection in the db database: julia> storage = MongoCollection(client, "db", "dataframes") Julia's MongoDB interface uses dictionaries (a data structure called Dict in Julia) to communicate with the server. For now, all we need to do is to convert our DataFrame to such a Dict. The simplest way to do it is to sequentially serialize and then deserialize the DataFrame by using the JSON package. It generates a nice structure that we can later use to rebuild our DataFrame: julia> datadict = JSON.parse(JSON.json(test_data)) Thinking ahead, to make any future data retrieval simpler, let's add an identifier to our dictionary: julia> datadict["id"] = "iris_test_data" Now we can insert it into Mongo: julia> insert(storage, datadict) In order to retrieve it, all we have to do is query the Mongo database using the "id" field we've previously configured: Julia> data_from_mongo = first(find(storage, query("id" => "iris_test_data"))) We get a BSONObject, which we need to convert back to a DataFrame. Don't worry, it's straightforward. First, we create an empty DataFrame: julia> df_from_mongo = DataFrame() 0×0 DataFrames.DataFrame Then we populate it using the data we retrieved from Mongo: for i in 1:length(data_from_mongo["columns"]) df_from_mongo[Symbol(data_from_mongo["colindex"]["names"][i])] = Array(data_from_mongo["columns"][i]) end julia> df_from_mongo 10×3 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ Species │ ├─────┼─────────────┼────────────┼──────────────┤ │ 1 │ 1.1 │ 0.1 │ "setosa" │ │ 2 │ 1.9 │ 0.4 │ "setosa" │ │ 3 │ 4.6 │ 1.3 │ "versicolor" │ │ 4 │ 5.0 │ 1.7 │ "versicolor" │ │ 5 │ 3.7 │ 1.0 │ "versicolor" │ │ 6 │ 4.7 │ 1.5 │ "versicolor" │ │ 7 │ 4.6 │ 1.4 │ "versicolor" │ │ 8 │ 6.1 │ 2.5 │ "virginica" │ │ 9 │ 6.9 │ 2.3 │ "virginica" │ │ 10 │ 6.7 │ 2.0 │ "virginica" │ And that's it! Our data has been loaded back into a DataFrame. In this tutorial, we looked at the Iris dataset and worked on loading and saving the data in a simple Julia project.  To learn more about machine learning recommendation in Julia and testing the model check out this book Julia Programming Projects. Julia for machine learning. Will the new language pick up pace? Announcing Julia v1.1 with better exception handling and other improvement GitHub Octoverse: top machine learning packages, languages, and projects of 2018

How to make machine learning based recommendations using Julia [Tutorial]

Prasad Ramesh
08 Feb 2019
8 min read
In this article, we will look at machine learning based recommendations using Julia. We will make recommendations using a Julia package called 'Recommendation'. This article is an excerpt from a book written by Adrian Salceanu titled Julia Programming Projects. In this book, you will learn how to build simple-to-advanced applications through examples in Julia Lang 1.x using modern tools. In order to ensure that your code will produce the same results as described in this article, it is recommended to use the same package versions. Here are the external packages used in this tutorial and their specific versions: CSV@v.0.4.3 DataFrames@v0.15.2 Gadfly@v1.0.1 IJulia@v1.14.1 Recommendation@v0.1.0+ In order to install a specific version of a package you need to run: pkg> add PackageName@vX.Y.Z For example: pkg> add IJulia@v1.14.1 Alternatively, you can install all the used packages by downloading the Project.toml file provided on GitHub. You can use pkg> instantiate as follows: julia> download("https://raw.githubusercontent.com/PacktPublishing/Julia-Projects/master/Chapter07/Project.toml", "Project.toml") pkg> activate . pkg> instantiate Julia's ecosystem provides access to Recommendation.jl, a package that implements a multitude of algorithms for both personalized and non-personalized recommendations. For model-based recommenders, it has support for SVD, MF, and content-based recommendations using TF-IDF scoring algorithms. There's also another very good alternative—the ScikitLearn.jl package (https://github.com/cstjean/ScikitLearn.jl). This implements Python's very popular scikit-learn interface and algorithms in Julia, supporting both models from the Julia ecosystem and those of the scikit-learn library (via PyCall.jl). The Scikit website and documentation can be found at http://scikit-learn.org/stable/. It is very powerful and definitely worth keeping in mind, especially for building highly efficient recommenders for production usage. For learning purposes, we'll stick to Recommendation, as it provides for a simpler implementation. Making recommendations with Recommendation For our learning example, we'll use Recommendation. It is the simplest of the available options, and it's a good teaching device, as it will allow us to further experiment with its plug-and-play algorithms and configurable model generators. Before we can do anything interesting, though, we need to make sure that we have the package installed: pkg> add Recommendation#master julia> using Recommendation Please note that I'm using the #master version, because the tagged version, at the time of writing this book, was not yet fully updated for Julia 1.0. The workflow for setting up a recommender with Recommendation involves three steps: Setting up the training data Instantiating and training a recommender using one of the available algorithms Once the training is complete, asking for recommendations Let's implement these steps. Setting up the training data Recommendation uses a DataAccessor object to set up the training data. This can be instantiated with a set of Event objects. A Recommendation.Event is an object that represents a user-item interaction. It is defined like this: struct Event user::Int item::Int value::Float64 end In our case, the user field will represent the UserID, the item field will map to the ISBN, and the value field will store the Rating. However, a bit more work is needed to bring our data in the format required by Recommendation: First of all, our ISBN data is stored as a string and not as an integer. 
Second, internally, Recommendation builds a sparse matrix of user *  item and stores the corresponding values, setting up the matrix using sequential IDs. However, our actual user IDs are large numbers, and Recommendation will set up a very large, sparse matrix, going all the way from the minimum to the maximum user IDs. What this means is that, for example, we only have 69 users in our dataset (as confirmed by unique(training_data[:UserID]) |> size), with the largest ID being 277,427, while for books we have 9,055 unique ISBNs. If we go with this, Recommendation will create a 277,427 x 9,055 matrix instead of a 69 x 9,055 matrix. This matrix would be very large, sparse, and inefficient. Therefore, we'll need to do a bit more data processing to map the original user IDs and the ISBNs to sequential integer IDs, starting from 1. We'll use two Dict objects that will store the mappings from the UserID and ISBN columns to the recommender's sequential user and book IDs. Each entry will be of the form dict[original_id] = sequential_id: julia> user_mappings, book_mappings = Dict{Int,Int}(), Dict{String,Int}() We'll also need two counters to keep track of, and increment, the sequential IDs: julia> user_counter, book_counter = 0, 0 We can now prepare the Event objects for our training data: julia> events = Event[] julia> for row in eachrow(training_data) global user_counter, book_counter user_id, book_id, rating = row[:UserID], row[:ISBN], row[:Rating] haskey(user_mappings, user_id) || (user_mappings[user_id] = (user_counter += 1)) haskey(book_mappings, book_id) || (book_mappings[book_id] = (book_counter += 1)) push!(events, Event(user_mappings[user_id], book_mappings[book_id], rating)) end This will fill up the events array with instances of Recommendation.Event, which represents a unique UserID, ISBN, and Rating combination. To give you an idea, it will look like this: julia> events 10005-element Array{Event,1}: Event(1, 1, 10.0) Event(1, 2, 8.0) Event(1, 3, 9.0) Event(1, 4, 8.0) Event(1, 5, 8.0) # output omitted # Please remember this very important aspect—in Julia, the for loop defines a new scope. This means that variables defined outside the for loop are not accessible inside it. To make them visible within the loop's body, we need to declare them as global. Now, we are ready to set up our DataAccessor: julia> da = DataAccessor(events, user_counter, book_counter) Building and training the recommender At this point, we have all that we need to instantiate our recommender. A very efficient and common implementation uses MF—unsurprisingly, this is one of the options provided by the Recommendation package, so we'll use it. Matrix Factorization The idea behind MF is that, if we're starting with a large sparse matrix like the one used to represent user x profile ratings, then we can represent it as the product of multiple smaller and denser matrices. The challenge is to find these smaller matrices so that their product is as close to our original matrix as possible. Once we have these, we can fill in the blanks in the original matrix so that the predicted values will be consistent with the existing ratings in the matrix: Our user x books rating matrix can be represented as the product between smaller and denser users and books matrices. To perform the matrix factorization, we can use a couple of algorithms, among which the most popular are SVD and Stochastic Gradient Descent (SGD). Recommendation uses SGD to perform matrix factorization. 
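To build some intuition for what the training step will be doing, here is a tiny, self-contained sketch of matrix factorization trained with SGD on a made-up 3 x 3 ratings matrix. It is purely illustrative: the numbers, learning rate, and epoch count are arbitrary, and this is not how the Recommendation package is implemented internally.

using LinearAlgebra   # for dot()

R = [5.0 3.0 0.0;     # a toy ratings matrix; 0.0 marks a missing rating
     4.0 0.0 1.0;
     1.0 1.0 5.0]
k = 2                 # number of latent factors
P = rand(3, k)        # user factors
Q = rand(3, k)        # item factors
lr = 0.01             # learning rate

for epoch in 1:2000
    for i in 1:3, j in 1:3
        R[i, j] == 0.0 && continue                # skip missing ratings
        err = R[i, j] - dot(P[i, :], Q[j, :])     # prediction error for this cell
        P[i, :] += lr * err * Q[j, :]             # nudge both factors to shrink the error
        Q[j, :] += lr * err * P[i, :]
    end
end

P * Q'   # reconstructed matrix; the zero cells now hold predicted ratings

After the loop, the observed entries of P * Q' should sit close to the originals, while the positions that started as 0.0 contain the model's guesses, which is exactly the property a recommender exploits. Recommendation's MF recommender performs this kind of optimization for us, at a much larger scale.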
The code for this looks as follows: julia> recommender = MF(da) julia> build(recommender) We instantiate a new MF recommender and then we build it—that is, train it. The build step might take a while (a few minutes on a high-end computer using the small dataset that's provided on GitHub). If we want to tweak the training process, since SGD implements an iterative approach for matrix factorization, we can pass a max_iter argument to the build function, asking it for a maximum number of iterations. The more iterations we do, in theory, the better the recommendations—but the longer it will take to train the model. If you want to speed things up, you can invoke the build function with a max_iter of 30 or less—build(recommender, max_iter = 30). We can pass another optional argument for the learning rate, for example, build (recommender, learning_rate=15e-4, max_iter=100). The learning rate specifies how aggressively the optimization technique should vary between each iteration. If the learning rate is too small, the optimization will need to be run a lot of times. If it's too big, then the optimization might fail, generating worse results than the previous iterations. Making recommendations Now that we have successfully built and trained our model, we can ask it for recommendations. These are provided by the recommend function, which takes an instance of a recommender, a user ID (from the ones available in the training matrix), the number of recommendations, and an array of books ID from which to make recommendations as its arguments: julia> recommend(recommender, 1, 20, [1:book_counter...]) With this line of code, we retrieve the recommendations for the user with the recommender ID 1, which corresponds to the UserID 277427 in the original dataset. We're asking for up to 20 recommendations that have been picked from all the available books. We get back an array of a Pair of book IDs and recommendation scores: 20-element Array{Pair{Int64,Float64},1}: 5081 => 19.1974 5079 => 19.1948 5078 => 19.1946 5077 => 17.1253 5080 => 17.1246 # output omitted # In this article, we learned how to make recommendations with machine learning in Julia.  To learn more about machine learning recommendation in Julia and testing the model check out this book Julia Programming Projects. YouTube to reduce recommendations of ‘conspiracy theory’ videos that misinform users in the US How to Build a music recommendation system with PageRank Algorithm How to build a cold-start friendly content-based recommender using Apache Spark SQL

How to create a desktop application with Electron [Tutorial]

Bhagyashree R
06 Feb 2019
15 min read
Electron is an open source framework, created by GitHub, that lets you develop desktop executables that bring together Node and Chrome to provide a full GUI experience. Electron has been used for several well-known projects, including developer tools such as Visual Studio Code, Atom, and Light Table. Basically, you can define the UI with HTML, CSS, and JS (or using React, as we'll be doing), but you can also use all of the packages and functions in Node. So, you won't be limited to a sandboxed experience, being able to go beyond what you could do with just a browser. This article is taken from the book  Modern JavaScript Web Development Cookbook by Federico Kereki.  This problem-solving guide teaches you popular problems solving techniques for JavaScript on servers, browsers, mobile phones, and desktops. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository. In this article, we will look at how we can use Electron together with the tools like, React and Node, to create a native desktop application, which you can distribute to users. Setting up Electron We will start with installing Electron, and then in the later recipes, we'll see how we can turn a React app into a desktop program. You can install Electron by executing the following command: npm install electron --save-dev Then, we'll need a starter JS file. Taking some tips from the main.js file, we'll create the following electron-start.js file: // Source file: electron-start.js /* @flow */ const { app, BrowserWindow } = require("electron"); let mainWindow; const createWindow = () => { mainWindow = new BrowserWindow({ height: 768, width: 1024 }); mainWindow.loadURL("http://localhost:3000"); mainWindow.on("closed", () => { mainWindow = null; }); }; app.on("ready", createWindow); app.on("activate", () => mainWindow === null && createWindow()); app.on( "window-all-closed", () => process.platform !== "darwin" && app.quit() ); Here are some points to note regarding the preceding code snippet: This code runs in Node, so we are using require() instead of import The mainWindow variable will point to the browser instance where our code will run We'll start by running our React app, so Electron will be able to load the code from http://localhost:3000 In our code, we also have to process the following events: "ready" is called when Electron has finished its initialization and can start creating windows. "closed" means your window was closed; your app might have several windows open, so at this point, you should delete the closed one. "window-all-closed" implies your whole app was closed. In Windows and Linux, this means quitting, but for macOS, you don't usually quit applications, because of Apple' s usual rules. "activate" is called when your app is reactivated, so if the window had been deleted (as in Windows or Linux), you have to create it again. We already have our React app (you can find the React app in the GitHub repository) in place, so we just need a way to call Electron. Add the following script to package.json, and you'll be ready: "scripts": { "electron": "electron .", . . . How it works... To run the Electron app in development mode, we have to do the following: Run our restful_server_cors server code from the GitHub repository. Start the React app, which requires the server to be running. Wait until it's loaded, and then and only then, move on to the next step. Start Electron. 
So, basically, you'll have to run the following two commands, but you'll need to do so in separate terminals: // in the directory for our restful server: node out/restful_server_cors.js // in the React app directory: npm start // and after the React app is running, in other terminal: npm run electron After starting Electron, a screen quickly comes up, and we again find our countries and regions app, now running independently of a browser: The app works as always; as an example, I selected a country, Canada, and correctly got its list of regions: We are done! You can see that everything is interconnected, as before, in the sense that if you make any changes to the React source code, they will be instantly reflected in the Electron app. Adding Node functionality to your app In the previous recipe, we saw that with just a few small configuration changes, we can turn our web page into an application. However, you're still restricted in terms of what you can do, because you are still using only those features available in a sandboxed browser window. You don't have to think this way, for you can add basically all Node functionality using functions that let you go beyond the limits of the web. Let's see how to do it in this recipe. How to do it We want to add some functionality to our app of the kind that a typical desktop would have. The key to adding Node functions to your app is to use the remote module in Electron. With it, your browser code can invoke methods of the main process, and thus gain access to extra functionality. Let's say we wanted to add the possibility of saving the list of a country's regions to a file. We'd require access to the fs module to be able to write a file, and we'd also need to open a dialog box to select what file to write to. In our serviceApi.js file, we would add the following functions: // Source file: src/regionsApp/serviceApi.js /* @flow */ const electron = window.require("electron").remote; . . . const fs = electron.require("fs"); export const writeFile = fs.writeFile.bind(fs); export const showSaveDialog = electron.dialog.showSaveDialog; Having added this, we can now write files and show dialog boxes from our main code. To use this functionality, we could add a new action to our world.actions.js file: // Source file: src/regionsApp/world.actions.js /* @flow */ import { getCountriesAPI, getRegionsAPI, showSaveDialog, writeFile } from "./serviceApi"; . . . export const saveRegionsToDisk = () => async ( dispatch: ({}) => any, getState: () => { regions: [] } ) => { showSaveDialog((filename: string = "") => { if (filename) { writeFile(filename, JSON.stringify(getState().regions), e => e && window.console.log(`ERROR SAVING ${filename}`, e); ); } }); }; When the saveRegionsToDisk() action is dispatched, it will show a dialog to prompt the user to select what file is to be written, and will then write the current set of regions, taken from getState().regions, to the selected file in JSON format. 
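Reading the regions back from disk follows the same pattern. The following is a hypothetical sketch of a companion action: showOpenDialog and readFile mirror the Electron dialog and Node fs functions we would expose through serviceApi.js in the same way as above, but loadRegionsFromDisk and the "regions:set" action type are invented here for illustration and are not part of the book's project.

// Hypothetical additions to serviceApi.js
export const showOpenDialog = electron.dialog.showOpenDialog;
export const readFile = fs.readFile.bind(fs);

// Hypothetical action in world.actions.js
export const loadRegionsFromDisk = () => async (dispatch: ({}) => any) => {
  showOpenDialog({ properties: ["openFile"] }, (filenames: Array<string> = []) => {
    if (filenames.length) {
      readFile(filenames[0], "utf8", (e, contents) => {
        if (e) {
          window.console.log(`ERROR READING ${filenames[0]}`, e);
        } else {
          // dispatch the parsed regions with whatever action your reducer expects
          dispatch({ type: "regions:set", payload: JSON.parse(contents) });
        }
      });
    }
  });
};

A button wired to this action through connect() would then populate the table from a previously saved file, mirroring the save flow described next.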
We just have to add the appropriate button to our <RegionsTable> component to be able to dispatch the necessary action: // Source file: src/regionsApp/regionsTableWithSave.component.js /* @flow */ import React from "react"; import PropTypes from "prop-types"; import "../general.css"; export class RegionsTable extends React.PureComponent<{ loading: boolean, list: Array<{ countryCode: string, regionCode: string, regionName: string }>, saveRegions: () => void }> { static propTypes = { loading: PropTypes.bool.isRequired, list: PropTypes.arrayOf(PropTypes.object).isRequired, saveRegions: PropTypes.func.isRequired }; static defaultProps = { list: [] }; render() { if (this.props.list.length === 0) { return <div className="bordered">No regions.</div>; } else { const ordered = [...this.props.list].sort( (a, b) => (a.regionName < b.regionName ? -1 : 1) ); return ( <div className="bordered"> {ordered.map(x => ( <div key={x.countryCode + "-" + x.regionCode}> {x.regionName} </div> ))} <div> <button onClick={() => this.props.saveRegions()}> Save regions to disk </button> </div> </div> ); } } } We are almost done! When we connect this component to the store, we'll simply add the new action, as follows: // Source file: src/regionsApp/regionsTableWithSave.connected.js /* @flow */ import { connect } from "react-redux"; import { RegionsTable } from "./regionsTableWithSave.component"; import { saveRegionsToDisk } from "./world.actions"; const getProps = state => ({ list: state.regions, loading: state.loadingRegions }); const getDispatch = (dispatch: any) => ({ saveRegions: () => dispatch(saveRegionsToDisk()) }); export const ConnectedRegionsTable = connect( getProps, getDispatch )(RegionsTable); How it works The code we added showed how we could gain access to a Node package (fs, in our case) and some extra functions, such as showing a Save to disk dialog. When we run our updated app and select a country, we'll see our newly added button, as in the following screenshot: Clicking on the button will pop up a dialog, allowing you to select the destination for the data: If you click Save, the list of regions will be written in JSON format, as we specified earlier in our writeRegionsToDisk() function. Building a more windowy experience In the previous recipe, we added the possibility of using any and all of the functions provided by Node. In this recipe, let's now focus on making our app more window-like, with icons, menus, and so on. We want the user to really believe that they're using a native app, with all the features that they would be accustomed to. The following list of interesting subjects from Electron APIs is just a short list of highlights, but there are many more available options: clipboardTo do copy and paste operations using the system's clipboarddialogTo show the native system dialogs for messages, alerts, opening and saving files, and so onglobalShortcutTo detect keyboard shortcutsMenu, MenuItemTo create a menu bar with menus and submenusNotificationTo add desktop notificationspowerMonitor, powerSaveBlockerTo monitor power state changes, and to disable entering sleep modescreenTo get information about the screen, displays, and so onTrayTo add icons and context menus to the system's tray Let's add a few of these functions so that we can get a better-looking app that is more integrated to the desktop. 
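As a taste of how small these integrations tend to be, here is a quick sketch that uses the clipboard module from the preceding list. The copyRegionsToClipboard helper is invented for this example; clipboard.writeText is the actual Electron call.

// A hypothetical helper, following the style of serviceApi.js
const { clipboard } = window.require("electron");

export const copyRegionsToClipboard = (regions: Array<{}>) => {
  // Place the current regions on the system clipboard as formatted JSON text
  clipboard.writeText(JSON.stringify(regions, null, 2));
};

Most of the modules listed above work in the same spirit: require them, call a function or two, and the desktop takes care of the rest.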
How to do it Any decent app should probably have at least an icon and a menu, possibly with some keyboard shortcuts, so let's add those features now, and just for the sake of it, let's also add some notifications for when regions are written to disk. Together with the Save dialog we already used, this means that our app will include several native windowing features. To start with, let's add an icon. Showing an icon is the simplest thing because it just requires an extra option when creating the BrowserWindow() object. I'm not very graphics-visual-designer oriented, so I just downloaded the Alphabet, letter, r Icon Free file from the Icon-Icons website. Implement the icon as follows: mainWindow = new BrowserWindow({ height: 768, width: 1024, icon: "./src/regionsApp/r_icon.png" }); You can also choose icons for the system tray, although there's no way of using our regions app in that context, but you may want to look into it nonetheless. To continue, the second feature we'll add is a menu, with some global shortcuts to boot. In our App.regions.js file, we'll need to add a few lines to access the Menu module, and to define our menu itself: // Source file: src/App.regions.js . . . import { getRegions } from "./regionsApp/world.actions"; . . . const electron = window.require("electron").remote; const { Menu } = electron; const template = [ { label: "Countries", submenu: [ { label: "Uruguay", accelerator: "Alt+CommandOrControl+U", click: () => store.dispatch(getRegions("UY")) }, { label: "Hungary", accelerator: "Alt+CommandOrControl+H", click: () => store.dispatch(getRegions("HU")) } ] }, { label: "Bye!", role: "quit" } ]; const mainMenu = Menu.buildFromTemplate(template); Menu.setApplicationMenu(mainMenu); Using a template is a simple way to create a menu, but you can also do it manually, adding item by item. I decided to have a Countries menu with two options to show the regions for Uruguay and Hungary. The click property dispatches the appropriate action. I also used the accelerator property to define global shortcuts. See the accelerator.md for the list of possible key combinations to use, including the following: Command keys, such as Command (or Cmd), Control (or Ctrl), or both (CommandOrControl or CmdOrCtrl) Alternate keys, such as Alt, AltGr, or Option Common keys, such as Shift, Escape (or Esc), Tab, Backspace, Insert, or Delete Function keys, such as F1 to F24 Cursor keys, including Up, Down, Left, Right, Home, End, PageUp, and PageDown Media keys, such as MediaPlayPause, MediaStop, MediaNextTrack, MediaPreviousTrack, VolumeUp, VolumeDown, and VolumeMute I also want to be able to quit the application. A complete list of roles is available at Electron docs. With these roles, you can do a huge amount, including some specific macOS functions, along with the following: Work with the clipboard (cut, copy, paste, and pasteAndMatchStyle) Handle the window (minimize, close, quit, reload, and forceReload) Zoom (zoomIn, zoomOut, and resetZoom) To finish, and really just for the sake of it, let's add a notification trigger for when a file is written. Electron has a Notification module, but I opted to use node-notifier, which is quite simple to use. First, we'll add the package in the usual fashion: npm install node-notifier --save In serviceApi.js, we'll have to export the new function, so we'll able to import from elsewhere, as we'll see shortly: const electron = window.require("electron").remote; . . . 
export const notifier = electron.require("node-notifier"); Finally, let's use this in our world.actions.js file: import { notifier, . . . } from "./serviceApi"; With all our setup, actually sending a notification is quite simple, requiring very little code: // Source file: src/regionsApp/world.actions.js . . . export const saveRegionsToDisk = () => async ( dispatch: ({}) => any, getState: () => { regions: [] } ) => { showSaveDialog((filename: string = "") => { if (filename) { writeFile(filename, JSON.stringify(getState().regions), e => { if (e) { window.console.log(`ERROR SAVING ${filename}`, e); } else { notifier.notify({ title: "Regions app", message: `Regions saved to ${filename}` }); } }); } }); }; How it works First, we can easily check that the icon appears: Now, let's look at the menu. It has our options, including the shortcuts: Then, if we select an option with either the mouse or the global shortcut, the screen correctly loads the expected regions: Finally, let's see if the notifications work as expected. If we click on the Save regions to disk button and select a file, we'll see a notification, as in the following screenshot: Making a distributable package Now that we have a full app, all that's left to do is package it up so that you can deliver it as an executable file for Windows, Linux, or macOS users. How to do it. There are many ways of packaging an app, but we'll use a tool, electron-builder, that will make it even easier, if you can get its configuration right! First of all, we'll have to begin by defining the build configuration, and our initial step will be, as always, to install the tool: npm install electron-builder --save-dev To access the added tool, we'll require a new script, which we'll add in package.json: "scripts": { "dist": "electron-builder", . . . } We'll also have to add a few more details to package.json, which are needed for the build process and the produced app. In particular, the homepage change is required, because the CRA-created index.html file uses absolute paths that won't work later with Electron: "name": "chapter13", "version": "0.1.0", "description": "Regions app for chapter 13", "homepage": "./", "license": "free", "author": "Federico Kereki", Finally, some specific building configuration will be required. You cannot build for macOS with a Linux or Windows machine, so I'll leave that configuration out. We have to specify where the files will be found, what compression method to use, and so on: "build": { "appId": "com.electron.chapter13", "compression": "normal", "asar": true, "extends": null, "files": [ "electron-start.js", "build/**/*", "node_modules/**/*", "src/regionsApp/r_icon.png" ], "linux": { "target": "zip" }, "win": { "target": "portable" } } We have completed the required configuration, but there are also some changes to do in the code itself, and we'll have to adapt the code for building the package. When the packaged app runs, there won't be any webpack server running; the code will be taken from the built React package. 
The starter code will require the following changes:

// Source file: electron-start.for.builder.js

/* @flow */

const { app, BrowserWindow } = require("electron");
const path = require("path");
const url = require("url");

let mainWindow;

const createWindow = () => {
    mainWindow = new BrowserWindow({
        height: 768,
        width: 1024,
        icon: path.join(__dirname, "./build/r_icon.png")
    });

    mainWindow.loadURL(
        url.format({
            pathname: path.join(__dirname, "./build/index.html"),
            protocol: "file",
            slashes: true
        })
    );

    mainWindow.on("closed", () => {
        mainWindow = null;
    });
};

app.on("ready", createWindow);

app.on("activate", () => mainWindow === null && createWindow());

app.on(
    "window-all-closed",
    () => process.platform !== "darwin" && app.quit()
);

Mainly, we are taking the icon and the code from the build/ directory. An npm run build command will take care of generating that directory, so we can proceed with creating our executable app.

How it works

After doing this setup, building the app is essentially trivial. Just run the dist script we defined earlier, and all the distributable files will be found in the dist/ directory:

npm run dist

Now that we have the Linux app, we can run it by unzipping the .zip file and clicking on the chapter13 executable. (The name comes from the "name" attribute in package.json, which we modified earlier.) The result should be like what's shown in the following screenshot:

I also wanted to try out the Windows EXE file. Since I didn't have a Windows machine, I made do by downloading a free VirtualBox virtual machine. After downloading the virtual machine, setting it up in VirtualBox, and finally running it, the result was the same as for Linux:

So, we've developed a React app, enhanced it with Node and Electron features, and finally packaged it for different operating systems. With that, we are done!

If you found this post useful, do check out the book, Modern JavaScript Web Development Cookbook. You will learn how to create native mobile applications for Android and iOS with React Native, build client-side web applications using React and Redux, and much more.

How to perform event handling in React [Tutorial]
Flutter challenges Electron, soon to release a desktop client to accelerate mobile development
Electron 3.0.0 releases with experimental textfield, and button APIs

6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I’m not about to tell you containers is a hot new trend - clearly, it isn’t. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can’t be described as a niche or marginal way of deploying applications, they aren’t necessarily ubiquitous. There are still developers or development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them. Joking aside, there are often many reasons why people aren’t using containers. Sometimes these are good reasons: maybe you just don’t need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it’s worth. If everything seems to be (just about) working, why shake things up? Well, I’m here to tell you that more often than not it is worthwhile. But to know that you’re not wasting your time and energy, there are a few important signs that can tell you if you should be using containers. Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.  Your codebase is too complex There are few developers in the world who would tell you that their codebase couldn’t do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and doesn’t really understand, containers could probably help you a lot. Why do containers help simplify your codebase? Let’s think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it’s something that evolves out of years of solving intractable problems with knock on effects and consequences that only need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers is so closely aligned with microservices. Software testing is nightmarish The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says more about a general inability to treat testing with the respect and time it deserves as much as anything else. How do containers make testing easier? There are a number of reasons containers make software testing easier. On the one hand, by using containers you’re reducing that gap between the development environment and production, which means you shouldn’t be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don’t need a fully-fledged testing environment for every application you do tests on. What this all boils down to is that testing becomes much quicker and easier. In theory, then, this means the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow. Read next: How to build 12 factor microservices on Docker Your software isn’t secure - you’ve had breaches that could have been prevented Spaghetti code, lack of effective testing can lead to major security risks. 
Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks. If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier?

Because containers allow you to make changes to individual parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any change could have knock-on effects on other parts of your code that you can't predict.

That said, it is worth mentioning that containers still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are trading simplicity at that level for increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments work together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on traditional compute resources, the tools around container management and automation are getting cheaper.

One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price; it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that restrict your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premises servers or virtual machines - is a fixed cost: you invest in the resources you need and then either use them or you don't. With containers running in, say, the cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money to AWS or Azure, but because containers are much more portable, moving your applications between providers involves much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code being written and deployed is too complicated for anyone to properly take accountability for it. This takes us back to the issue of the complex codebase. Think of it this way: if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically.

How do containers solve DevOps challenges?

Containers can help solve the problems that DevOps aims to tackle by breaking software up into distinct pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than anything DevOps proponents managed in the pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you can't easily make changes or updates. It's like wanting to replace a radiator and having to redo your house's plumbing. This returns us to the earlier point about DevOps - containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you don't need to go into your application and make wholesale changes that may have unintended consequences; instead, you can make the change by running the resources you need inside a new container (or set of containers).
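As a rough illustration of "killing one container and replacing it with a new one", the following sketch again uses the Docker SDK for Python; the container name web, the image tag myapp:2.0, and the port mapping are placeholder assumptions for the sake of the example:

import docker
from docker.errors import NotFound

client = docker.from_env()

# Stop and remove the currently running container, if it exists
# (the name "web" is a placeholder)
try:
    old = client.containers.get("web")
    old.stop()
    old.remove()
except NotFound:
    print("No existing container named 'web'; starting fresh")

# Start a replacement container from the updated image
new = client.containers.run(
    "myapp:2.0",          # placeholder tag for the image carrying the new feature
    name="web",
    detach=True,
    ports={"8080/tcp": 8080},
)
print("Replacement container started:", new.short_id)

The rest of the application keeps running while this one piece is swapped out, which is the radiator-style replacement described above.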
Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all.

Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.