Test Script Execution Recording Using Selenium WebDriver


Everyone knows how to take a screenshot in Selenium WebDriver, but when asked how to record a video of a test script run in WebDriver, most people have no idea. So here I describe how to capture a video of test script execution in Selenium WebDriver.

When the execution time of your script is long, you can record the run and then analyze the script afterwards.

To record the script, you have to download the ATUTestRecorder JAR file and include it in your project.

So here is the code for recording the script.


import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import atu.testrecorder.ATUTestRecorder;

public class SeleniumRecord {

    public static void main(String[] args) throws Exception {
        // The recorder writes the video into the given folder, using the given file-name prefix
        ATUTestRecorder recorder = new ATUTestRecorder("/home/manoj/Desktop/recording", "RECORDINGVIDEO-", false);

        // Start recording before the browser actions begin
        recorder.start();

        System.setProperty("webdriver.chrome.driver", "/home/manoj/Downloads/chromedriver");
        WebDriver driver = new ChromeDriver();
        driver.manage().window().maximize();
        driver.get("http://www.gmail.com");
        Thread.sleep(2000);
        driver.close();

        // Stop recording; the video file is saved in the folder given above
        recorder.stop();
    }
}

For the recording, I created a folder named recording; its path is passed to the ATUTestRecorder constructor above. After the test script finishes executing, the recorded video appears in that folder. Here I am attaching a screenshot of the video.

[Screenshot: the recorded video file inside the recording folder]

Feel free to ask me any questions.

Thanks.

Posted in Scala, testing

Create Your Own Metastore Event Listeners in Hive With Scala


Hive metastore event listeners are used to detect every event that takes place in the Hive metastore. If you want some action to be taken when a particular event occurs, you can extend MetaStorePreEventListener and provide your own implementation.

In this article, we will learn how to create our own metastore event listener in Hive using Scala and sbt.

So let’s get started. First, add the following dependencies to your build.sbt file:

libraryDependencies += "org.apache.hive" % "hive-exec" % "1.2.1" excludeAll
  ExclusionRule(organization = "org.pentaho")

libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.7.3"

libraryDependencies += "org.apache.httpcomponents" % "httpclient" % "4.3.4"

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0"

libraryDependencies += "org.apache.hive" % "hive-service" % "1.2.1"

unmanagedJars in Compile += file("/usr/lib/hive/lib/hive-exec-1.2.1.jar")

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

Now create your first class. You can name it anything; I named it OrcMetastoreListener. This class must extend Hive’s MetaStorePreEventListener class and take a Hadoop Configuration as its constructor argument:

package metastorelisteners

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hive.metastore.MetaStorePreEventListener
import org.apache.hadoop.hive.metastore.events.PreEventContext.PreEventType._
import org.apache.hadoop.hive.metastore.events._

class OrcMetastoreListener(conf: Configuration) extends MetaStorePreEventListener(conf) {

  override def onEvent(preEventContext: PreEventContext): Unit = {
    preEventContext.getEventType match {
      case CREATE_TABLE =>
        val tableName = preEventContext.asInstanceOf[PreCreateTableEvent].getTable
        tableName.getSd.setInputFormat("org.apache.hadoop.hive.ql.io.orc.OrcInputFormat")
        tableName.getSd.setOutputFormat("org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat")
      case ALTER_TABLE =>
        val newTableName = preEventContext.asInstanceOf[PreAlterTableEvent].getNewTable
        newTableName.getSd.setInputFormat("org.apache.hadoop.hive.ql.io.orc.OrcInputFormat")
        newTableName.getSd.setOutputFormat("org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat")
      case _ => //do nothing

    }

  }
}

The PreEventContext gives access to all the Hive metastore events. In my case, whenever a table is created in Hive, I want it to use the ORC input and output formats, and the same applies to the ALTER TABLE command.

The best use case for this listener is when somebody wants to query the data from Spark or any other engine using their own custom input format, and doesn’t want to alter the schema of the Hive table to use that custom input format.

Now let’s build a jar from the code and use it in Hive.

First, add the sbt-assembly plugin to your plugins.sbt file:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")

Now go to your project root and build the jar with the command sbt assembly.

It will build your jar; collect the jar and put it in your $HIVE_HOME/lib path.

Inside the $HIVE_HOME/conf folder, add the following contents to hive-site.xml:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:metastore_db;create=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.metastore.pre.event.listeners</name>
    <value>metastorelisteners.OrcMetastoreListener</value>
  </property>
</configuration>

Now create a table in Hive and describe it:

hive> CREATE TABLE HIVETABLE(ID INT);
OK
Time taken: 2.742 seconds
hive> DESC EXTENDED HIVETABLE
    > ;
OK
id                  	int                 	                    
	 	 
Detailed Table Information Table(tableName:hivetable, dbName:default, owner:hduser,e, inputFormat:org.apache.hadoop.hive.ql.io.orc.OrcInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat, compressed:false, 
Time taken: 0.611 seconds, Fetched: 3 row(s)



Posted in Scala

Scripting Library in Scala – Ammonite


Ammonite is a Scala library that lets us use the Scala language for scripting, i.e. it allows us to write scripts in Scala. The advantage of using Ammonite is that we don’t have to switch over to Python or Bash for the scripting requirements of a project. This frees the developer from having to work in multiple languages.

Ammonite can be used as a REPL, in scripts, as a library in existing projects, or as a standalone system shell.
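To give a feel for the scripting side, here is a small, purely illustrative sketch of an Ammonite script; the file name hello.sc, the method name, and the greeting are my own examples, not part of the original post.

// hello.sc -- a minimal Ammonite script (file name and contents are illustrative)
// Run it from a terminal with: amm hello.sc Knoldus
@main
def entry(name: String): Unit = {
  println(s"Hello, $name, from an Ammonite script!")
}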

Ammonite REPL
It is an improved Scala REPL. It has more features than the standard REPL and is loaded with ergonomic improvements such as pretty printing and syntax highlighting, and it supports configurability: configuration can be put in the predef.sc file.
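As a rough sketch of that configurability (the prompt string and the library coordinates below are illustrative assumptions, not from the original post), a predef.sc could look like this:

// ~/.ammonite/predef.sc -- executed at the start of every REPL session
// Change the REPL prompt (purely cosmetic)
repl.prompt() = "amm> "
// Pull a library into every session; the artifact and version are illustrative
import $ivy.`com.lihaoyi::scalatags:0.6.7`, scalatags.Text.all._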

To get started with Ammonite as a Scala shell, download the standalone Ammonite 1.0.2 executable for Scala 2.12:

sudo curl -L -o /usr/local/bin/amm https://git.io/v5Tct && sudo chmod +x /usr/local/bin/amm

You can also set the path in your .bashrc:

#Set Ammonite Home
export AMMONITE_HOME="path/amm"

Ammonite as a Library in an Existing Project

Continue reading

Posted in Scala

Getting Started With Phantom



Phantom is a reactive, type-safe Scala driver for Apache Cassandra/DataStax Enterprise. So, first let’s explore what Apache Cassandra is with a basic introduction to it.

Apache Cassandra

Apache Cassandra is a free, open-source data storage system that was created at Facebook in 2008. It is a highly scalable database designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. It is a type of NoSQL database and is schema-free. For more about Cassandra, refer to the blog Getting Started With Cassandra.

Phantom-DSL

We wanted to integrate Cassandra into the Scala ecosystem, which is why we used Phantom-DSL, one of the modules from Outworkers. So, if you are planning on using Cassandra with Scala, Phantom is the weapon of choice, because of the following (a rough sketch of a table definition follows the list):

  • Ease of use and quality coding.
  • Reducing code and boilerplate by at least 90%.
  • Automated schema generation
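As a rough, illustrative sketch of what this looks like in practice (the User/Users names below are mine, and the exact column DSL varies between Phantom versions; this follows the older explicit style), a Cassandra table mapped onto a simple case class can be defined roughly like this:

import java.util.UUID
import com.outworkers.phantom.dsl._

case class User(id: UUID, name: String)

// A Cassandra table mapped onto the User case class (illustrative only).
// Newer Phantom releases also accept a shorter column syntax without (this)
// and can derive fromRow automatically.
class Users extends CassandraTable[Users, User] {
  object id extends UUIDColumn(this) with PartitionKey[UUID]
  object name extends StringColumn(this)

  // Convert a Cassandra row into the case class
  override def fromRow(row: Row): User = User(id(row), name(row))
}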

Continue reading

Posted in Scala

Introduction to Mesos


What is Mesos ?

In layman’s terms, imagine a busy airport.
Airplanes are constantly taking off and landing.
There are multiple runways, and an airport dispatcher is assigning time slots for airplanes to land or take off.
So Mesos is the airport dispatcher, the runways are compute nodes, the airplanes are compute tasks, and frameworks like Hadoop, Spark, and Google Kubernetes are the airline companies.

In technical terms, Apache Mesos is the first open-source cluster manager that handles workloads efficiently in a distributed environment through dynamic resource sharing and isolation. This means that you can run any distributed application that requires clustered resources, e.g. Spark, Hadoop, etc.

It sits between the application layer and the operating system and makes it easier to deploy and manage applications in large-scale clustered environments.

Mesos allows multiple services to scale and utilise a shared pool of servers more efficiently. The key idea behind Mesos is to turn your data center into one very large computer.

Apache Mesos is the opposite of virtualization because in virtualization one physical resource is divided into multiple virtual resources, while in Mesos multiple physical resources are clubbed into a single virtual resource.

Who is using it?

Prominent users of Mesos include Twitter, Airbnb, MediaCrossing, Xogito and Categorize. Airbnb uses Mesos to manage its big data infrastructure.

Mesos Internals:

Mesos leverages features of modern kernels for resource isolation, prioritisation, limiting, and accounting. This is normally done by cgroups in Linux and zones in Solaris. Mesos provides resource isolation for CPU, memory, I/O, file system, etc. It is also possible to use Linux containers, but current isolation support for Linux containers in Mesos is limited to CPU and memory.

Architecture of Mesos:

[Diagram: Mesos architecture]

Mesos Master:

The Mesos master is the heart of the cluster. It guarantees that the cluster will be highly available. It hosts the primary user interface, which provides information about the resources available in the cluster. The master is the central source of truth for all running tasks and stores all task-related data in memory. For completed tasks, only a fixed amount of memory is available, which allows the master to serve the user interface and task data with minimal latency.

Mesos Agent:

The Mesos agent holds and manages the container that hosts the executor (everything in Mesos runs inside a container). It manages the communication between the local executor and the Mesos master, so the agent acts as an intermediary between them. The Mesos agent publishes information about the host it is running on, including data about running tasks and executors, the available resources of the host, and other metadata. It guarantees the delivery of task status updates to the schedulers.

Mesos Framework:

A Mesos framework has two parts: the Scheduler and the Executor. The Scheduler registers itself with the Mesos master and in turn gets a unique framework id. It is the responsibility of the scheduler to launch tasks when the resource requirements and constraints match an offer received from the Mesos master. It is also responsible for handling task failures and errors. The Executor executes the tasks launched by the scheduler and reports the status of each task back.
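To make the scheduler half of a framework more concrete, here is a minimal sketch in Scala against the Mesos Java bindings (org.apache.mesos); it registers with a master and simply declines every offer. The framework name and master address are placeholders, and a real scheduler would build TaskInfos and call driver.launchTasks when an offer matches its requirements.

import org.apache.mesos.{MesosSchedulerDriver, Scheduler, SchedulerDriver}
import org.apache.mesos.Protos._
import scala.collection.JavaConverters._

class NoopScheduler extends Scheduler {
  // Called once the framework is registered and has received its unique framework id
  def registered(driver: SchedulerDriver, frameworkId: FrameworkID, masterInfo: MasterInfo): Unit =
    println(s"Registered with framework id ${frameworkId.getValue}")

  def reregistered(driver: SchedulerDriver, masterInfo: MasterInfo): Unit = ()

  // Resource offers arrive here; a real scheduler would match them against
  // pending work and launch tasks when the requirements and constraints fit
  def resourceOffers(driver: SchedulerDriver, offers: java.util.List[Offer]): Unit =
    offers.asScala.foreach(offer => driver.declineOffer(offer.getId))

  def offerRescinded(driver: SchedulerDriver, offerId: OfferID): Unit = ()

  // Task status updates (running, finished, failed, ...) are reported back here
  def statusUpdate(driver: SchedulerDriver, status: TaskStatus): Unit =
    println(s"Task ${status.getTaskId.getValue} is now ${status.getState}")

  def frameworkMessage(driver: SchedulerDriver, executorId: ExecutorID, slaveId: SlaveID, data: Array[Byte]): Unit = ()
  def disconnected(driver: SchedulerDriver): Unit = ()
  def slaveLost(driver: SchedulerDriver, slaveId: SlaveID): Unit = ()
  def executorLost(driver: SchedulerDriver, executorId: ExecutorID, slaveId: SlaveID, status: Int): Unit = ()
  def error(driver: SchedulerDriver, message: String): Unit = println(s"Scheduler error: $message")
}

object NoopFramework extends App {
  val framework = FrameworkInfo.newBuilder()
    .setUser("")               // empty user lets Mesos fill in the current user
    .setName("noop-framework") // placeholder framework name
    .build()

  // Placeholder master address; use your own master's host:port or zk:// URL
  val driver = new MesosSchedulerDriver(new NoopScheduler, framework, "127.0.0.1:5050")
  driver.run()
}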

Continue reading

Posted in Devops, Scala

Jenkins | Problems you might face while pipe-lining!


I expect you to be familiar with the basics of Jenkins. If you’re not, please visit Introduction to Jenkins; that post will take you through the very basics of Jenkins. What I want to introduce to you are post-setup concerns: you have already set up Jenkins and now you are wondering how to pipeline your project.

I will take you through the problems that might come up when you start working on a Jenkins Pipeline project that uses Docker images. To understand what a Docker image is, please visit Introduction to Docker; that post will take you through the basics of Docker images.

Pipe-Lining a Maven Project:

Creating a pipeline is similar to creating a simple Jenkins job, but here you have to give some different configurations for your job. Let’s start.

Steps:

1. Go to the Jenkins home page and click New Item.

2. Select the Pipeline option, give a suitable job name, and press OK.

3. Now give the proper configurations for this job, as described below:

a. In the General tab you can grant project-based security to a particular person or group of people and define which roles/permissions you want this person/group to have.

b. You don’t need to touch the other settings in the General/Job Notification/Office 365 Connector tabs for a simple pipeline.

c. In the next tab, Build Triggers, you can define the type of trigger you want to use to build your job automatically, or you can leave it blank if you want to trigger your build manually.

d. The most important configuration is the Pipeline tab.

i) There are two ways to make a pipeline: first, you can write a script in the given textbox via the Pipeline Script option; second, you can select Pipeline Script from SCM, provide a Jenkinsfile in your project, and give its path in Script Path.

ii) Then define your Source Code Management (SCM) in the SCM option, i.e. Git in our case.

iii) Then define your Git repository URL. You will see an error, as shown below in the image steps; the next image shows how to resolve it. You’ll have to create a proper Jenkins credential for the given repo and select it in the Credentials option. The error will then disappear.

Continue reading

Posted in Scala

Basic Example for Spark Structured Streaming & Kafka Integration


The Spark Streaming integration for Kafka 0.10 is similar in design to the 0.8 Direct Stream approach. It provides simple parallelism, 1:1 correspondence between Kafka partitions and Spark partitions, and access to offsets and metadata. However, because the newer integration uses the new Kafka consumer API instead of the simple API, there are notable differences in usage. This version of the integration is marked as experimental, so the API is potentially subject to change.

In this blog, I am going to implement a basic example of Spark Structured Streaming and Kafka integration.

Here, I am using

  • Apache Spark  2.2.0
  • Apache Kafka 0.11.0.1
  • Scala 2.11.8

Create the build.sbt

Let’s create an sbt project and add the following dependencies to build.sbt.

libraryDependencies ++= Seq("org.apache.spark" % "spark-sql_2.11" % "2.2.0",
                        "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.2.0",
                        "org.apache.kafka" % "kafka-clients" % "0.11.0.1")

Continue reading

Posted in Scala, Spark, Streaming

Welcome to the world of the Riak Database!!!


Today we are going to discuss the Riak database, which is a distributed NoSQL database. In the current scenario, when there is a huge amount of data in the world, we cannot rely on old technology for storing it. Users want to keep all of their data and process it at lightning-fast speed, so they use Big Data technology, but old databases are not compatible with Big Data technology. Riak provides the ability to distribute data across the nodes of a cluster and perform operations on it.

What is Riak?

Riak is highly distributed database software. It provides high availability, fault tolerance, operational simplicity, and scalability.

Riak is available as Riak Open Source and Riak Enterprise Edition, and comes in two variants – Riak KV and Riak TS.

Continue reading

Posted in big data, database, NoSql, Scala

Testing HTTP services in Angular


Prerequisites:

    1. Understanding of Angular
    2. Understanding of components’ unit tests in Angular
    3. Understanding of Karma and Jasmine

Http Service

Let’s consider a simple service that gets data using the get method of the Http service.

Let’s start with writing a test case for this service.

Configuring Testing Module for Service:

Continue reading

Posted in AngularJs2.0, JavaScript, testing

KNOLX : An Introduction to Jenkins


Hi all,

Knoldus organized a 30-minute session on 18th August 2017 at 3:30 PM. The topic was “An Introduction to Jenkins”. Many people joined and enjoyed the session. I am sharing the slides and the video of the session here. Please let me know if you have any questions related to the linked slides or doubts regarding the content.
Here are the slides:

And here’s the video of the session:

In case you have any doubts regarding the topic, you may ask them in the comments section below.




Posted in Scala