Hive-Metastore : A Basic Introduction


As we know, a database is one of the most important and powerful assets of any organisation. It is an organized collection of data: schemas, tables, relationships, queries and views. But have you ever thought about these questions?

  1. How does a database manage all the tables?
  2. How does a database manage all the relationships?
  3. How are we able to perform all these operations so easily?
  4. Is there any way to know about all this?

There is one answer to all these questions, and that is the metastore. In the metastore, the database keeps all the information related to our databases, tables and relations as metadata. Whenever we want to know something about the database, we can easily find that information in the metastore.

Here we will talk about the Hive metastore, which keeps all the information about Hive's tables and relations.

Hive-Metastore :

Every Hive installation needs a metastore service, where Hive stores its metadata. The metastore is implemented using tables in a relational database. By default, Hive uses the embedded Derby database, which provides single-process storage, so with Derby we cannot run more than one instance of the Hive CLI at a time. This is fine when we run Hive on a personal machine or for some development task, but when we want to use Hive on a cluster, MySQL or another similar relational database is required.

When you run a Hive query with the default Derby database, you will find that your current directory now contains a new sub-directory, metastore_db; the metastore is created there if it does not already exist. The property of interest here is javax.jdo.option.ConnectionURL. Its default value is jdbc:derby:;databaseName=metastore_db;create=true, which specifies that you are using embedded Derby as your Hive metastore and that the location of the metastore is metastore_db.

We can also configure the directory in which Hive stores table data. By default, the warehouse location is file:///user/hive/warehouse, and we can use the hive-site.xml file to configure a local or remote metastore.

hive-site.xml : We use hive-site.xml to change the configuration and tell Hive where the metastore database is stored. We use a JDBC-compliant database for the metastore because the default embedded database is not suitable for production. An example of hive-site.xml that uses a MySQL database for the metastore:

<configuration>

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore</value>
<description>metadata is stored in a MySQL server</description>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>user</value>
<description>user name for connecting to mysql server </description>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
<description>password for connecting to mysql server </description>
</property>

</configuration>

To use the MySQL JDBC driver, download Connector/J (the MySQL JDBC driver), place it in $HIVE_HOME/lib and place hive-site.xml in $HIVE_HOME/conf. After this we will be able to store the metastore in MySQL.

To learn about the metastore tables, their fields and their relations, please look at this diagram:

[Diagram: Hive metastore tables, fields and relationships]

Here in this diagram we can find all our answers regarding the metastore: how the metastore stores information about databases and tables, and how these tables are internally connected with each other. From these tables we can read all the information related to our Hive tables.
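
For example, once the metastore lives in MySQL you can inspect these tables directly. Below is a minimal Scala sketch, assuming the connection settings from the hive-site.xml above; DBS and TBLS are the metastore tables that hold databases and tables.

import java.sql.DriverManager

object MetastoreInspector extends App {
  // Connection details as configured in hive-site.xml (adjust for your setup).
  val conn = DriverManager.getConnection("jdbc:mysql://localhost/metastore", "user", "password")
  try {
    // List every table known to the metastore together with its database and type.
    val rs = conn.createStatement().executeQuery(
      "SELECT d.NAME, t.TBL_NAME, t.TBL_TYPE FROM TBLS t JOIN DBS d ON t.DB_ID = d.DB_ID")
    while (rs.next())
      println(s"${rs.getString("NAME")}.${rs.getString("TBL_NAME")} (${rs.getString("TBL_TYPE")})")
  } finally conn.close()
}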

I hope this helps you develop a basic understanding of the metastore in Hive.

Thanks.

Reference:

MR diagram for Metastore

Hive Book




KnolX – Akka Streams


In one of our KnolX sessions, we discussed Akka Streams. It was a step-by-step introduction to Akka Streams: it started with the need for reactive streams and moved on to a discussion of the components of Akka Streams, error handling in Akka Streams and testing in Akka Streams.
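
As a tiny illustration of those components (a minimal sketch, not taken from the session itself; the names are placeholders), a stream is built from a Source, a Flow and a Sink:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

implicit val system = ActorSystem("knolx-streams")
implicit val materializer = ActorMaterializer()

val source = Source(1 to 10)            // emits elements
val double = Flow[Int].map(_ * 2)       // transforms elements
val sink   = Sink.foreach[Int](println) // consumes elements

source.via(double).runWith(sink)        // materialize and run the stream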

You may find the slides of the KnolX below:

You can also watch the video here:

 

 




KnolX – Introduction to Streaming in Apache Spark


Based on Apache Spark 1.6.0

Apache Spark provides a special API for stream processing. It allows users to write streaming jobs the same way they write batch jobs. It currently supports Java, Scala and Python.

Spark Streaming follows a “micro-batch” architecture. Spark Streaming receives data from various input sources and groups it into small batches; each batch is created over a particular time interval. At the beginning of each interval a new batch is created, and any data that arrives during that interval gets added to that batch. At the end of the interval the batch is completed. The user defines this interval with an argument called the batch interval, which is typically between 500 milliseconds and several seconds, as configured by the application developer. Each input batch forms an RDD and is processed using Spark jobs to create other RDDs.

[Diagram: Spark Streaming micro-batch architecture]

Spark Streaming provides an abstraction called DStreams, or discretized streams. A DStream is a sequence of data arriving over time. Internally, each DStream is represented as a sequence of RDDs arriving at each time step.

[Diagram: a DStream as a sequence of RDDs, one per batch interval]

As shown in the image above, the batch interval is defined as 1 second, so every second a new RDD is created and the DStream represents this sequence of RDDs.
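
For example (a minimal sketch; the application name and master URL are placeholders), a StreamingContext with a 1-second batch interval can be created as follows, and the later snippets in this post build on this streamingContext:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// A batch interval of 1 second: a new batch (and a new RDD in each DStream)
// is created every second.
val conf = new SparkConf().setAppName("StreamingExample").setMaster("local[2]")
val streamingContext = new StreamingContext(conf, Seconds(1))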

For fault tolerance, received data is copied to two nodes, so Spark Streaming can tolerate a single worker failure. Spark Streaming also provides a mechanism called checkpointing that saves state periodically to a file system (like HDFS or S3). Users typically set up these checkpoints every 5-10 batches of data, so in case of failure Spark Streaming can resume from the last checkpoint.

Transformations apply an operation to the current DStream and generate a new DStream.

Transformations on DStreams can be grouped into either stateless or stateful:
In stateless transformations the processing of each batch does not depend on the data of its previous batches.
Stateful transformations, in contrast, use data or intermediate results from previous batches to compute the results of the current batch. They include transformations based on sliding windows and on tracking state across time.

[Diagram: DStreams and transformations]

Stateless transformations are simple: you apply them just as you would apply transformations to RDDs. But for stateful transformations, the data of the current batch depends on previous batches.

The two main types of stateful transformations are:

  • Windowed operations
  • updateStateByKey

Windowed operations perform an operation across a longer time period than a single batch interval; they combine the results from multiple batches. All windowed operations take two parameters: the window duration and the sliding duration. Both must be multiples of the batch interval.

The window duration controls how many previous batches are considered for the operation, and the sliding duration, which defaults to the batch interval, controls how frequently the new DStream computes results.

val lines = streamingContext.socketTextStream("localhost", 9999)
val errorLines = lines.filter(_.contains("error"))
val windowedErrorLines = errorLines.window(Seconds(30), Seconds(10))

updateStateByKey helps us maintain state across batches by providing access to a state variable for DStreams of key/value pairs. Given a DStream of (key, event) pairs, it lets you construct a new DStream of (key, state) pairs by taking a function that specifies how to update the state for each key given new events. To use updateStateByKey, you provide a function update(events, oldState) that takes in the events that have arrived for a key and its previous state, and returns a new state to store for it. The result of updateStateByKey() is a new DStream that contains an RDD of (key, state) pairs for each time step.

val lineWithLength = lines.map{line => (line, line.length)}
lineWithLength.updateStateByKey(someFunction)
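
Here someFunction is a placeholder. A possible implementation (an assumption for illustration: the state kept for each line is a running total of the lengths seen for that key) could look like this; note that stateful transformations also require checkpointing to be enabled:

// newLengths are the values that arrived for a key in the current batch,
// runningTotal is the state carried over from previous batches.
def someFunction(newLengths: Seq[Int], runningTotal: Option[Int]): Option[Int] =
  Some(newLengths.sum + runningTotal.getOrElse(0))

streamingContext.checkpoint("checkpoint-dir") // required for stateful transformations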

Output operations (actions) specify what needs to be done with the final transformed data in a stream and are similar to actions on RDDs. One common debugging output operation is print(), which prints the first 10 elements of each batch of the DStream.
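
Putting it together (a minimal sketch continuing the earlier snippets), printing each batch and starting the streaming computation looks like this:

errorLines.print()                  // prints the first 10 elements of each batch
streamingContext.start()            // start receiving and processing data
streamingContext.awaitTermination() // block until the streaming job is stopped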

Some Tips

  • The minimum batch size Spark Streaming can use is 500 milliseconds; it has proven to be a good minimum size for many applications.
  • The best approach is to start with a larger batch size (around 10 seconds) and work your way down to a smaller batch size.
  • Receivers can sometimes act as a bottleneck if there are too many records for a single machine to read in and distribute. You can add more receivers by creating multiple input DStreams, and then applying union to merge them into a single stream.
  • If receivers cannot be increased anymore, you can further redistribute the received data by explicitly repartitioning the input stream using DStream.repartition.

You can find the video for the same below.




KnolX – Scal’a’ngular


In one of the previous KnolX sessions, we had a session on harnessing the power of Scala.js for front-end scripting. In this session we discussed a new term, Scal’a’ngular.

Scal’a’ngular is a term composed of Scala and AngularJS. As the name suggests, it is used to show the handshake and the relationship between Scala and AngularJS, and it is a smart way to build an AngularJS app without JavaScript, in a typesafe way.

You can find the slides for the presentation and the video below.




Understanding Support Vector Machines


[Contributed by Raghu from Knoldus, Canada]

One of the important and popular classification techniques among machine learning algorithms is Support Vector Machines. This is also called large margin classification. The Support Vector Machine technique results in a hyperplane that separates samples into two distinct classes and hence classifies them. SVM finds a plane that not only separates the samples but does so with the maximum separation possible, hence the name large margin classifier. A 2-dimensional depiction of this is shown in the picture below. This is the case of a linear SVM, where the decision boundary that separates the classes is linear.

[Figure: linear SVM separating two classes with the maximum margin]

Support Vector Machines also support classification where the decision boundary is non-linear. In this case, SVM uses a kernel. The most popular kernel used for non-linear decision problems is the Radial Basis Function kernel (RBF kernel for short). This is also called a Gaussian kernel. Below are two images that depict the working of an SVM with a Gaussian kernel, which performs classification using a non-linear decision boundary.

[Figure: SVM with a Gaussian (RBF) kernel producing a non-linear decision boundary]

One of the easiest ways to build an SVM is to use one of the SVM implementations available in many of the popular ML libraries for various languages. LIBSVM, scikit-learn and Spark ML are all examples of SVM implementations that are available to use. In this article, we will demonstrate a simple way to build an SVM, train it and then use it, with scikit-learn in Python.

The following listing shows a Python session

Continue reading


Services In Angular 2


Services are the building blocks that Angular provides for defining the business logic of our applications. In AngularJS 1.x, we had three different ways of defining services:

// The Factory method
module.factory('ServiceName', function (dep1, dep2, …) {
  return {
    // public API
  };
});

// The Service method
module.service('ServiceName', function (dep1, dep2, …) {
  // public API
  this.publicProp = val;
});

// The Provider method
module.provider('ServiceName', function () {
  return {
    $get: function (dep1, dep2, …) {
      return {
        // public API
      };
    }
  };
});

Although the first two syntactical variations provide similar functionality, they differ in the way the registered service will be instantiated. The third syntax allows further configuration of the registered provider during configuration time.

Having three different methods for defining services is quite confusing for the AngularJS 1.x beginners.

When a given service is required, AngularJS resolves all of its dependencies through the DI mechanism of the framework and instantiates it by passing them to the factory function, which encapsulates the logic for its creation. The factory function is passed as the second argument to the factory and service methods. The provider method allows definition of a service on a lower level; the factory method there is the one under the $get property of the provider.

Just like AngularJS 1.x, Angular 2 supports this separation of concerns as well; in Angular 2 we use injectable services.

A component that wishes to use an injectable service needs to receive it in one of its constructor’s parameters. Adding the private keyword before an injectable parameter creates a class member with the same name as the parameter and assigns the parameter to it.

The code example below demonstrates how to declare an injectable service and how to use it.

import { Injectable } from '@angular/core';

@Injectable()
export class MyService {
  items: Array<any>;

  constructor() {
    this.items = [
      {name: 'Christoph Burgdorf', degree: 'mca'},
      {name: 'Pascal Precht', degree: 'mca'}
    ];
  }

  getNames() {
    return this.items;
  }
}

As you can see above, this is simply an exported class decorated with the Injectable decorator.

Now, let’s use it in a component. First, we import it:

import {MyService} from 'path/to/myService'

Second, we need to instantiate it. The Angular 2 framework will do that for us once we put our service in a component’s providers array.

Providers: Having the constructor parameter alone doesn’t ensure that an instance will be passed to the constructor. To make sure an instance is created, we need to put the injectable service type in the providers array of the ViewMetadata:

@Component({ /*some ViewMetaData members */
   providers: [ MyService],
   /*some other ViewMetaData members */ });

And, last but not least, we have to inject the service in our component’s class constructor.

export class MyComponentClass {
  newItems: Array<any>;

  constructor(private nameService: MyService) {
    this.newItems = nameService.getNames();
  }
}



Security threats in web applications


Today, most security breaches online occur through the application rather than the server. The majority of web application attacks occur through cross-site scripting (XSS) and SQL injection attacks which typically result from flawed coding, and failure to sanitize input to and output from the web application.

In this blog I will discuss these two attacks and methods to counter them.

Cross Site Scripting (XSS)

Cross-site scripting (XSS) is an injection attack which is carried out on Web applications that accept input, but do not properly separate data and executable code before the input is delivered back to a user’s browser.

Like all injection attacks, XSS takes advantage of the fact that browsers can’t tell valid markup from attacker-controlled markup; they simply execute whatever markup text they receive. The attack circumvents the Same Origin Policy (SOP), a security measure used in Web browser programming languages such as JavaScript and Ajax.

Cross-site scripting (XSS) attacks bypass the same origin policy by tricking a site into delivering malicious code along with the intended content. This is a huge problem, as browsers trust all of the code that shows up on a page as being legitimately part of that page’s security origin.

Same Origin Policy requires everything on a Web page to come from the same source. When Same Origin Policy is not enforced, an attacker might inject a script and modify the Web page to suit his own purposes, perhaps to extract data that will allow the attacker to impersonate an authenticated user or perhaps to input malicious code for the browser to execute.

There are a number of security controls that can be used to reduce or entirely remove the threat of cross-site scripting. They include:

  • Input validation – determines if an end user’s input matches the expected format. For example, a browser-side script would not be expected in a phone number field.
  • Content Security Policy (CSP) – restricts which scripts can be run or loaded on a Web page.
  • Output encoding – tells the browser that certain characters it is going to receive should be treated as display text, rather than executable code (a small sketch of this idea follows the list).
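
As a minimal sketch of the output-encoding idea in Scala (htmlEncode is a hypothetical helper written for illustration, not part of any particular framework; Play's Twirl templates perform this kind of escaping automatically):

// Characters with special meaning in HTML are replaced with entities,
// so the browser renders them as text instead of interpreting them as markup.
def htmlEncode(input: String): String =
  input.map {
    case '<'  => "&lt;"
    case '>'  => "&gt;"
    case '&'  => "&amp;"
    case '"'  => "&quot;"
    case '\'' => "&#x27;"
    case c    => c.toString
  }.mkString

// htmlEncode("<script>alert('xss')</script>")
// => &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;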

Play provides a security headers filter that can be used to configure some default headers in the HTTP response to mitigate security issues. The Content-Security-Policy HTTP response header helps you reduce XSS risks on modern browsers by declaring, via an HTTP header, which dynamic resources are allowed to load.

Configuring Security Headers

  • Enabling the security headers filter: To enable the security headers filter, add the Play filters project to your libraryDependencies in build.sbt:
    libraryDependencies += filters
    

    Now add the security headers filter to your filters, which is typically done by creating a Filters class in the root of your project:

Filters.scala

import javax.inject.Inject

import play.api.http.DefaultHttpFilters

import play.filters.headers.SecurityHeadersFilter

class Filters @Inject() (securityHeadersFilter: SecurityHeadersFilter) extends DefaultHttpFilters(securityHeadersFilter)

The Filters class can either be in the root package, or if it has another name or is in another package, needs to be configured using play.http.filters in application.conf:

play.http.filters = "filters.MyFilters"

Configuring the security headers

The filter will set headers in the HTTP response automatically. The settings can be configured through the following settings in application.conf

  • play.filters.headers.frameOptions – sets X-Frame-Options, “DENY” by default.
  • play.filters.headers.xssProtection – sets X-XSS-Protection, “1; mode=block” by default.
  • play.filters.headers.contentTypeOptions – sets X-Content-Type-Options, “nosniff” by default.
  • play.filters.headers.permittedCrossDomainPolicies – sets X-Permitted-Cross-Domain-Policies, “master-only” by default.
  • play.filters.headers.contentSecurityPolicy – sets Content-Security-Policy, “default-src ‘self’” by default.

Any of the headers can be disabled by setting a configuration value of null, for example:

play.filters.headers.frameOptions = null

The Content-Security-Policy HTTP header allows you to create a whitelist of sources of trusted content, and instructs the browser to only execute or render resources from those sources.

CSP provides a rich set of policy directives that control over the resources that a page is allowed to load.

  • base-uri restricts the URLs that can appear in a page’s <base> element.
  • child-src lists the URLs for workers and embedded frame contents. For example: child-src https://youtube.com would enable embedding videos from YouTube but not from other origins. Use this in place of the deprecated frame-src directive.
  • connect-src limits the origins to which you can connect (via XHR, WebSockets, and EventSource).
  • font-src specifies the origins that can serve web fonts. Google’s Web Fonts could be enabled via font-src https://themes.googleusercontent.com
  • form-action lists valid endpoints for submission from <form> tags.
  • frame-ancestors specifies the sources that can embed the current page. This directive applies to <frame>, <iframe>, <embed>, and <applet> tags. This directive can’t be used in <meta> tags and applies only to non-HTML resources.
  • frame-src deprecated. Use child-src instead.
  • img-src defines the origins from which images can be loaded.
  • media-src restricts the origins allowed to deliver video and audio.
  • object-src allows control over Flash and other plugins.
  • plugin-types limits the kinds of plugins a page may invoke.
  • report-uri specifies a URL where a browser will send reports when a content security policy is violated. This directive can’t be used in <meta> tags.
  • style-src is script-src’s counterpart for stylesheets.
  • upgrade-insecure-requests instructs user agents to rewrite URL schemes, changing HTTP to HTTPS. This directive is for web sites with large numbers of old URLs that need to be rewritten.

The default-src directive defines the defaults for most directives you leave unspecified. Generally, this applies to any directive that ends with -src. If default-src is set to https://example.com and you fail to specify a font-src directive, then you can load fonts from https://example.com, and nowhere else.

Example:

Suppose we have a project in which we want to load an image from some other domain, but as we have set the CSP header to 'self', we are only allowed to load images from the same domain.

[Screenshot: image from another domain blocked by the default-src 'self' policy]

So in this case we can use the img-src directive and set its value as

play.filters.headers.contentSecurityPolicy = "default-src 'self'; img-src 'self' www.gettyimages.ca"

where http://www.gettyimages.ca is the other domain from which to load the image.

[Screenshot: image loaded after adding the domain to img-src]

SQL injections

SQL injection is a type of security exploit in which the attacker adds Structured Query Language (SQL) code to a Web form input box to gain access to resources or make changes to data. An SQL query is a request for some action to be performed on a database. Typically, on a Web form for user authentication, when a user enters their name and password into the text boxes provided for them, those values are inserted into a SELECT query. If the values entered are found as expected, the user is allowed access; if they aren’t found, access is denied. However, most Web forms have no mechanisms in place to block input other than names and passwords. Unless such precautions are taken, an attacker can use the input boxes to send their own request to the database, which could allow them to download the entire database or interact with it in other illicit ways.

Imagine a simple Web site set up by a package delivery company to provide delivery status information to anyone who knows the tracking number associated with a particular package. The application may simply ask the user for the tracking number and then look it up in a database table using the following SQL code:

SELECT *

FROM Shipments

WHERE TrackingID='@tracking'

 

Where @tracking is a variable passed in from the Web application. Under normal circumstances, this application may function perfectly normally. For example, if a user enters the tracking number 1A2123ZC2, the corresponding query would be:

SELECT *

FROM Shipments

WHERE TrackingID='1A2123ZC2'


That ideal situation makes one flawed assumption — that the user will only enter a valid tracking number. Malicious individuals are not likely to be so cooperative. Suppose that the user instead enters the string shown below in the tracking number field:
1A2123ZC2' or true

The corresponding query will now be:

SELECT *

FROM Shipments

WHERE TrackingID='1A2123ZC2' or true

This will have the unintended consequence of retrieving all of the tracking information stored in the database. Now assume that we have an even more malicious user who enters the following string:
1A2123ZC2'; DELETE FROM Shipments

This would cause the database to execute the following query:

SELECT *

FROM Shipments

WHERE TrackingID='1A2123ZC2';

DELETE FROM Shipments

This would have the clearly undesirable result of deleting all of the tracking information from the database!

There are several steps that you can take to reduce the possibility of a SQL injection attack against your database:

  • Escape single quotation marks. Include code within your Web applications that replaces single apostrophes with double apostrophes. This will force the database server to recognize the apostrophe as a literal character rather than a string delimiter.
  • Limit the privileges available to the account that executes Web application code. In the example above, if the account only had permission to perform the intended action (retrieving records from the Shipments table), the deletion would not be possible.
  • Reduce or eliminate debugging information. When an error condition occurs on your server, the Web user should not see technical details of the error. This type of information could aid an intruder seeking to explore the structure of your database.
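
In addition to these steps, using a parameterized (prepared) query ensures that user input is always treated as data rather than as SQL. A minimal Scala sketch with plain JDBC follows; the Connection and the Shipments schema follow the example above, and this is an illustration rather than code from the application described:

import java.sql.{Connection, ResultSet}

// The tracking number is bound as a parameter, so input such as
// "1A2123ZC2'; DELETE FROM Shipments" is just a harmless literal string.
def findShipment(conn: Connection, tracking: String): ResultSet = {
  val stmt = conn.prepareStatement("SELECT * FROM Shipments WHERE TrackingID = ?")
  stmt.setString(1, tracking)
  stmt.executeQuery()
}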

References:

Here is the link to the GitHub repository for the XSS example:

PlaySecurityProject




Javascript Style Checker


Despite many years of experience, people still type variable names incorrectly, make syntax errors, forget to handle errors properly and, in a hurry, forget about best practices. But it is important to write quality code. A good linting tool, or linter, helps to catch code errors and enforce a standard style before someone wastes their own time, or worse, the client’s time.

First of all, I will describe some standard JavaScript rules, then the lint tools available to check them, and how these tools can be integrated with IntelliJ IDEA.

Some Standard JavaScript rules.

  • Use 2 spaces for indentation

  function hello (name) {
    console.log('hi', name)
 }
  • Use single quotes for strings except to avoid escaping.

  console.log('hello there')
  $("<div class='box'>")
  • No unused variables

  function myFunction () {
    var result = something()                    // avoid
  }
    • Add a space after keywords.

  if (condition) { ... }                       // OK
  if(condition) { ... }                        // Avoid
    • Add a space before a function declaration’s parentheses.

  function name (arg) { ... }                 // OK
  function name(arg) { ... }                  // Avoid
    • Always use === instead of ==.

Exception: obj == null is allowed to check for null || undefined.
  if (name === 'John')                          //OK
  if (name == 'John')                           //Avoid
  if (name !== 'John')                          // OK
  if (name != 'John')                           //Avoid
    • Infix operators must be spaced

  var message = 'hello, ' + name + '!'         // OK
  var message = 'hello, ' +name+'!'            // Avoid
    • Commas should have a space after them

  var list = [1, 2, 3, 4]                       // OK
  var list = [1,2,3,4]                          // Avoid
    • Always handle the err function parameter

OK
  run(function (err) {
    if (err) throw err
    window.alert('done')
  })

Avoid
  run(function (err) {
    window.alert('done')
  })

Lint Tools

A good linting tool can also help make sure a project adheres to a coding standard, and it helps to avoid silly mistakes when writing JavaScript.

Different Lint Tools

Different tools available are:

  • JSLint

  • JSHint

  • JSCS

  • ESLint

JSLint

Pros

  • Comes configured and ready to go (if you agree with the rules it enforces)

Cons

  1. JSLint doesn’t have a configuration file, which can be problematic if you need to change the settings

  2. Limited number of configuration options, many rules cannot be disabled

  3. You can’t add custom rules

  4. Undocumented features

  5. Difficult to know which rule is causing which error

JSHint

JSHint was created as a more configurable version of JSLint (of which it is a fork). You can configure every rule, and put them into a configuration file, which makes JSHint easy to use in bigger projects.

Pros

  1. Most settings can be configured

  2. Supports a configuration file, making it easier to use in larger projects

  3. Has support for many libraries out of the box, like jQuery, QUnit, NodeJS, Mocha, etc.

Cons

  1. Difficult to know which rule is causing an error.

  2. Has two types of options: enforcing and relaxing (which can be used to make JSHint stricter, or to suppress its warnings). This can make configuration slightly confusing.

  3. No custom rule support

JSCS

JSCS is different from the others in that it doesn’t do anything unless you give it a configuration file or tell it to use a preset.

Pros

  1. Ready-made configuration files can make it easy to set up if you follow one of the available coding styles

  2. Has a flag to include rule names in reports, so it’s easy to figure out which rule is causing which error

  3. Can be extended with custom plugins

Cons

  1. Only detects coding style violations. JSCS doesn’t detect potential bugs such as unused variables, or accidental globals, etc.

  2. Slowest of the four, but this is not a problem in typical use

ESLint

Pros

  1. Flexible: any rule can be toggled, and many rules have extra settings that can be tweaked

  2. Very extensible and has many plugins available

  3. Easy to understand output

  4. Includes many rules not available in other linters, making ESLint more useful for detecting problems

Cons

  1. Some configuration required.

  2. Slow, but not a hindrance.

After considering the pros and cons, I would suggest using ESLint. Even the JSCS and ESLint teams have agreed to work on ESLint together instead of competing with each other.

How to use?

Three requirements:

1) Node.js is required.

2) ESLint must be included as part of your project’s build system (i.e. ESLint must be installed).

3) You should then set up a configuration file (a .eslintrc.json file).

To run ESLint using IntelliJ IDEA:

  • File | Settings | Languages and Frameworks | JavaScript | Code Quality Tools | ESLint
  • Support for displaying ESLint warnings as IntelliJ inspections

[Screenshot: ESLint settings in IntelliJ IDEA]

Configuring Rules:

(Rules can be added in .eslintrc.json):

{
  "rules": {
    "semi": ["error", "always"],
    "quotes": ["error", "double"]
  }
}

The names “semi” and “quotes” are the names of rules in ESLint. The first value is the error level of the rule and can be one of these values:

  • “off” or 0 – turn the rule off

  • “warn” or 1 – turn the rule on as a warning (doesn’t affect exit code)

  • “error” or 2 – turn the rule on as an error (exit code will be 1)

Sample .eslintrc.json file :

{
    "extends": "google",
    "installedESLint": true,
    "rules": {
    "accessor-pairs": 2,
    "arrow-spacing": [2, { "before": true, "after": true }],
    "block-spacing": [2, "always"],
    "brace-style": [2, "1tbs", { "allowSingleLine": true }],
    "camelcase": [2, { "properties": "never" }],
    "comma-dangle": [2, "never"],
    "comma-spacing": [2, { "before": false, "after": true }],
    "comma-style": [2, "last"],
    "constructor-super": 2,
    "curly": [2, "multi-line"],
    "dot-location": [2, "property"],
    "eol-last": 2,
    "eqeqeq": [2, "allow-null"],
    "handle-callback-err": [2, "^(err|error)$" ],
    "indent": [2, 2, { "SwitchCase": 1 }],
    "key-spacing": [2, { "beforeColon": false, "afterColon": true }],
    "keyword-spacing": [2, { "before": true, "after": true }],
    "new-cap": [2, { "newIsCap": true, "capIsNew": false }],
    "new-parens": 2,
    "no-array-constructor": 2,
    "no-caller": 2,
    "no-class-assign": 2,
    "no-cond-assign": 2,
    "no-const-assign": 2,
    "no-constant-condition": [2, { "checkLoops": false }],
    "no-control-regex": 2,
    "no-debugger": 2,
    "no-delete-var": 2,
    "no-dupe-args": 2,
    "no-dupe-class-members": 2,
    "no-dupe-keys": 2,
    "no-duplicate-case": 2,
    "no-duplicate-imports": 2,
    "no-empty-character-class": 2,
    "no-empty-pattern": 2,
    "no-eval": 2,
    "no-ex-assign": 2,
    "no-extend-native": 2,
    "no-extra-bind": 2,
    "no-extra-boolean-cast": 2,
    "no-extra-parens": [2, "functions"],
    "no-fallthrough": 2,
    "no-floating-decimal": 2,
    "no-func-assign": 2,
    "no-implied-eval": 2,
    "no-inner-declarations": [2, "functions"],
    "no-invalid-regexp": 2,
    "no-irregular-whitespace": 2,
    "no-iterator": 2,
    "no-label-var": 2,
    "no-labels": [2, { "allowLoop": false, "allowSwitch": false }],
    "no-lone-blocks": 2,
    "no-mixed-spaces-and-tabs": 2,
    "no-multi-spaces": 2,
    "no-multi-str": 2,
    "no-multiple-empty-lines": [2, { "max": 1 }],
    "no-native-reassign": 2,
    "no-negated-in-lhs": 2,
    "no-new": 2,
    "no-new-func": 2,
    "no-new-object": 2,
    "no-new-require": 2,
    "no-new-symbol": 2,
    "no-new-wrappers": 2,
    "no-obj-calls": 2,
    "no-octal": 2,
    "no-octal-escape": 2,
    "no-path-concat": 2,
    "no-proto": 2,
    "no-redeclare": 2,
    "no-regex-spaces": 2,
    "no-return-assign": [2, "except-parens"],
    "no-self-assign": 2,
    "no-self-compare": 2,
    "no-sequences": 2,
    "no-shadow-restricted-names": 2,
    "no-spaced-func": 2,
    "no-sparse-arrays": 2,
    "no-this-before-super": 2,
    "no-throw-literal": 2,
    "no-trailing-spaces": 2,
    "no-undef": 2,
    "no-undef-init": 2,
    "no-unexpected-multiline": 2,
    "no-unmodified-loop-condition": 2,
    "no-unneeded-ternary": [2, { "defaultAssignment": false }],
    "no-unreachable": 2,
    "no-unsafe-finally": 2,
    "no-unused-vars": [2, { "vars": "all", "args": "none" }],
    "no-useless-call": 2,
    "no-useless-computed-key": 2,
    "no-useless-constructor": 2,
    "no-useless-escape": 2,
    "no-useless-rename": 2,
    "no-whitespace-before-property": 2,
    "no-with": 2,
    "object-property-newline": [2, { "allowMultiplePropertiesPerLine": true }],
    "one-var": [2, { "initialized": "never" }],
    "operator-linebreak": [2, "after", { "overrides": { "?": "before", ":": "before" } }],
    "padded-blocks": [2, "never"],
    "quotes": [2, "single", { "avoidEscape": true, "allowTemplateLiterals": true }],
    "rest-spread-spacing": [2, "never"],
    "semi": [2, "never"],
    "semi-spacing": [2, { "before": false, "after": true }],
    "space-before-blocks": [2, "always"],
    "space-before-function-paren": [2, "always"],
    "space-in-parens": [2, "never"],
    "space-infix-ops": 2,
    "space-unary-ops": [2, { "words": true, "nonwords": false }],
    "spaced-comment": [2, "always", { "line": { "markers": ["*package", "!", ","] }, "block": { "balanced": true, "markers": ["*package", "!", ","], "exceptions": ["*"] } }],
    "template-curly-spacing": [2, "never"],
    "unicode-bom": [2, "never"],
    "use-isnan": 2,
    "valid-typeof": 2,
    "wrap-iife": [2, "any"],
    "yield-star-spacing": [2, "both"],
    "yoda": [2, "never"],

    "standard/object-curly-even-spacing": [2, "either"],
    "standard/array-bracket-even-spacing": [2, "either"],
    "standard/computed-property-even-spacing": [2, "even"],

    "promise/param-names": 2
  }
}

References:

1) https://github.com/eslint/eslint

2) https://www.sitepoint.com/comparison-javascript-linting-tools/




Knolx – Introduction to ScalaJS


Hi All,
Knoldus had organized a session on “Introduction to ScalaJS”; here are the slides of the session.
Let me know if you have any queries.

 

 

You can also watch the video here

 




Knolx – An Introduction to Quill


Hi All,

Knoldus organized a session on 5th August 2016 at 5 PM. The topic was Introduction to Quill. Many people joined and learned from the session. I am sharing the slides of the session here. Please let me know if you have any questions related to the linked slides.

 

You can also watch the video here

 

Happy coding…!!!


