Call for Abstracts

The 7th International Conference on Big Data Analytics & Data Mining will be organized around the theme "Modern Technologies and Challenges in Big Data".

Data Analytics 2018 comprises 25 tracks and 133 sessions designed to offer comprehensive coverage of current issues in big data analytics and data mining.

Submit your abstract to any of the tracks listed below. All related abstracts will be considered.

Register now for the conference by choosing a package that suits you.

Big data refers to data sets so large and complex that traditional data processing software is inadequate to deal with them. Big data challenges include data capture, storage, analysis, search, sharing, transfer, visualization, querying, updating, and information privacy. Big data is commonly characterized along three dimensions: volume, variety, and velocity.

  • Track 1-1: Big Data Analytics Adoption
  • Track 1-2: Benefits of Big Data Analytics
  • Track 1-3: Barriers to Big Data Analytics
  • Track 1-4: Volume Growth of Analytic Big Data
  • Track 1-5: Managing Analytic Big Data
  • Track 1-6: Data Types for Big Data

Big data brings opportunities as well as challenges. Traditional data processing has been unable to meet the massive real-time demands of big data; we need a new generation of information technology to deal with its emergence.

  • Track 2-1: Big data storage architecture
  • Track 2-2: GEOSS Clearinghouse
  • Track 2-3: Distributed and parallel computing
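
As a minimal sketch of the map-reduce idea behind distributed and parallel processing (function names are illustrative; threads stand in here for the worker nodes of a real cluster):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_words(chunk):
    """Map step: word counts for one partition of the input."""
    return Counter(word for line in chunk for word in line.split())

def parallel_word_count(lines, workers=4):
    """Partition the input, count each partition concurrently, then
    reduce the partial counts into a single result (map-reduce style)."""
    chunks = [lines[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(count_words, chunks)
    total = Counter()
    for partial in partials:
        total += partial
    return total

data = ["big data needs parallel processing",
        "parallel processing splits big data"]
print(parallel_word_count(data)["data"])  # → 2
```

The same partition/map/reduce structure underlies frameworks such as Hadoop and Spark, where partitions live on different machines rather than in one process.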

Big data is data so large that it does not fit in the main memory of a single machine, and the need to process big data with efficient algorithms arises in Internet search, network traffic monitoring, machine learning, scientific computing, signal processing, and several other areas. This track covers mathematically rigorous models for designing such algorithms, and some provable limitations of algorithms operating in those models.

  • Track 3-1: Data Stream Algorithms
  • Track 3-2: Randomized Algorithms for Matrices and Data
  • Track 3-3: Algorithmic Techniques for Big Data Analysis
  • Track 3-4: Models of Computation for Massive Data
  • Track 3-5: The Modern Algorithmic Toolbox
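
A classic data stream algorithm is the Misra-Gries frequent-elements summary, sketched below (the helper name is ours): it finds candidate heavy hitters in a single pass while keeping only a bounded number of counters, regardless of stream length.

```python
def misra_gries(stream, k):
    """Misra-Gries summary: one pass, at most k - 1 counters. Every item
    occurring more than len(stream)/k times survives as a candidate."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
print(misra_gries(stream, k=3))  # "a" survives as the dominant item
```

A second pass over the stream can verify the exact counts of the surviving candidates.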

Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Applications of big data include big data analytics in enterprises, big data trends in the retail and travel industries, the current and future state of the big data market, financial aspects of the big data industry, big data in clinical and healthcare settings, big data in regulated industries, big data in biomedicine, and multimedia and personal data mining.

  • Track 4-1: Finance and fraud services
  • Track 4-2: Security and privacy
  • Track 4-3: Manufacturing
  • Track 4-4: Telecommunication
  • Track 4-5: E-Government
  • Track 4-6: Public administration
  • Track 4-7: Big Data Analytics in Enterprises
  • Track 4-8: Retail / Consumer
  • Track 4-9: Travel Industry
  • Track 4-10: Current and future scenario of Big Data Market
  • Track 4-11: Financial aspects of Big Data Industry
  • Track 4-12: Clinical and healthcare
  • Track 4-13: Regulated Industries
  • Track 4-14: Biomedicine
  • Track 4-15: Web and digital media

The Internet of Things (IoT) is the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and network connectivity which enable these objects to connect and exchange data. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure. "Things", in the IoT sense, can refer to a wide variety of devices such as heart monitoring implants, biochip transponders on farm animals, cameras streaming live feeds of wild animals in coastal waters, automobiles with built-in sensors, DNA analysis devices for environmental/food/pathogen monitoring, or field operation devices that assist firefighters in search and rescue operations.

  • Track 5-1: Medical and healthcare
  • Track 5-2: Transportation
  • Track 5-3: Environmental monitoring
  • Track 5-4: Infrastructure Management
  • Track 5-5: Enterprise
  • Track 5-6: Consumer application

The era of big data is here: data sets of immense size are becoming ubiquitous. With this comes the need to solve optimization problems of unprecedented scale. Machine learning, compressed sensing, social network science, and computational biology are some of several prominent application areas where it is easy to formulate optimization problems with millions or billions of variables. Classical optimization algorithms are not designed to scale to instances of this size; new approaches are needed. This track aims to bring together researchers working on novel optimization algorithms and codes capable of working in the big data setting.

  • Track 6-1: Computational problems in magnetic resonance imaging
  • Track 6-2: Optimization of big data in mobile networks
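
One reason new approaches scale where classical ones do not is that methods such as stochastic gradient descent touch a single sample per update, so their memory footprint is constant in the data set size. Below is a toy 1-D least-squares sketch (all names, data, and parameters are illustrative, not a production solver):

```python
import random

def sgd_least_squares(xs, ys, lr=0.01, epochs=1000, seed=0):
    """Minimal stochastic gradient descent for 1-D least squares
    (fit y ≈ w*x + b). Each update touches one sample, so memory
    use stays constant no matter how large the data set grows."""
    random.seed(seed)
    w, b = 0.0, 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        random.shuffle(idx)
        for i in idx:
            err = (w * xs[i] + b) - ys[i]
            w -= lr * err * xs[i]  # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err          # gradient of 0.5*err**2 w.r.t. b
    return w, b

# Noise-free data generated from y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = sgd_least_squares(xs, ys)
```

After training, w and b should be close to the generating values 2 and 1; real large-scale variants add mini-batching, step-size schedules, and parallel or distributed updates.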

Data mining applications in engineering and medicine aim to help data miners who wish to apply data mining techniques in specialized environments. These applications include data mining systems in financial market analysis, applications of data mining in education, data mining and web applications, medical data mining, data mining in healthcare, engineering data mining, data mining in security, social data mining, and neural networks and data mining.

  • Track 7-1: Data mining systems in financial market analysis
  • Track 7-2: Application of data mining in education
  • Track 7-3: Data mining and processing in bioinformatics, genomics and biometrics
  • Track 7-4: Advanced Database and Web Application
  • Track 7-5: Medical Data Mining
  • Track 7-6: Data Mining in Healthcare data
  • Track 7-7: Engineering data mining
  • Track 7-8: Data mining in security
  • Track 7-9: High performance data mining algorithms
  • Track 7-10: Methodologies on large-scale data mining

With advances in technologies, nurse scientists are increasingly generating and using large and complex datasets, sometimes called “Big Data,” to promote and improve the health of individuals, families, and communities. In recent years, the National Institutes of Health have placed a great emphasis on enhancing and integrating the data sciences into the health research enterprise.  New strategies for collecting and analysing large data sets will allow us to better understand the biological, genetic, and behavioural underpinnings of health, and to improve the way we prevent and manage illness.

  • Track 8-1: Big data in nursing inquiry
  • Track 8-2: Methods, tools and processes used with big data with relevance to nursing
  • Track 8-3: Big Data and Nursing Practice

Cloud computing is a type of Internet-based computing that provides shared processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in third-party data centers. It relies on sharing of resources to achieve coherence and economies of scale, like a utility over a network.

  • Track 9-1: Microsoft Azure Cloud Computing
  • Track 9-2: Amazon Web Services
  • Track 9-3: Google Cloud
  • Track 9-4: E-commerce and customer service
  • Track 9-5: Cloud Computing Applications
  • Track 9-6: Emerging Cloud Computing Technology
  • Track 9-7: Cloud Automation and Optimization
  • Track 9-8: High Performance Computing (HPC)
  • Track 9-9: Mobile Cloud Computing

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning is closely related to computational statistics, which also focuses on prediction-making through the use of computers. Within the field of data analytics, machine learning is used to devise complex models and algorithms that lend themselves to prediction in commercial use; this is known as predictive analytics.

  • Track 10-1: Machine learning and statistics
  • Track 10-2: Machine learning tools and techniques
  • Track 10-3: Bayesian networks
  • Track 10-4: Fielded applications
  • Track 10-5: Generalization as search
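
A minimal sketch of predictive analytics in this sense is a nearest-neighbour classifier, which predicts a label for a new observation from the most similar training example (the function name and toy data are ours, for illustration only):

```python
import math

def predict_1nn(train, query):
    """1-nearest-neighbour prediction: label the query point with the
    class of the closest training example (Euclidean distance)."""
    features, label = min(train, key=lambda ex: math.dist(ex[0], query))
    return label

# Toy training set: (feature vector, class label) pairs.
train = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
         ((8.0, 9.0), "high"), ((9.0, 8.5), "high")]
print(predict_1nn(train, (1.1, 0.9)))  # → low
```

The model is "learned" entirely from data: no rule mapping features to labels is ever written by hand, which is the sense in which the computer learns without being explicitly programmed.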

Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think. AI is achieved by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.

  • Track 11-1: Cybernetics
  • Track 11-2: Artificial creativity
  • Track 11-3: Artificial neural networks
  • Track 11-4: Adaptive Systems
  • Track 11-5: Ontologies and Knowledge sharing

Data mining tools and software cover topics including big data security and privacy, predictive analytics in machine learning and data mining, and interfaces to database systems and software systems.

  • Track 12-1: Big Data Security and Privacy
  • Track 12-2: E-commerce and Web services
  • Track 12-3: Medical informatics
  • Track 12-4: Visualization Analytics for Big Data
  • Track 12-5: Predictive Analytics in Machine Learning and Data Mining
  • Track 12-6: Interface to Database Systems and Software Systems

Social network analysis (SNA) is the process of investigating social structures through the use of network and graph theory. It characterizes networked structures in terms of nodes (individual actors, people, or things within the network) and the ties or edges (relationships or interactions) that connect them.

  • Track 13-1: Networks and relations
  • Track 13-2: Development of social network analysis
  • Track 13-3: Analyzing relational data
  • Track 13-4: Dimensions and displays
  • Track 13-5: Positions, sets and clusters
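
The node-and-tie view can be made concrete with a small sketch: degree centrality measures, for each node, the share of other nodes it is directly tied to (names and toy data are ours):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality for an undirected graph given as a
    list of (node, node) ties: degree divided by (n - 1)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

ties = [("ann", "bob"), ("ann", "cat"), ("ann", "dan"), ("bob", "cat")]
centrality = degree_centrality(ties)
print(max(centrality, key=centrality.get))  # → ann
```

Richer SNA measures (betweenness, closeness, eigenvector centrality) follow the same pattern of computing structural scores over the node/edge representation.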

A data mining task can be specified as a data mining query, which is defined in terms of data mining task primitives. This track includes competitive analysis of mining algorithms, semantic-based data mining and data pre-processing, mining on data streams, graph and sub-graph mining, scalable data pre-processing and cleaning techniques, statistical methods in data mining, and predictive analytics.

  • Track 14-1: Competitive analysis of mining algorithms
  • Track 14-2: Computational Modelling and Data Integration
  • Track 14-3: Semantic-based Data Mining and Data Pre-processing
  • Track 14-4: Mining on data streams
  • Track 14-5: Graph and sub-graph mining
  • Track 14-6: Scalable data pre-processing and cleaning techniques
  • Track 14-7: Statistical Methods in Data Mining

Data mining systems and algorithms form an interdisciplinary subfield of computer science: the computational process of discovering patterns in large data sets. Topics include big data search and mining, novel theoretical models for big data, high-performance data mining algorithms, methodologies for large-scale data mining, big data analysis, and data mining analytics.

  • Track 15-1: Novel Theoretical Models for Big Data
  • Track 15-2: New Computational Models for Big Data
  • Track 15-3: Empirical study of data mining algorithms

The fundamental algorithms in data mining and analysis form the basis for the emerging field of data science, which includes automated methods to analyze patterns and models for all kinds of data, with applications ranging from scientific discovery to business intelligence and analytics.

  • Track 16-1: Numeric attributes
  • Track 16-2: Categorical attributes
  • Track 16-3: Graph data

Clustering can be considered the most important unsupervised learning problem; like every other problem of this kind, it deals with finding structure in a collection of unlabelled data. A loose definition of clustering is the process of organizing objects into groups whose members are similar in some way.

  • Track 17-1: Hierarchical clustering
  • Track 17-2: Density Based Clustering
  • Track 17-3: Spectral and Graph Clustering
  • Track 17-4: Clustering Validation
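
A minimal sketch of the clustering idea, using the standard k-means procedure with a simple deterministic farthest-first seeding (all names and toy data are ours):

```python
import math

def farthest_first(points, k):
    """Deterministic seeding: start from the first point, then repeatedly
    add the point farthest from all chosen centroids."""
    centroids = [points[0]]
    while len(centroids) < k:
        nxt = max(points,
                  key=lambda p: min(math.dist(p, c) for c in centroids))
        centroids.append(nxt)
    return centroids

def kmeans(points, k, iters=20):
    """Plain k-means: assign every point to its nearest centroid, then
    move each centroid to the mean of its assigned points."""
    centroids = farthest_first(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[j] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, clusters

points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
          (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
centroids, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

The two tight blobs are recovered without any labels, which is exactly the "finding structure in unlabelled data" described above.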

Cybersecurity, also known as computer security, is the technology designed to protect computer systems, including programs and data, from damage or unauthorized access. This includes preventive measures against cyber terrorism, supported by high-performance computing.

  • Track 18-1: Counter measures to combat cyber terrorism
  • Track 18-2: Cyber security for critical infrastructures and high performance computing
  • Track 18-3: Security/privacy technologies
  • Track 18-4: Personal identity verification
  • Track 18-5: Human activity recognition

Data visualization is viewed by many disciplines as a modern equivalent of visual communication. It is not owned by any one field, but rather finds interpretation across many. It encompasses the preparation and study of the visual representation of data, meaning "information that has been abstracted in some schematic form, including attributes or variables for the units of information".

  • Track 19-1: Analysis data for visualization
  • Track 19-2: Scalar visualization techniques
  • Track 19-3: Framework for flow visualization
  • Track 19-4: System aspects of visualization applications
  • Track 19-5: Future trends in scientific visualization

Business analytics is the exploration of data through statistical and operations analysis, the formation of predictive models, the application of optimization techniques, and the communication of these results to customers, business partners, and colleague executives. It is the intersection of business and data science.

  • Track 20-1: Emerging phenomena
  • Track 20-2: Technology drives and business analytics
  • Track 20-3: Capitalizing on a growing marketing opportunity

Over recent decades there has been an enormous increase in the amount of data stored in databases and in the number of database applications in business and the scientific domain. This explosion in the amount of electronically stored data was accelerated by the success of the relational model for storing data and the development and maturing of data retrieval and manipulation technologies.

  • Track 21-1: Multifaceted and task-driven search
  • Track 21-2: Personalized search and ranking
  • Track 21-3: Data, entity, event, and relationship extraction
  • Track 21-4: Data integration and data cleaning
  • Track 21-5: Opinion mining and sentiment analysis

In our e-world, data privacy and cyber security have become common terms. In business, we have an obligation to protect our clients' data, which has been acquired with their express consent solely for their use. That is an important point, if not immediately obvious. There has been a lot of talk lately about Google's new privacy policies, and the discussion quickly spreads to other Internet giants such as Facebook and how they likewise handle and treat our personal data.

  • Track 22-1: Data encryption
  • Track 22-2: Data Hiding
  • Track 22-3: Public key cryptography
  • Track 22-4: Quantum Cryptography
  • Track 22-5: Convolution
  • Track 22-6: Hashing
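
To illustrate the hashing topic above: a cryptographic hash such as SHA-256 maps any message to a fixed-size digest, and any change to the message yields a completely different digest (a minimal sketch using Python's standard hashlib; the function name is ours):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """One-way fingerprint of a message: SHA-256 digest as hex. Finding
    a different message with the same digest is computationally hard."""
    return hashlib.sha256(data).hexdigest()

d1 = fingerprint(b"transfer $100 to alice")
d2 = fingerprint(b"transfer $900 to alice")
print(len(d1), d1 == d2)  # → 64 False
```

This one-way, tamper-evident property is what makes hashes useful for password storage, integrity checks, and digital signatures.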

A frequent pattern is a pattern that occurs frequently in a data set. The concept was originally proposed by [AIS93] in the context of frequent item sets and association rule mining for market basket analysis, and has since been extended to many kinds of problems such as graph mining, sequential pattern mining, time-series pattern mining, and text mining.

  • Track 23-1: Frequent item sets and association
  • Track 23-2: Item Set Mining Algorithms
  • Track 23-3: Graph Pattern Mining
  • Track 23-4: Pattern and Role Assessment
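
A minimal support-counting sketch of frequent item set mining for market basket data (this enumerates 1- and 2-item sets only; it is not a full Apriori implementation, and all names and data are ours):

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(baskets, minsup):
    """Count support for all 1- and 2-item sets across the baskets and
    keep those reaching the minimum support threshold."""
    counts = Counter()
    for basket in baskets:
        items = sorted(set(basket))
        for item in items:
            counts[(item,)] += 1
        for pair in combinations(items, 2):
            counts[pair] += 1
    return {itemset: n for itemset, n in counts.items() if n >= minsup}

baskets = [["milk", "bread"], ["milk", "bread", "eggs"],
           ["bread", "eggs"], ["milk", "bread"]]
print(frequent_itemsets(baskets, minsup=3))
```

Here {bread, milk} reaches support 3 out of 4 baskets; a full Apriori algorithm would extend only frequent sets to larger candidate sizes instead of enumerating everything.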

Big data is a pervasive phenomenon and one of the most frequently discussed topics of the present age, and it is expected to remain so for the foreseeable future. Skills, hardware and software, algorithm architecture, statistical significance, the signal-to-noise ratio, and the nature of big data itself are identified as the major challenges hindering the process of obtaining meaningful forecasts from big data.

  • Track 24-1: Challenges for Forecasting with Big Data
  • Track 24-2: Applications of Statistical and Data Mining Techniques for Big Data Forecasting
  • Track 24-3: Forecasting the Michigan Confidence Index
  • Track 24-4: Forecasting targets and characteristics
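
One of the simplest forecasting baselines, against which big-data methods are often judged, is simple exponential smoothing, which discounts older observations geometrically (a toy sketch; the name and data are illustrative):

```python
def exp_smooth_forecast(series, alpha=0.5):
    """Simple exponential smoothing: the one-step-ahead forecast is a
    weighted average of past values whose weights decay geometrically
    with age, controlled by the smoothing factor alpha in (0, 1]."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # forecast for the next period

demand = [10.0, 12.0, 11.0, 13.0, 12.0]
print(exp_smooth_forecast(demand))  # → 12.0
```

With alpha near 1 the forecast tracks the most recent observation; with alpha near 0 it averages over a long history, a trade-off directly related to the signal-to-noise issues mentioned above.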

Open data is the idea that some data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents, or other mechanisms of control. The goals of the open data movement are similar to those of other "open" movements, such as open source, open hardware, open content, and open access.

  • Track 25-1: Open Data, Government and Governance
  • Track 25-2: Open Development and Sustainability
  • Track 25-3: Open Science and Research
  • Track 25-4: Technology, Tools and Business