All dependencies with the dependency hierarchy flattened
Hash computed from the dataset; can be overridden to include inputs other than the CRC
Define the DQM rules, fixes and policies to be applied to this DataSet.
See org.tresamigos.smv.dqm, org.tresamigos.smv.dqm.DQMRule, and org.tresamigos.smv.dqm.DQMFix
for details on creating rules and fixes.
Concrete modules and files should override this method to define rules/fixes to apply. The default is to provide an empty set of DQM rules/fixes.
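As a hedged sketch of such an override, based on SMV's dqm API (the rule/fix names and the column conditions below are illustrative, not part of this document):

```scala
import org.tresamigos.smv.dqm._
import org.apache.spark.sql.functions._

// Sketch only: columns "amt" and "age" are hypothetical.
override def dqm(): SmvDQM =
  SmvDQM()
    .add(DQMRule($"amt" < 1000000.0, "amt_cap_rule"))        // flag rows that violate the rule
    .add(DQMFix($"age" > 100, lit(100) as "age", "age_fix")) // rewrite offending values in place
```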
DataSet type: one of four values, Input, Link, Module, Output
Exports a DataFrame to a Hive table.
Names the persisted file for the result of this SmvDataSet
The FQN of an SmvDataSet is its classname for Scala implementations.
Scala proxies for implementations in other languages must override this to name the proxied FQN.
TODO: remove this method as checkDependency replaced this function
Hash computed based on instance values of the dataset, such as the timestamp of an input file
Flag indicating that this module is ephemeral (short-lived), so that it will not be persisted when a graph is executed. This is quite handy for "filter" or "map" type modules, where we don't want to force an extra I/O step when it is not needed. By default all modules are persisted unless the flag is overridden to true. Note: the module will still be persisted if it was specifically selected to run by the user.
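A minimal sketch of a "filter"-style ephemeral module (the module name, upstream dataset, and column are all hypothetical; the override shape follows SMV's SmvModule API):

```scala
// Sketch: Accounts is a hypothetical upstream dataset.
object ActiveAccounts extends SmvModule("Keep only active accounts") {
  override val isEphemeral = true                      // skip persisting this cheap filter step
  override def requiresDS() = Seq(Accounts)
  override def run(i: runParams) =
    i(Accounts).where($"status" === "active")
}
```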
Objects defined in the Spark Shell have class names starting with $
Can be overridden to supply custom metadata. TODO: make SmvMetadata more user-friendly or find an alternative format for user metadata
Returns the path for the module's CSV output
An optional SQL query to run to publish the results of this module when the --publish-hive command line is used. The DataFrame result of running this module will be available to the query as the "dftable" table. For example: return "insert overwrite table mytable select * from dftable". If this method is not specified, the default is to create the table specified by tableName() with the results of the module.
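The example query above could be wired in roughly as follows (a sketch; the Option[String] return type is an assumption about the override's signature):

```scala
// Publish this module's result by overwriting a Hive table;
// "mytable" is the example table name from the doc above.
override def publishHiveSql: Option[String] =
  Some("insert overwrite table mytable select * from dftable")
```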
returns the DataFrame from this dataset (file/module).
returns the DataFrame from this dataset (file/module). The value is cached so this function can be called repeatedly. The cache is external to SmvDataSet so that it we will not recalculate the DF even after dynamically loading the same SmvDataSet. If force argument is true, the we skip the cache. Note: the RDD graph is cached and NOT the data (i.e. rdd.cache is NOT called here)
Read a DataFrame from a persisted file path, which is usually an input data set or the output of an upstream SmvModule.
The default format is headerless CSV with '"' as the quote character
Modules must override this to provide the set of datasets they depend on. This is no longer the canonical list of dependencies; internally we should query resolvedRequiresDS for dependencies.
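A typical override is a one-liner (sketch; Employment and Demographics are hypothetical upstream datasets, not names from this document):

```scala
// Declare the upstream datasets this module reads from.
override def requiresDS() = Seq(Employment, Demographics)
```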
fixed list of SmvDataSet dependencies
Method to run/pre-process the input file. Users can override this method to perform file level ETL operations.
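A hedged sketch of such file-level ETL on an input dataset (the file path, column names, and the run(df) override shape are assumptions for illustration):

```scala
// Sketch: clean an input CSV before downstream modules see it.
object AccountsCsv extends SmvCsvFile("accounts/accounts.csv") {
  override def run(df: DataFrame) =
    df.where($"amt" > 0)                           // drop invalid rows
      .withColumnRenamed("acct_no", "account_id")  // normalize a column name
}
```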
Returns the run information from this dataset's last run.
If the dataset has never been run, returns an empty run info with null for its components.
Hash computed based on the source code of the dataset's class
Full name of the Hive output table if this module is published to Hive.
Override to validate module results based on current and historic metadata. If the result is Some, DQM will fail. Defaults to None.
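A sketch of the None/Some contract described above (the parameter list and the rowCountDroppedSharply helper are assumptions, shown only to illustrate the Option-based failure signal):

```scala
// Sketch: return Some(message) to fail validation, None to pass.
override def validateMetadata(current: SmvMetadata,
                              history: Seq[SmvMetadata]): Option[String] =
  if (rowCountDroppedSharply(current, history))  // hypothetical check
    Some("row count dropped sharply versus last run")
  else
    None
```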
User-tagged code "version". Derived classes should update the value when code or data changes.
SMV DataSet wrapper around a Hive table.