/* ----------------------------------------------------------------------- *//**
 *
 * @file lda.sql_in
 *
 * @brief SQL functions for Latent Dirichlet Allocation
 * @date Dec 2012
 *
 * @sa For an introduction to Latent Dirichlet Allocation models, see the
 *     module description \ref grp_lda.
 *
 *//* ------------------------------------------------------------------------*/

m4_include(`SQLCommon.m4')

/**
@addtogroup grp_lda
@brief Generates a Latent Dirichlet Allocation predictive model for a
collection of documents.

Latent Dirichlet Allocation (LDA) is a generative probabilistic model for
natural texts. It is used in problems such as automated topic discovery,
collaborative filtering, and document classification. In addition to the LDA
implementation itself, this MADlib module provides a number of helper
functions to interpret the LDA output.

@note Topic modeling is often used as part of a larger text processing
pipeline, which may include operations such as term frequency computation,
stemming, and stop word removal. You can use the function Term Frequency to
generate the required vocabulary format from raw documents for the LDA
training function. See the examples later on this page for more details.

@anchor background
@par Background

The LDA model posits that each document is associated with a mixture of
various topics (e.g., a document is related to Topic 1 with probability 0.7,
and Topic 2 with probability 0.3), and that each word in the document is
attributable to one of the document's topics. There is a (symmetric)
Dirichlet prior with parameter \f$ \alpha \f$ on each document's topic
mixture. In addition, there is another (symmetric) Dirichlet prior with
parameter \f$ \beta \f$ on the distribution of words for each topic.

The following generative process then defines a distribution over a corpus of
documents:

- For each topic \f$ i \f$, sample a per-topic word distribution
  \f$ \phi_i \f$ from the Dirichlet(\f$\beta\f$) prior.
- For each document:
  - Sample a document length N from a suitable distribution, say, Poisson.
  - Sample a topic mixture \f$ \theta \f$ for the document from the
    Dirichlet(\f$\alpha\f$) distribution.
  - For each of the N words:
    - Sample a topic \f$ z_n \f$ from the multinomial topic distribution
      \f$ \theta \f$.
    - Sample a word \f$ w_n \f$ from the multinomial word distribution
      \f$ \phi_{z_n} \f$ associated with topic \f$ z_n \f$.

In practice, only the words in each document are observable. The topic
mixture of each document and the topic for each word in each document are
latent unobservable variables that need to be inferred from the observables;
this is referred to as the inference problem for LDA. Exact inference is
intractable, but several approximate inference algorithms for LDA have been
developed. The simple and effective Gibbs sampling algorithm described in
Griffiths and Steyvers [2] appears to be the current algorithm of choice.

This implementation provides a parallel and scalable in-database solution for
LDA based on Gibbs sampling. It takes advantage of the shared-nothing MPP
architecture and is a different implementation than one would find for MPI
or map/reduce.
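For intuition, this is the collapsed Gibbs update of Griffiths and Steyvers
[2] that such samplers implement (the formula follows the notation of [2] and
describes the algorithm in general, not this module's internal data layout):
the topic assignment \f$ z_i \f$ of each word position \f$ i \f$ is
repeatedly resampled from its full conditional distribution

\f[ P(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \propto
    \frac{n^{(w_i)}_{k,-i} + \beta}{n^{(\cdot)}_{k,-i} + V\beta}
    \left( n^{(d_i)}_{k,-i} + \alpha \right) \f]

where \f$ n^{(w_i)}_{k,-i} \f$ is the number of times word \f$ w_i \f$ is
assigned to topic \f$ k \f$, \f$ n^{(\cdot)}_{k,-i} \f$ is the total number
of words assigned to topic \f$ k \f$, \f$ n^{(d_i)}_{k,-i} \f$ is the number
of words in document \f$ d_i \f$ assigned to topic \f$ k \f$ (all counts
excluding position \f$ i \f$), and \f$ V \f$ is the vocabulary size.

@anchor train
@par Training Function

The LDA training function has the following syntax: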
lda_train( data_table,
           model_table,
           output_data_table,
           voc_size,
           topic_num,
           iter_num,
           alpha,
           beta
         )
\b Arguments
data_table
TEXT. Name of the table storing the training dataset. Each row is in the form
<docid, wordid, count> where \c docid, \c wordid, and \c count are
non-negative integers. The \c docid column refers to the document ID, the
\c wordid column is the word ID (the index of a word in the vocabulary), and
\c count is the number of occurrences of the word in the document.
Please note:
- \c wordid must be contiguous integers going from 0 to \c voc_size − \c 1.
- Column names for \c docid, \c wordid, and \c count are currently fixed, so
  you must use these exact names in the data_table.
The function Term Frequency can be used to generate vocabulary in the
required format from raw documents.
model_table
TEXT. This is an output table generated by LDA which contains the learned model. It has one row with the following columns:
voc_size INTEGER. Size of the vocabulary. As mentioned above for the input table, \c wordid consists of contiguous integers going from 0 to \c voc_size − \c 1.
topic_num INTEGER. Number of topics.
alpha DOUBLE PRECISION. Dirichlet prior for the per-document topic multinomial.
beta DOUBLE PRECISION. Dirichlet prior for the per-topic word multinomial.
model BIGINT[]. The encoded model description (not human readable).
output_data_table
TEXT. The name of the table generated by LDA that stores the output data. It has the following columns:
docid INTEGER. Document id from input 'data_table'.
wordcount INTEGER. Number of words in the document, including repeats. For example, if a word appears 3 times in the document, it is counted 3 times.
words INTEGER[]. Array of \c wordid in the document, not including repeats. For example, if a word appears 3 times in the document, it appears only once in the \c words array.
counts INTEGER[]. Frequency of occurrence of each word in the document, indexed the same as the \c words array above. For example, if the 2nd element of the \c counts array is 4, it means that the word in the 2nd element of the \c words array occurs 4 times in the document.
topic_count INTEGER[]. Array of the count of words in the document that correspond to each topic. This array is of length \c topic_num. Topic ids are contiguous integers going from 0 to \c topic_num − \c 1.
topic_assignment INTEGER[]. Array indicating which topic each word in the document corresponds to. This array is of length \c wordcount. Words that are repeated \c n times in the document will show up consecutively \c n times in this array.
voc_size
INTEGER. Size of the vocabulary. As mentioned above for the input 'data_table', \c wordid consists of contiguous integers going from 0 to \c voc_size − \c 1.
topic_num
INTEGER. Desired number of topics.
iter_num
INTEGER. Desired number of iterations.
alpha
DOUBLE PRECISION. Dirichlet prior for the per-document topic multinomial (e.g., 50/topic_num is a reasonable value to start with as per Griffiths and Steyvers [2] ).
beta
DOUBLE PRECISION. Dirichlet prior for the per-topic word multinomial (e.g., 0.01 is a reasonable value to start with).
@anchor predict
@par Prediction Function

Prediction involves labeling test documents using a learned LDA model:
lda_predict( data_table,
             model_table,
             output_predict_table
           );
\b Arguments
data_table
TEXT. Name of the table storing the test dataset (new documents to be labeled).
model_table
TEXT. The model table generated by the training process.
output_predict_table
TEXT. The prediction output table. Each row in the table stores the topic distribution and the topic assignments for a document in the dataset. This table has the exact same columns and interpretation as the 'output_data_table' from the training function above.
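By default, \c lda_predict runs 20 Gibbs sampling iterations on the test
documents. An overloaded form accepts an explicit iteration count as a
fourth argument:

lda_predict( data_table,
             model_table,
             output_predict_table,
             iter_num
           );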
@anchor perplexity
@par Perplexity

Perplexity describes how well the model fits the data by computing word
likelihoods averaged over the test documents. This function returns a single
perplexity value.
lda_get_perplexity( model_table,
                    output_predict_table
                  );
\b Arguments
model_table
TEXT. The model table generated by the training process.
output_predict_table
TEXT. The prediction output table generated by the predict function above.
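Lower perplexity indicates a better fit to the held-out documents. Since the
function returns a single DOUBLE PRECISION value, candidate models are easy
to compare side by side. A sketch, assuming two models and their prediction
outputs have been materialized under the hypothetical names below:

SELECT madlib.lda_get_perplexity('lda_model_k5',  'predict_k5')  AS perplexity_k5,
       madlib.lda_get_perplexity('lda_model_k10', 'predict_k10') AS perplexity_k10;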
@anchor helper
@par Helper Functions

Helper functions are provided to interpret the output from LDA training and
LDA prediction.

<b>Topic description by top-k words with highest probability.</b>
Applies to LDA training only.
lda_get_topic_desc( model_table,
                    vocab_table,
                    output_table,
                    top_k
                  )
\b Arguments
model_table
TEXT. The model table generated by the training process.
vocab_table
TEXT. The vocabulary table in the form of <wordid, word>. Reminder that this table can be created using the \c term_frequency function (\ref grp_text_utilities) with the parameter \c compute_vocab set to TRUE.
output_table
TEXT. The output table with per-topic description generated by this helper function. It has the following columns:
topicid INTEGER. Topic id.
wordid INTEGER. Word id.
prob DOUBLE PRECISION. Probability that this topic will generate the word.
word TEXT. Word in text form.
top_k
INTEGER. The desired number of top words to show for each topic.
<b>Per-word topic counts.</b>
Applies to LDA training only.
lda_get_word_topic_count( model_table,
                          output_table
                        )
\b Arguments
model_table
TEXT. The model table generated by the training process.
output_table
TEXT. The output table with per-word topic counts generated by this helper function. It has the following columns:
wordid INTEGER. Word id.
topic_count INTEGER[]. Count of the word's association with each topic, i.e., how many times the word is assigned to each topic. The array is of length \c topic_num.
<b>Per-topic word counts.</b>
Applies to LDA training only.
lda_get_topic_word_count( model_table,
                          output_table
                        )
\b Arguments
model_table
TEXT. The model table generated by the training process.
output_table
TEXT. The output table with per-topic word counts generated by this helper function. It has the following columns:
topicid INTEGER. Topic id.
word_count INTEGER[]. Array showing how often each word is associated with the topic. The array is of length \c voc_size, indexed by \c wordid.
<b>Per-document word to topic mapping.</b>
Applies to both LDA training and LDA prediction.
lda_get_word_topic_mapping( output_data_table,  -- From training or prediction
                            output_table
                          )
\b Arguments
output_data_table
TEXT. The output data table generated by either LDA training or LDA prediction.
output_table
TEXT. The output table with word to topic mappings generated by this helper function. It has the following columns:
docid INTEGER. Document id.
wordid INTEGER. Word id.
topicid INTEGER. Topic id.
@anchor examples
@examp

-# Prepare a training dataset for LDA. The examples below are small strings
extracted from various Wikipedia documents:
DROP TABLE IF EXISTS documents;
CREATE TABLE documents(docid INT4, contents TEXT);
INSERT INTO documents VALUES
(0, 'Statistical topic models are a class of Bayesian latent variable models, originally developed for analyzing the semantic content of large document corpora.'),
(1, 'By the late 1960s, the balance between pitching and hitting had swung in favor of the pitchers. In 1968 Carl Yastrzemski won the American League batting title with an average of just .301, the lowest in history.'),
(2, 'Machine learning is closely related to and often overlaps with computational statistics; a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which deliver methods, theory and application domains to the field.'),
(3, 'California''s diverse geography ranges from the Sierra Nevada in the east to the Pacific Coast in the west, from the Redwood–Douglas fir forests of the northwest, to the Mojave Desert areas in the southeast. The center of the state is dominated by the Central Valley, a major agricultural area.');
You can apply stemming, stop word removal and tokenization at this point in order to prepare the documents for text processing. Depending upon your database version, various tools are available. Databases based on more recent versions of PostgreSQL may do something like:
SELECT tsvector_to_array(to_tsvector('english',contents)) from documents;
                        tsvector_to_array
+-----------------------------------------------------------------------
 {analyz,bayesian,class,content,corpora,develop,document,larg,...}
 {1960s,1968,301,american,averag,balanc,bat,carl,favor,histori,...}
 {also,applic,close,comput,deliv,disciplin,domain,field,learn,...}
 {agricultur,area,california,center,central,coast,desert,divers,...}
(4 rows)
In this example, we assume a database based on an older version of PostgreSQL and just perform basic punctuation removal and tokenization. The array of words is added as a new column to the documents table:
ALTER TABLE documents ADD COLUMN words TEXT[];
UPDATE documents SET words = 
    regexp_split_to_array(lower(
    regexp_replace(contents, E'[,.;\']','', 'g')
    ), E'[\\s+]');
SELECT * FROM documents ORDER BY docid;
-[ RECORD 1 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
docid    | 0
contents | Statistical topic models are a class of Bayesian latent variable models, originally developed for analyzing the semantic content of large document corpora.
words    | {statistical,topic,models,are,a,class,of,bayesian,latent,variable,models,originally,developed,for,analyzing,the,semantic,content,of,large,document,corpora}
-[ RECORD 2 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
docid    | 1
contents | By the late 1960s, the balance between pitching and hitting had swung in favor of the pitchers. In 1968 Carl Yastrzemski won the American League batting title with an average of just .301, the lowest in history.
words    | {by,the,late,1960s,the,balance,between,pitching,and,hitting,had,swung,in,favor,of,the,pitchers,in,1968,carl,yastrzemski,won,the,american,league,batting,title,with,an,average,of,just,301,the,lowest,in,history}
-[ RECORD 3 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
docid    | 2
contents | Machine learning is closely related to and often overlaps with computational statistics; a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which deliver methods, theory and application domains to the field.
words    | {machine,learning,is,closely,related,to,and,often,overlaps,with,computational,statistics,a,discipline,that,also,specializes,in,prediction-making,it,has,strong,ties,to,mathematical,optimization,which,deliver,methods,theory,and,application,domains,to,the,field}
-[ RECORD 4 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
docid    | 3
contents | California's diverse geography ranges from the Sierra Nevada in the east to the Pacific Coast in the west, from the Redwood–Douglas fir forests of the northwest, to the Mojave Desert areas in the southeast. The center of the state is dominated by the Central Valley, a major agricultural area.
words    | {californias,diverse,geography,ranges,from,the,sierra,nevada,in,the,east,to,the,pacific,coast,in,the,west,from,the,redwood–douglas,fir,forests,of,the,northwest,to,the,mojave,desert,areas,in,the,southeast,the,center,of,the,state,is,dominated,by,the,central,valley,a,major,agricultural,area}
-# Build a word count table by extracting the words and building a histogram for each document using the \c term_frequency function (\ref grp_text_utilities).
DROP TABLE IF EXISTS documents_tf, documents_tf_vocabulary;
SELECT madlib.term_frequency('documents',    -- input table
                             'docid',        -- document id column
                             'words',        -- vector of words in document
                             'documents_tf', -- output documents table with term frequency
                             TRUE);          -- TRUE to create the vocabulary table
SELECT * FROM documents_tf ORDER BY docid LIMIT 20;
 docid | wordid | count 
-------+--------+-------
     0 |     71 |     1
     0 |     90 |     1
     0 |     56 |     1
     0 |     68 |     2
     0 |     85 |     1
     0 |     28 |     1
     0 |     35 |     1
     0 |     54 |     1
     0 |     64 |     2
     0 |      8 |     1
     0 |     29 |     1
     0 |     80 |     1
     0 |     24 |     1
     0 |     11 |     1
     0 |     17 |     1
     0 |     32 |     1
     0 |      3 |     1
     0 |     42 |     1
     0 |     97 |     1
     0 |     95 |     1
(20 rows)
Here is the associated vocabulary table. Note that wordid starts at 0:
SELECT * FROM documents_tf_vocabulary ORDER BY wordid LIMIT 20;
 wordid |     word     
--------+--------------
      0 | 1960s
      1 | 1968
      2 | 301
      3 | a
      4 | agricultural
      5 | also
      6 | american
      7 | an
      8 | analyzing
      9 | and
     10 | application
     11 | are
     12 | area
     13 | areas
     14 | average
     15 | balance
     16 | batting
     17 | bayesian
     18 | between
     19 | by
(20 rows)
The size of the vocabulary (the number of distinct words) across all documents is:
SELECT COUNT(*) FROM documents_tf_vocabulary;
 count 
+------
   103
(1 row)
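As a sanity check on the encoding, you can join the term-frequency table back
to the vocabulary table to see words alongside their ids and counts:

SELECT tf.docid, tf.wordid, v.word, tf.count
FROM documents_tf tf
JOIN documents_tf_vocabulary v
USING (wordid)
ORDER BY tf.docid, tf.wordid
LIMIT 5;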
-# Train LDA model. For Dirichlet priors we use initial rule-of-thumb values of 50/(number of topics) for alpha and 0.01 for beta. Reminder that column names for docid, wordid, and count are currently fixed, so you must use these exact names in the input table. After a successful run of the LDA training function, two tables are generated: one stores the learned model and the other stores the output data.
DROP TABLE IF EXISTS lda_model, lda_output_data;
SELECT madlib.lda_train( 'documents_tf',     -- documents table in the form of term frequency
                         'lda_model',        -- model table created by LDA training (not human readable)
                         'lda_output_data',  -- readable output data table 
                         103,                -- vocabulary size
                         5,                  -- number of topics
                         10,                 -- number of iterations
                         5,                  -- Dirichlet prior for the per-doc topic multinomial (alpha)
                         0.01                -- Dirichlet prior for the per-topic word multinomial (beta)
                       );
SELECT * FROM lda_output_data ORDER BY docid;
-[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------
docid            | 0
wordcount        | 22
words            | {24,17,11,95,90,85,68,54,42,35,28,8,3,97,80,71,64,56,32,29}
counts           | {1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,2,1,1,1}
topic_count      | {4,2,4,3,9}
topic_assignment | {4,2,4,1,2,1,2,2,0,3,4,4,3,0,0,4,0,4,4,4,3,4}
-[ RECORD 2 ]----+------------------------------------------------------------------------------------------------------
docid            | 1
wordcount        | 37
words            | {1,50,49,46,19,16,14,9,7,0,90,68,57,102,101,100,93,88,75,74,59,55,53,48,39,21,18,15,6,2}
counts           | {1,3,1,1,1,1,1,1,1,1,5,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}
topic_count      | {2,5,14,9,7}
topic_assignment | {0,3,3,3,1,4,2,2,2,1,3,1,2,2,2,2,2,2,2,1,4,3,2,0,4,2,4,2,3,4,3,1,3,4,3,2,4}
-[ RECORD 3 ]----+------------------------------------------------------------------------------------------------------
docid            | 2
wordcount        | 36
words            | {10,27,33,40,47,51,58,62,63,69,72,83,100,99,94,92,91,90,89,87,86,79,76,70,60,52,50,36,30,25,9,5,3}
counts           | {1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,1,1}
topic_count      | {15,10,1,7,3}
topic_assignment | {0,3,1,3,0,0,3,3,1,0,1,0,0,0,0,1,1,0,4,2,0,4,1,0,1,0,0,4,3,3,3,0,1,1,1,0}
-[ RECORD 4 ]----+------------------------------------------------------------------------------------------------------
docid            | 3
wordcount        | 49
words            | {77,78,81,82,67,65,51,45,44,43,34,26,13,98,96,94,90,84,73,68,66,61,50,41,38,37,31,23,22,20,19,12,4,3}
counts           | {1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,2,11,1,1,2,1,1,3,1,1,1,1,1,1,1,1,1,1,1}
topic_count      | {5,5,26,5,8}
topic_assignment | {4,4,4,0,2,0,0,2,4,4,2,2,2,1,2,4,1,0,2,2,2,2,2,2,2,2,2,2,2,1,2,2,2,2,4,3,3,3,2,3,2,3,2,1,4,2,2,1,0}
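The \c topic_count column is an integer histogram over topics. To view it as
a normalized per-document topic distribution, divide by \c wordcount; a small
sketch, assuming the MADlib array operations module (\ref grp_array) is
installed:

SELECT docid,
       madlib.array_scalar_mult(topic_count::FLOAT8[],
                                1.0::FLOAT8 / wordcount) AS topic_distribution
FROM lda_output_data
ORDER BY docid;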
-# Review learned model using helper functions. First, we get topic description by top-k words. These are the k words with the highest probability for the topic. Note that if there are ties in probability, more than k words may actually be reported for each topic. Also note that topicid starts at 0:
DROP TABLE IF EXISTS helper_output_table;
SELECT madlib.lda_get_topic_desc( 'lda_model',                -- LDA model generated in training
                                  'documents_tf_vocabulary',  -- vocabulary table that maps wordid to word
                                  'helper_output_table',      -- output table for per-topic descriptions
                                  5);                         -- k: number of top words for each topic
SELECT * FROM helper_output_table ORDER BY topicid, prob DESC LIMIT 40;
 topicid | wordid |        prob        |       word        
---------+--------+--------------------+-------------------
       0 |      3 |  0.111357750647429 | a
       0 |     51 |  0.074361820199778 | is
       0 |     94 |  0.074361820199778 | to
       0 |     70 | 0.0373658897521273 | optimization
       0 |     82 | 0.0373658897521273 | southeast
       0 |     60 | 0.0373658897521273 | machine
       0 |     71 | 0.0373658897521273 | originally
       0 |     69 | 0.0373658897521273 | often
       0 |     99 | 0.0373658897521273 | which
       0 |     83 | 0.0373658897521273 | specializes
       0 |      1 | 0.0373658897521273 | 1968
       0 |     97 | 0.0373658897521273 | variable
       0 |     25 | 0.0373658897521273 | closely
       0 |     93 | 0.0373658897521273 | title
       0 |     47 | 0.0373658897521273 | has
       0 |     65 | 0.0373658897521273 | mojave
       0 |     79 | 0.0373658897521273 | related
       0 |     89 | 0.0373658897521273 | that
       0 |     10 | 0.0373658897521273 | application
       0 |    100 | 0.0373658897521273 | with
       0 |     92 | 0.0373658897521273 | ties
       0 |     54 | 0.0373658897521273 | large
       1 |     94 |  0.130699088145897 | to
       1 |      9 |  0.130699088145897 | and
       1 |      5 | 0.0438558402084238 | also
       1 |     57 | 0.0438558402084238 | league
       1 |     49 | 0.0438558402084238 | hitting
       1 |     13 | 0.0438558402084238 | areas
       1 |     39 | 0.0438558402084238 | favor
       1 |     85 | 0.0438558402084238 | statistical
       1 |     95 | 0.0438558402084238 | topic
       1 |      0 | 0.0438558402084238 | 1960s
       1 |     76 | 0.0438558402084238 | prediction-making
       1 |     86 | 0.0438558402084238 | statistics
       1 |     84 | 0.0438558402084238 | state
       1 |     72 | 0.0438558402084238 | overlaps
       1 |     22 | 0.0438558402084238 | center
       1 |      4 | 0.0438558402084238 | agricultural
       1 |     63 | 0.0438558402084238 | methods
       1 |     33 | 0.0438558402084238 | discipline
(40 rows)
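To pull out just the single most probable word per topic from this table, a
PostgreSQL DISTINCT ON query works (note that ties are broken arbitrarily):

SELECT DISTINCT ON (topicid) topicid, word, prob
FROM helper_output_table
ORDER BY topicid, prob DESC;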
Get the per-word topic counts. This mapping shows how many times a given word is assigned to a topic. E.g., wordid 3 is assigned to topicid 0 three times:
DROP TABLE IF EXISTS helper_output_table;
SELECT madlib.lda_get_word_topic_count( 'lda_model',            -- LDA model generated in training
                                        'helper_output_table'); -- output table for per-word topic counts
SELECT * FROM helper_output_table ORDER BY wordid LIMIT 20;
 wordid | topic_count 
--------+-------------
      0 | {0,1,0,0,0}
      1 | {1,0,0,0,0}
      2 | {1,0,0,0,0}
      3 | {3,0,0,0,0}
      4 | {0,0,0,0,1}
      5 | {0,1,0,0,0}
      6 | {1,0,0,0,0}
      7 | {0,0,0,1,0}
      8 | {0,1,0,0,0}
      9 | {0,0,0,3,0}
     10 | {1,0,0,0,0}
     11 | {1,0,0,0,0}
     12 | {0,0,1,0,0}
     13 | {0,0,0,0,1}
     14 | {0,1,0,0,0}
     15 | {0,0,0,0,1}
     16 | {0,1,0,0,0}
     17 | {0,0,1,0,0}
     18 | {1,0,0,0,0}
     19 | {2,0,0,0,0}
(20 rows)
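Building on this output, each word's dominant topic can be read off its
\c topic_count array; a sketch, assuming PostgreSQL 9.5 or later for
\c array_position (subtracting 1 because SQL arrays are 1-based while topic
ids start at 0):

SELECT wordid,
       array_position(topic_count,
                      (SELECT MAX(c) FROM unnest(topic_count) AS c)) - 1
           AS dominant_topicid
FROM helper_output_table
ORDER BY wordid;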
Get the per-topic word counts. This mapping shows which words are associated with each topic by frequency:
DROP TABLE IF EXISTS topic_word_count;
SELECT madlib.lda_get_topic_word_count( 'lda_model',
                                        'topic_word_count');
SELECT * FROM topic_word_count ORDER BY topicid;
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
topicid    | 1
word_count | {1,1,0,0,0,0,0,1,1,0,1,0,0,0,0,1,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,1,0,1,1,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,1,0,0,1,1,0,0,0,0,0,0,0,1,0}
-[ RECORD 2 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
topicid    | 2
word_count | {0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,1,1,2,0,1,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,0,0,4,0,0,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,5,0,1,0,0,1,0,0,0}
-[ RECORD 3 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
topicid    | 3
word_count | {0,0,0,0,0,0,0,0,0,3,0,1,0,1,1,0,0,0,0,2,0,0,0,0,1,0,0,1,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,2,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0}
-[ RECORD 4 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
topicid    | 4
word_count | {0,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,0,0,1,0,0,1,0,0,0,1,0,0,1,1,1,0,0,0,1,0,0,0,0,0,0,1,0,7,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,1,0,0,0,0,1,0,0,0,0,1,1,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,1}
-[ RECORD 5 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
topicid    | 5
word_count | {0,0,0,3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,18,0,0,0,0,0,0,0,1,0,2,0,0}
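Summing each \c word_count array gives the total number of word occurrences
assigned to each topic, which is a quick diagnostic (the per-topic totals
should add up to the total word count of the corpus):

SELECT topicid,
       (SELECT SUM(c) FROM unnest(word_count) AS c) AS total_words_assigned
FROM topic_word_count
ORDER BY topicid;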
Get the per-document word to topic mapping:
DROP TABLE IF EXISTS helper_output_table;
SELECT madlib.lda_get_word_topic_mapping('lda_output_data',  -- Output table from training
                                         'helper_output_table');
SELECT * FROM helper_output_table ORDER BY docid LIMIT 40;
 docid | wordid | topicid 
-------+--------+---------
     0 |     56 |       1
     0 |     54 |       1
     0 |     42 |       2
     0 |     35 |       1
     0 |     32 |       1
     0 |     29 |       3
     0 |     28 |       4
     0 |     24 |       3
     0 |     17 |       2
     0 |     11 |       0
     0 |      8 |       1
     0 |      3 |       0
     0 |     97 |       0
     0 |     95 |       3
     0 |     90 |       0
     0 |     85 |       0
     0 |     80 |       2
     0 |     71 |       2
     0 |     68 |       0
     0 |     64 |       1
     1 |      2 |       0
     1 |      1 |       0
     1 |      0 |       1
     1 |    102 |       4
     1 |    101 |       2
     1 |    100 |       1
     1 |     93 |       3
     1 |     90 |       2
     1 |     90 |       0
     1 |     88 |       1
     1 |     75 |       1
     1 |     74 |       3
     1 |     68 |       0
     1 |     59 |       2
     1 |     57 |       4
     1 |     55 |       3
     1 |     53 |       3
     1 |     50 |       0
     1 |     49 |       1
     1 |     48 |       0
(40 rows)
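Since the mapping table stores only ids, joining it to the vocabulary table
shows the actual words:

SELECT m.docid, v.word, m.topicid
FROM helper_output_table m
JOIN documents_tf_vocabulary v
USING (wordid)
ORDER BY m.docid, m.topicid
LIMIT 10;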
-# Use a learned LDA model for prediction (that is, to label new documents). In this example, we use the same input table as we used to train, just for demonstration purposes. Normally, the test set contains new documents that we want to label.
DROP TABLE IF EXISTS outdata_predict;
SELECT madlib.lda_predict( 'documents_tf',          -- Document to predict
                           'lda_model',             -- LDA model from training
                           'outdata_predict'        -- Output table for predict results         
                         );
SELECT * FROM outdata_predict;
-[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------
docid            | 0
wordcount        | 22
words            | {17,11,28,29,95,3,32,97,85,35,54,80,64,90,8,24,42,71,56,68}
counts           | {1,1,1,1,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,2}
topic_count      | {1,3,16,1,1}
topic_assignment | {2,2,1,0,2,2,2,3,2,2,2,2,2,2,4,2,2,2,2,2,1,1}
-[ RECORD 2 ]----+------------------------------------------------------------------------------------------------------
docid            | 1
wordcount        | 37
words            | {90,101,2,88,6,7,75,46,74,68,39,9,48,49,102,50,59,53,55,57,100,14,15,16,18,19,93,21,0,1}
counts           | {5,1,1,1,1,1,1,1,1,2,1,1,1,1,1,3,1,1,1,1,1,1,1,1,1,1,1,1,1,1}
topic_count      | {0,1,11,6,19}
topic_assignment | {4,4,4,4,4,4,4,4,4,2,4,2,2,1,3,2,2,4,4,4,3,3,3,4,3,3,2,4,4,2,2,4,2,4,2,4,2}
-[ RECORD 3 ]----+------------------------------------------------------------------------------------------------------
docid            | 2
wordcount        | 36
words            | {90,3,5,9,10,25,27,30,33,36,40,47,50,51,52,58,60,62,63,69,70,72,76,79,83,86,87,89,91,92,94,99,100}
counts           | {1,1,1,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,1,1}
topic_count      | {26,3,5,1,1}
topic_assignment | {4,0,0,2,2,0,0,0,0,2,0,0,0,3,0,0,0,0,0,0,0,0,0,2,0,2,0,0,0,0,0,1,1,1,0,0}
-[ RECORD 4 ]----+------------------------------------------------------------------------------------------------------
docid            | 3
wordcount        | 49
words            | {41,38,3,77,78,94,37,81,82,19,84,34,96,13,31,98,90,51,26,61,23,22,50,65,66,67,45,44,68,4,12,43,20,73}
counts           | {1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,1,11,1,1,1,1,1,3,1,1,1,1,2,2,1,1,1,1,1}
topic_count      | {0,28,0,4,17}
topic_assignment | {1,1,4,1,1,1,1,1,1,4,1,1,1,3,1,1,1,4,4,4,4,4,4,4,4,4,4,4,4,1,1,1,4,3,3,3,1,1,4,4,1,1,1,1,1,1,1,1,1}
The test table is expected to be in the same form as the training table and can be created with the same process. The LDA prediction results have the same format as the output table generated by the LDA training function.
-# Review prediction using a helper function. (This is the same per-document word to topic mapping that we used on the learned model.)
DROP TABLE IF EXISTS helper_output_table;
SELECT madlib.lda_get_word_topic_mapping('outdata_predict',  -- Output table from prediction
                                         'helper_output_table');
SELECT * FROM helper_output_table ORDER BY docid LIMIT 40;
 docid | wordid | topicid 
-------+--------+---------
     0 |     54 |       4
     0 |     42 |       1
     0 |     35 |       4
     0 |     32 |       4
     0 |     29 |       4
     0 |     28 |       1
     0 |     24 |       4
     0 |     17 |       1
     0 |     11 |       4
     0 |      8 |       4
     0 |      3 |       0
     0 |     97 |       4
     0 |     95 |       1
     0 |     90 |       2
     0 |     85 |       4
     0 |     80 |       0
     0 |     71 |       0
     0 |     68 |       0
     0 |     64 |       4
     0 |     64 |       1
     0 |     56 |       4
     1 |      2 |       4
     1 |      1 |       4
     1 |      0 |       2
     1 |    102 |       4
     1 |    101 |       4
     1 |    100 |       4
     1 |     93 |       4
     1 |     90 |       2
     1 |     90 |       0
     1 |     88 |       2
     1 |     75 |       2
     1 |     74 |       0
     1 |     68 |       0
     1 |     59 |       4
     1 |     57 |       2
     1 |     55 |       2
     1 |     53 |       1
     1 |     50 |       0
     1 |     49 |       2
(40 rows)
-# Call the perplexity function to see how well the model fits the data. Perplexity computes word likelihoods averaged over the test documents.
SELECT madlib.lda_get_perplexity( 'lda_model',        -- LDA model from training
                                  'outdata_predict'   -- Prediction output
                                );
 lda_get_perplexity 
+--------------------
    79.481894411824
(1 row)
@anchor literature
@literature

[1] D.M. Blei, A.Y. Ng, M.I. Jordan, <em>Latent Dirichlet Allocation</em>, Journal of Machine Learning Research, vol. 3, pp. 993-1022, 2003.

[2] T. Griffiths and M. Steyvers, <em>Finding scientific topics</em>, PNAS, vol. 101, pp. 5228-5235, 2004.

[3] Y. Wang, H. Bai, M. Stanton, W-Y. Chen, and E.Y. Chang, <em>PLDA: Parallel Latent Dirichlet Allocation for Large-scale Applications</em>, AAIM, 2009.

[4] http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation

[5] J. Chang, <em>Collapsed Gibbs sampling methods for topic models</em>, R manual, 2010.

@anchor related
@par Related Topics

File lda.sql_in documenting the SQL functions.
*/

-- UDT for summarizing a UDF call
DROP TYPE IF EXISTS MADLIB_SCHEMA.lda_result CASCADE;
CREATE TYPE MADLIB_SCHEMA.lda_result AS
(
    output_table    TEXT,
    description     TEXT
);

/**
 * @brief This UDF provides an entry for the lda training process.
 * @param data_table        Table storing the training dataset; each row is in
 *                          the form of <docid, wordid, count>, where docid,
 *                          wordid, and count are all non-negative integers.
 * @param model_table       Table storing the learned models (voc_size,
 *                          topic_num, alpha, beta, per-word topic counts, and
 *                          corpus-level topic counts)
 * @param output_data_table Table storing the output data table in the form of
 *                          <docid, wordcount, words, counts, topic_count,
 *                          topic_assignment>
 * @param voc_size          Size of the vocabulary (note that wordid should be
 *                          contiguous integers going from 0 to voc_size - 1;
 *                          a data validation routine will be called to
 *                          validate the dataset)
 * @param topic_num         Number of topics (e.g. 100)
 * @param iter_num          Number of iterations (e.g. 60)
 * @param alpha             Dirichlet parameter for the per-doc topic
 *                          multinomial (e.g. 50/topic_num)
 * @param beta              Dirichlet parameter for the per-topic word
 *                          multinomial (e.g. 0.01)
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.lda_train
(
    data_table          TEXT,
    model_table         TEXT,
    output_data_table   TEXT,
    voc_size            INT4,
    topic_num           INT4,
    iter_num            INT4,
    alpha               FLOAT8,
    beta                FLOAT8
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.lda_train(
        schema_madlib, data_table, model_table, output_data_table, voc_size,
        topic_num, iter_num, alpha, beta
    )
    return [[model_table, 'model table'],
            [output_data_table, 'output data table']]
$$ LANGUAGE plpythonu
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

/**
 * @brief This UDF provides an entry for the lda prediction process.
 * @param data_table    Table storing the testing dataset; each row is in the
 *                      form of <docid, wordid, count>, where docid, wordid,
 *                      and count are all non-negative integers.
 * @param model_table   Table storing the learned models
 * @param output_table  Table storing per-document topic counts and topic
 *                      assignments
 * @note default iter_num = 20
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.lda_predict
(
    data_table      TEXT,
    model_table     TEXT,
    output_table    TEXT
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.lda_predict(schema_madlib, data_table, model_table, output_table)
    return [[
        output_table,
        'per-doc topic distribution and per-word topic assignments']]
$$ LANGUAGE plpythonu
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

/**
 * @brief An overloaded version which allows users to specify iter_num.
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.lda_predict
(
    data_table      TEXT,
    model_table     TEXT,
    output_table    TEXT,
    iter_num        INT4
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.lda_predict(
        schema_madlib, data_table, model_table, output_table, iter_num)
    return [[
        output_table,
        'per-doc topic distribution and per-word topic assignments']]
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

/**
 * @brief This UDF computes the per-topic word counts.
 * @param model_table   The model table generated by the training process
 * @param output_table  The output table storing the per-topic word counts
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.lda_get_topic_word_count
(
    model_table     TEXT,
    output_table    TEXT
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.get_topic_word_count(schema_madlib, model_table, output_table)
    return [[output_table, 'per-topic word counts']]
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

/**
 * @brief This UDF computes the per-word topic counts.
 * @param model_table   The model table generated by the training process
 * @param output_table  The output table storing the per-word topic counts
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.lda_get_word_topic_count
(
    model_table     TEXT,
    output_table    TEXT
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.get_word_topic_count(schema_madlib, model_table, output_table)
    return [[output_table, 'per-word topic counts']]
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

/**
 * @brief This UDF gets the description for each topic (top-k words).
 * @param model_table   The model table generated by the training process
 * @param vocab_table   The vocabulary table (<wordid, word>)
 * @param desc_table    The output table for storing the per-topic description
 * @param top_k         The number of top words for each topic description
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.lda_get_topic_desc
(
    model_table TEXT,
    vocab_table TEXT,
    desc_table  TEXT,
    top_k       INT4
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.get_topic_desc(
        schema_madlib, model_table, vocab_table, desc_table, top_k)
    return [[
        desc_table,
        """topic description, use "ORDER BY topicid, prob DESC" to check the
        results"""]]
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

/**
 * @brief This UDF gets the wordid - topicid mapping from the LDA training
 * or prediction output table.
 * @param lda_output_table  The output table from LDA training or prediction
 * @param mapping_table     The result table that saves the mapping info
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.lda_get_word_topic_mapping
(
    lda_output_table    TEXT,
    mapping_table       TEXT
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.get_word_topic_mapping(schema_madlib, lda_output_table, mapping_table)
    return [[mapping_table, 'wordid - topicid mapping']]
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

/**
 * @brief This UDF assigns topics to words in a document randomly.
 * @param word_count    The number of words in the document
 * @param topic_num     The number of topics (specified by the user)
 * @return              The topic counts and topic assignments
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_random_assign
(
    word_count  INT4,
    topic_num   INT4
)
RETURNS INT4[]
AS 'MODULE_PATHNAME', 'lda_random_assign'
LANGUAGE C STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief This UDF learns the topics of words in a document and is the main
 * step of a Gibbs sampling iteration. The model parameter (including the
 * per-word topic counts and corpus-level topic counts) is passed to this
 * function in the first call and then transferred to subsequent calls through
 * fcinfo->flinfo->fn_extra to allow immediate updates.
 * @param words     The set of unique words in the document
 * @param counts    The counts of each unique word in the document
 *                  (sum(counts) = word_count)
 * @param doc_topic The current per-doc topic counts and topic assignments
 * @param model     The current model (including the per-word topic counts
 *                  and the corpus-level topic counts)
 * @param alpha     The Dirichlet parameter for the per-document topic
 *                  multinomial
 * @param beta      The Dirichlet parameter for the per-topic word multinomial
 * @param voc_size  The size of the vocabulary
 * @param topic_num The number of topics
 * @param iter_num  The number of iterations
 * @return          The learned topic counts and topic assignments
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_gibbs_sample
(
    words       INT4[],
    counts      INT4[],
    doc_topic   INT4[],
    model       INT8[],
    alpha       FLOAT8,
    beta        FLOAT8,
    voc_size    INT4,
    topic_num   INT4,
    iter_num    INT4
)
RETURNS INT4[]
AS 'MODULE_PATHNAME', 'lda_gibbs_sample'
LANGUAGE C
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief This UDF is the sfunc for the aggregator computing the topic counts
 * for each word and the topic count in the whole corpus. It scans the topic
 * assignments in a document and updates the topic counts.
 * @param state             The topic counts
 * @param words             The unique words in the document
 * @param counts            The counts of each unique word in the document
 *                          (sum(counts) = word_count)
 * @param topic_assignment  The topic assignments in the document
 * @param voc_size          The size of the vocabulary
 * @param topic_num         The number of topics
 * @return                  The updated state
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_count_topic_sfunc
(
    state               INT8[],
    words               INT4[],
    counts              INT4[],
    topic_assignment    INT4[],
    voc_size            INT4,
    topic_num           INT4
)
RETURNS INT8[]
AS 'MODULE_PATHNAME', 'lda_count_topic_sfunc'
LANGUAGE C
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief This UDF is the prefunc for the aggregator computing the per-word
 * topic counts.
 * @param state1    The local word topic counts
 * @param state2    The local word topic counts
 * @return          The element-wise sum of two local states
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_count_topic_prefunc
(
    state1  INT8[],
    state2  INT8[]
)
RETURNS INT8[]
AS 'MODULE_PATHNAME', 'lda_count_topic_prefunc'
LANGUAGE C IMMUTABLE STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief This UDA computes the word topic counts by scanning and summing
 * up topic assignments in each document.
 * @param words             The unique words in the document
 * @param counts            The counts of each unique word in the document
 * @param topic_assignment  The topic assignments in the document
 * @param voc_size          The size of the vocabulary
 * @param topic_num         The number of topics
 * @return                  The word topic counts (a 1-D array embedding a
 *                          2-D array)
 */
DROP AGGREGATE IF EXISTS MADLIB_SCHEMA.__lda_count_topic_agg
(
    INT4[],
    INT4[],
    INT4[],
    INT4,
    INT4
);
CREATE AGGREGATE MADLIB_SCHEMA.__lda_count_topic_agg
(
    INT4[],
    INT4[],
    INT4[],
    INT4,
    INT4
)
(
    stype = INT8[],
    sfunc = MADLIB_SCHEMA.__lda_count_topic_sfunc
    m4_ifdef(
        `__POSTGRESQL__', `',
        `, prefunc = MADLIB_SCHEMA.__lda_count_topic_prefunc'
    )
);

/**
 * @brief This UDF computes the perplexity given the output data table and the
 * model table.
 * @param model_table       The model table generated by lda_train
 * @param output_data_table The output data table generated by lda_predict
 * @return                  The perplexity
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.lda_get_perplexity
(
    model_table         TEXT,
    output_data_table   TEXT
)
RETURNS FLOAT8 AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    return lda.get_perplexity(
        schema_madlib, model_table, output_data_table)
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `READS SQL DATA', `');

/**
 * @brief This UDF is the sfunc for the aggregator computing the perplexity.
 * @param state     The cached model plus perplexity
 * @param words     The unique words in the document
 * @param counts    The counts of each unique word in the document
 * @param doc_topic The topic counts in the document
 * @param model     The learned model
 * @param alpha     The Dirichlet parameter for the per-document topic
 *                  multinomial
 * @param beta      The Dirichlet parameter for the per-topic word multinomial
 * @param voc_size  The size of the vocabulary
 * @param topic_num The number of topics
 * @return          The updated state
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_perplexity_sfunc
(
    state       INT8[],
    words       INT4[],
    counts      INT4[],
    doc_topic   INT4[],
    model       INT8[],
    alpha       FLOAT8,
    beta        FLOAT8,
    voc_size    INT4,
    topic_num   INT4
)
RETURNS INT8[]
AS 'MODULE_PATHNAME', 'lda_perplexity_sfunc'
LANGUAGE C IMMUTABLE
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief This UDF is the prefunc for the aggregator computing the perplexity.
 * @param state1    The local state
 * @param state2    The local state
 * @return          The merged state
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_perplexity_prefunc
(
    state1  INT8[],
    state2  INT8[]
)
RETURNS INT8[]
AS 'MODULE_PATHNAME', 'lda_perplexity_prefunc'
LANGUAGE C IMMUTABLE STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief This UDF is the finalfunc for the aggregator computing the
 * perplexity.
 * @param state The merged state
 * @return      The perplexity
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_perplexity_ffunc
(
    state   INT8[]
)
RETURNS FLOAT8
AS 'MODULE_PATHNAME', 'lda_perplexity_ffunc'
LANGUAGE C IMMUTABLE STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/*
 * @brief This aggregator computes the perplexity.
 * @param words     The unique words in the document
 * @param counts    The counts of each unique word in the document
 * @param doc_topic The topic counts in the document
 * @param model     The learned model
 * @param alpha     The Dirichlet parameter for the per-document topic
 *                  multinomial
 * @param beta      The Dirichlet parameter for the per-topic word multinomial
 * @param voc_size  The size of the vocabulary
 * @param topic_num The number of topics
 * @return          The updated perplexity
 */
DROP AGGREGATE IF EXISTS MADLIB_SCHEMA.__lda_perplexity_agg
(
    INT4[],
    INT4[],
    INT4[],
    INT8[],
    FLOAT8,
    FLOAT8,
    INT4,
    INT4
);
CREATE AGGREGATE MADLIB_SCHEMA.__lda_perplexity_agg
(
    INT4[],
    INT4[],
    INT4[],
    INT8[],
    FLOAT8,
    FLOAT8,
    INT4,
    INT4
)
(
    stype = INT8[],
    sfunc = MADLIB_SCHEMA.__lda_perplexity_sfunc,
    finalfunc = MADLIB_SCHEMA.__lda_perplexity_ffunc
    m4_ifdef(
        `__POSTGRESQL__', `',
        `, prefunc = MADLIB_SCHEMA.__lda_perplexity_prefunc'
    )
);

CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_check_count_ceiling
(
    arr INT8[],
    r   INT4,
    c   INT4
)
RETURNS INT4[]
AS 'MODULE_PATHNAME', 'lda_check_count_ceiling'
LANGUAGE C IMMUTABLE STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief Unnest a 2-D array into a set of 1-D arrays
 * @param arr   The 2-D array to be unnested
 * @return      The unnested 1-D arrays
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_util_unnest
(
    arr INT8[],
    r   INT4,
    c   INT4
)
RETURNS SETOF INT4[]
AS 'MODULE_PATHNAME', 'lda_unnest'
LANGUAGE C IMMUTABLE STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_util_unnest_transpose
(
    arr INT8[],
    r   INT4,
    c   INT4
)
RETURNS SETOF INT4[]
AS 'MODULE_PATHNAME', 'lda_unnest_transpose'
LANGUAGE C IMMUTABLE STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief Transpose a 2-D array
 * @param matrix    The input 2-D array
 * @return          The transposed array
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_util_transpose
(
    matrix  INT8[][]
)
RETURNS INT8[][]
AS 'MODULE_PATHNAME', 'lda_transpose'
LANGUAGE C IMMUTABLE STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief L1 normalization with smoothing
 * @param arr       The array to be normalized
 * @param smooth    The smoothing parameter
 * @return          The normalized vector
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_util_norm_with_smoothing
(
    arr     FLOAT8[],
    smooth  FLOAT8
)
RETURNS FLOAT8[]
AS 'MODULE_PATHNAME', 'l1_norm_with_smoothing'
LANGUAGE C IMMUTABLE STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief This UDF returns the index of elements in a sorted order
 * @param arr   The array to be sorted
 * @return      The index of elements
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_util_index_sort
(
    arr FLOAT8[]
)
RETURNS INT4[] AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    return lda.index_sort(arr)
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');

/**
 * @brief This UDF checks the vocabulary and converts non-contiguous wordids
 * into contiguous integers ranging from 0 to voc_size - 1.
 * @param vocab_table           The vocabulary table in the form of
 *                              <wordid, word>
 * @param output_vocab_table    The regularized vocabulary table
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_util_norm_vocab
(
    vocab_table         TEXT,
    output_vocab_table  TEXT
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.norm_vocab(vocab_table, output_vocab_table)
    return [[output_vocab_table, 'normalized vocabulary table']]
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

/**
 * @brief This UDF converts the data table according to the normalized
 * vocabulary; all rows with non-positive count values will be removed.
 * @param data_table        The data table to be normalized
 * @param norm_vocab_table  The normalized vocabulary table
 * @param output_data_table The normalized data table
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_util_norm_dataset
(
    data_table          TEXT,
    norm_vocab_table    TEXT,
    output_data_table   TEXT
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.norm_dataset(data_table, norm_vocab_table, output_data_table)
    return [[output_data_table, 'normalized data table']]
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

/**
 * @brief This UDF extracts the list of wordids from the data table and joins
 * it with the vocabulary table to get the list of common wordids; next it
 * normalizes the vocabulary based on the common wordids and then normalizes
 * the data table based on the normalized vocabulary.
 * @param data_table            The data table to be normalized
 * @param vocab_table           The vocabulary table to be normalized
 * @param output_data_table     The normalized data table
 * @param output_vocab_table    The normalized vocabulary table
 */
CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.__lda_util_conorm_data
(
    data_table          TEXT,
    vocab_table         TEXT,
    output_data_table   TEXT,
    output_vocab_table  TEXT
)
RETURNS SETOF MADLIB_SCHEMA.lda_result AS $$
    PythonFunctionBodyOnly(`lda', `lda')
    lda.conorm_data(
        data_table, vocab_table, output_data_table, output_vocab_table)
    return [[output_data_table, 'normalized data table'],
            [output_vocab_table, 'normalized vocab table']]
$$ LANGUAGE plpythonu STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `MODIFIES SQL DATA', `');

-- UDT containing the model matrix (voc_size x topic_num) and the total topic
-- counts array
DROP TYPE IF EXISTS MADLIB_SCHEMA._pivotalr_lda_model CASCADE;
CREATE TYPE MADLIB_SCHEMA._pivotalr_lda_model AS
(
    model_matrix_part1  INTEGER[],
    model_matrix_part2  INTEGER[],
    total_topic_counts  DOUBLE PRECISION[]
);

CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.lda_parse_model
(
    lda_model   BIGINT[],
    voc_size    INTEGER,
    topic_num   INTEGER
)
RETURNS MADLIB_SCHEMA._pivotalr_lda_model
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT
m4_ifdef(`__HAS_FUNCTION_PROPERTIES__', `NO SQL', `');