Generative AI
Components to leverage native Generative AI capabilities on Data Warehouses.
ML Generate Text
Description
This component calls the ML.GENERATE_TEXT function in BigQuery for each row of the input table.
Inputs
- Model [FQN]: The path to the model to use, in the format `project_id.dataset.model`.
- Prompt column: The column in the input source used to generate the prompt.
- Max Output Tokens: an `INT64` value in the range `[1,1024]` that sets the maximum number of tokens the model outputs. Specify a lower value for shorter responses and a higher value for longer responses. The default is `50`.
- Temperature: a `FLOAT64` value in the range `[0.0,1.0]` that is used for sampling during response generation, which occurs when `top_k` and `top_p` are applied. It controls the degree of randomness in token selection: lower `temperature` values suit prompts that require a more deterministic and less open-ended or creative response, while higher `temperature` values can lead to more diverse or creative results. A `temperature` of `0` is deterministic, meaning the highest-probability response is always selected. The default is `1.0`.
- Top P: a `FLOAT64` value in the range `[0.0,1.0]` that changes how the model selects tokens for output. Specify a lower value for less random responses and a higher value for more random responses. The default is `1.0`.
- Top K: an `INT64` value in the range `[1,40]` that changes how the model selects tokens for output. Specify a lower value for less random responses and a higher value for more random responses. The default is `40`.

Tokens are selected from the most probable (based on the `top_k` value) to the least probable until the sum of their probabilities equals the `top_p` value. For example, if tokens A, B, and C have probabilities `0.3`, `0.2`, and `0.1` and the `top_p` value is `0.5`, then the model selects either A or B as the next token (using the `temperature` value) and doesn't consider C.
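The interplay between `top_k` and `top_p` can be sketched in a few lines of Python. This is an illustrative simplification of the filtering step only (not the component's actual implementation), using the A/B/C probabilities from the example above:

```python
# Minimal sketch of the top_k / top_p candidate filtering described above.
def filter_candidates(probs, top_k, top_p):
    """Keep the top_k most probable tokens, then trim that list to the
    smallest prefix whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# With P(A)=0.3, P(B)=0.2, P(C)=0.1 and top_p=0.5, only A and B survive;
# the model then samples the next token from {A, B} using temperature.
print(filter_candidates({"A": 0.3, "B": 0.2, "C": 0.1}, top_k=40, top_p=0.5))
# -> ['A', 'B']
```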
Outputs
Result table [Table]
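A query equivalent to what the component issues can be sketched as follows. This is an assumption about the underlying call, not the component's exact SQL; `my_project.my_dataset.text_model` and the `prompts` table are hypothetical names, and the STRUCT fields show the documented defaults:

```sql
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `my_project.my_dataset.text_model`,
  (SELECT review AS prompt FROM `my_project.my_dataset.prompts`),
  STRUCT(
    50  AS max_output_tokens,
    1.0 AS temperature,
    1.0 AS top_p,
    40  AS top_k
  )
);
```

The second argument must expose a column aliased as `prompt`, which is why the component asks for a prompt column on the input source.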