Functions¶

namespace
fl
Copyright (c) Facebook, Inc.
and its affiliates. All rights reserved.
This source code is licensed under the BSD-style license found in the LICENSE file in the root directory of this source tree.

Variable
operator+
(const Variable &lhs, const Variable &rhs)¶ Elementwise addition of two Variables.
\[ out = var_1 + var_2 \]

Variable
operator+
(const double &lhs, const Variable &rhs)¶ Adds a scalar to each element in the Variable.
\[ out_i = value + var_i \]

Variable
operator+
(const Variable &lhs, const double &rhs)¶ Adds a scalar to each element in the Variable.
\[ out_i = var_i + value \]

Variable
operator*
(const Variable &lhs, const Variable &rhs)¶ Elementwise multiplication of two Variables.
\[ out = var_1 \times var_2 \]

Variable
operator*
(const double &lhs, const Variable &rhs)¶ Multiplies each element in the Variable by a scalar.
\[ out_i = value \times var_i \]

Variable
operator*
(const Variable &lhs, const double &rhs)¶ Multiplies each element in the Variable by a scalar.
\[ out_i = var_i \times value \]

Variable
operator-
(const Variable &lhs, const Variable &rhs)¶ Elementwise subtraction of two Variables.
\[ out = var_1 - var_2 \]

Variable
operator-
(const double &lhs, const Variable &rhs)¶ Subtracts each element in the Variable from a scalar.
\[ out_i = value - var_i \]

Variable
operator-
(const Variable &lhs, const double &rhs)¶ Subtracts a scalar from each element in the Variable.
\[ out_i = var_i - value \]

Variable
operator/
(const Variable &lhs, const Variable &rhs)¶ Elementwise division of two Variables.
\[ out = \frac{var_1}{var_2} \]

Variable
operator/
(const double &lhs, const Variable &rhs)¶ Divides each element in the Variable by a scalar.
\[ out_i = \frac{var_i}{value} \]

Variable
operator/
(const Variable &lhs, const double &rhs)¶ Divides a scalar by each element in the Variable.
\[ out_i = \frac{value}{var_i} \]

Variable
operator>
(const Variable &lhs, const Variable &rhs)¶ [Nondifferentiable] Elementwise comparison of two Variables.
\[ out = var_1 > var_2 \]

Variable
operator>
(const double &lhs, const Variable &rhs)¶ [Nondifferentiable] Elementwise comparison of a Variable and a scalar.
\[ out_i = value > var_i \]

Variable
operator>
(const Variable &lhs, const double &rhs)¶ [Nondifferentiable] Elementwise comparison of a Variable and a scalar.
\[ out_i = var_i > value \]

Variable
operator<
(const Variable &lhs, const Variable &rhs)¶ [Nondifferentiable] Elementwise comparison of two Variables.
\[ out = var_1 < var_2 \]

Variable
operator<
(const double &lhs, const Variable &rhs)¶ [Nondifferentiable] Elementwise comparison of a Variable and a scalar.
\[ out_i = value < var_i \]

Variable
operator<
(const Variable &lhs, const double &rhs)¶ [Nondifferentiable] Elementwise comparison of a Variable and a scalar.
\[ out_i = var_i < value \]

Variable
operator>=
(const Variable &lhs, const Variable &rhs)¶ [Nondifferentiable] Elementwise comparison of two Variables.
\[ out = var_1 >= var_2 \]

Variable
operator>=
(const double &lhs, const Variable &rhs)¶ [Nondifferentiable] Elementwise comparison of a Variable and a scalar.
\[ out_i = value >= var_i \]

Variable
operator>=
(const Variable &lhs, const double &rhs)¶ [Nondifferentiable] Elementwise comparison of a Variable and a scalar.
\[ out_i = var_i >= value \]

Variable
operator<=
(const Variable &lhs, const Variable &rhs)¶ [Nondifferentiable] Elementwise comparison of two Variables.
\[ out = var_1 <= var_2 \]

Variable
operator<=
(const double &lhs, const Variable &rhs)¶ [Nondifferentiable] Elementwise comparison of a Variable and a scalar.
\[ out_i = value <= var_i \]

Variable
operator<=
(const Variable &lhs, const double &rhs)¶ [Nondifferentiable] Elementwise comparison of a Variable and a scalar.
\[ out_i = var_i <= value \]

Variable
operator&&
(const Variable &lhs, const Variable &rhs)¶ [Nondifferentiable] Elementwise logical and of two Variables.
\[ out = var_1 \& var_2 \]

Variable
operator!
(const Variable &input)¶ [Nondifferentiable] Elementwise logical not of a Variable.
\[ out_i = !var_i \]

Variable
negate
(const Variable &input)¶ Computes negative of each element in a Variable.
\[ out_i = -var_i \]

Variable
reciprocal
(const Variable &input)¶ Computes reciprocal of each element in a Variable.
\[ out_i = \frac{1}{var_i} \]

Variable
exp
(const Variable &input)¶ Computes exponential of each element in a Variable.
\[ out_i = e^{var_i} \]

Variable
log
(const Variable &input)¶ Computes natural logarithm of each element in a Variable.
\[ out_i = log(var_i) \]

Variable
log1p
(const Variable &input)¶ Computes natural logarithm of (1 + element) for each element in a Variable.
\[ out_i = log(1.0 + var_i) \]

Variable
sin
(const Variable &input)¶ Computes sine of each element in a Variable.
\[ out_i = sin(var_i) \]

Variable
cos
(const Variable &input)¶ Computes cosine of each element in a Variable.
\[ out_i = cos(var_i) \]

Variable
sqrt
(const Variable &input)¶ Computes square root of each element in a Variable.
\[ out_i = \sqrt{var_i} \]

Variable
tanh
(const Variable &input)¶ Computes hyperbolic tangent of each element in a Variable.
\[ out_i = \frac{\exp(var_i) - \exp(-var_i)}{\exp(var_i) + \exp(-var_i)} \]

Variable
clamp
(const Variable &input, const double min, const double max)¶ Clamps all elements in input into the range [min, max] and returns a resulting tensor:
\[\begin{split} y_i = \begin{cases} \text{min} & \text{if } x_i < \text{min} \\ x_i & \text{if } \text{min} \leq x_i \leq \text{max} \\ \text{max} & \text{if } x_i > \text{max} \end{cases} \end{split}\]

Variable
sigmoid
(const Variable &input)¶ Computes sigmoid of each element in a Variable.
\[ out_i = \frac{1}{1 + \exp(-var_i)} \]

Variable
max
(const Variable &lhs, const Variable &rhs)¶ Returns elementwise maximum value of two Variables.
\[ out = max(var_1, var_2) \]

Variable
max
(const Variable &lhs, const double &rhs)¶ Returns maximum value of a scalar and each element in a Variable.
\[ out_i = max(var_i, value) \]

Variable
max
(const double &lhs, const Variable &rhs)¶ Returns maximum value of a scalar and each element in a Variable.
\[ out_i = max(value, var_i) \]

Variable
min
(const Variable &lhs, const Variable &rhs)¶ Returns elementwise minimum value of two Variables.
\[ out = min(var_1, var_2) \]

Variable
min
(const Variable &lhs, const double &rhs)¶ Returns minimum value of a scalar and each element in a Variable.
\[ out_i = min(var_i, value) \]

Variable
min
(const double &lhs, const Variable &rhs)¶ Returns minimum value of a scalar and each element in a Variable.
\[ out_i = min(value, var_i) \]

Variable
transpose
(const Variable &input)¶ Returns a tensor that is a transposed version of a Variable.
The first two dimensions are swapped.

Variable
tileAs
(const Variable &input, const Variable &reference)¶ Repeats the tensor input along certain dimensions so as to match the shape of reference. The dimensions to be repeated along are automatically inferred.

Variable
tileAs
(const Variable &input, const af::dim4 &rdims)¶ Repeats the tensor input along certain dimensions so as to match the shape in the descriptor rdims. The dimensions to be repeated along are automatically inferred.

Variable
sumAs
(const Variable &input, const Variable &reference)¶ Sums up the tensor input along certain dimensions so as to match the shape of reference. The dimensions to be summed along are automatically inferred. Note that after summation, the shape of those dimensions will be 1.

Variable
sumAs
(const Variable &input, const af::dim4 &rdims)¶ Sums up the tensor input along certain dimensions so as to match the shape in the descriptor rdims. The dimensions to be summed along are automatically inferred. Note that after summation, the shape of those dimensions will be 1.
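The tileAs/sumAs pair can be illustrated with a minimal 1-D sketch in plain C++ (not the flashlight implementation; `tile1d` and `sumAs1d` are hypothetical helpers): tiling repeats values to match a larger shape, and sumAs reduces the repeated dimension back, so each output element accumulates one contribution per tile.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Repeat a length-n vector k times end to end (a 1-D analogue of tileAs).
std::vector<double> tile1d(const std::vector<double>& x, std::size_t k) {
  std::vector<double> out;
  for (std::size_t r = 0; r < k; ++r) {
    out.insert(out.end(), x.begin(), x.end());
  }
  return out;
}

// Sum a tiled vector back down to length n (a 1-D analogue of sumAs):
// the repeated dimension is reduced, so each slot sums its k copies.
std::vector<double> sumAs1d(const std::vector<double>& x, std::size_t n) {
  std::vector<double> out(n, 0.0);
  for (std::size_t i = 0; i < x.size(); ++i) {
    out[i % n] += x[i];
  }
  return out;
}
```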

Variable
concatenate
(const std::vector<Variable> &concatInputs, int dim)¶ Concatenates Variables along a specific dimension.
The shape of input Variables should be identical except the dimension to concatenate.

std::vector<Variable>
split
(const Variable &input, dim_t splitSize, int dim)¶ Splits a Variable into equally sized chunks (if possible).

std::vector<Variable>
split
(const Variable &input, const std::vector<dim_t> &splitSizes, int dim)¶ Splits a Variable into smaller chunks.

Variable
tile
(const Variable &input, const af::dim4 &dims)¶ Repeats the tensor input along specific dimensions. The number of repetitions along each dimension is specified in the descriptor dims.

Variable
sum
(const Variable &input, const std::vector<int> &axes)¶ Sums up the tensor input along the dimensions specified in axes. If axes has size greater than 1, reduces over all of them.

Variable
mean
(const Variable &input, const std::vector<int> &axes)¶ Computes the mean of the tensor input along the dimensions specified in axes. If axes has size greater than 1, reduces over all of them.

Variable
norm
(const Variable &input, const std::vector<int> &axes)¶ Computes the l2-norm of the tensor input along the dimensions specified in axes. If axes has size greater than 1, reduces over all of them.

Variable
var
(const Variable &input, const std::vector<int> &axes, const bool isbiased = false)¶ Computes the variance of the tensor input along the dimensions specified in axes. If axes has size greater than 1, reduces over all of them. Uses population variance if isbiased is true; otherwise uses sample variance.
NB: the behavior of fl::var differs from that of af::var. In ArrayFire versions >= 3.7.0, if isbiased is true the variance computation uses sample variance; if false, population variance is used. For versions of ArrayFire before v3.7.0, the reverse is true.
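The population/sample distinction is easy to get backwards, so here is a minimal 1-D sketch in plain C++ (not the flashlight implementation; `variance` is a hypothetical helper): population variance divides the sum of squared deviations by N, sample variance by N - 1.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// isbiased = true selects the population estimate (divide by N),
// false selects the sample estimate (divide by N - 1), mirroring
// the fl::var convention described above.
double variance(const std::vector<double>& x, bool isbiased) {
  double mean = 0.0;
  for (double v : x) mean += v;
  mean /= x.size();
  double ss = 0.0;
  for (double v : x) ss += (v - mean) * (v - mean);
  return isbiased ? ss / x.size() : ss / (x.size() - 1);
}
```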

Variable
matmul
(const Variable &lhs, const Variable &rhs)¶ Conducts matrix-matrix multiplication on two Variables.
This is a batched function if \(B_1\) or \(B_2\) is greater than 1.

Variable
matmulTN
(const Variable &lhs, const Variable &rhs)¶ Conducts matrix-matrix multiplication on two Variables, where the first one will be transposed before multiplication.
This is a batched function if \(B_1\) or \(B_2\) is greater than 1.

Variable
matmulNT
(const Variable &lhs, const Variable &rhs)¶ Conducts matrix-matrix multiplication on two Variables, where the second one will be transposed before multiplication.
This is a batched function if \(B_1\) or \(B_2\) is greater than 1.

Variable
abs
(const Variable &input)¶ Returns the absolute values of each element in a Variable.
\[ out_i = |var_i| \]

Variable
moddims
(const Variable &input, const af::dim4 &dims)¶ Modifies the input dimensions without changing the data order. The shape of the output Variable is specified in the descriptor dims.

Variable
reorder
(const Variable &input, const int dim0, const int dim1, const int dim2 = 2, const int dim3 = 3)¶ Exchanges data of an array such that the requested change in dimension is satisfied.
The linear ordering of data within the array is preserved.

Variable
linear
(const Variable &input, const Variable &weight)¶ Applies a linear transformation to the input Variable:
\[ y = Ax \].

Variable
linear
(const Variable &input, const Variable &weight, const Variable &bias)¶ Applies a linear transformation to the input Variable:
\[ y = Ax + b \].

Variable
conv2d
(const Variable &input, const Variable &weights, int sx = 1, int sy = 1, int px = 0, int py = 0, int dx = 1, int dy = 1, int groups = 1)¶ Applies a 2D convolution over an input signal given filter weights.
In the simplest case, the output with shape [ \(X_{out}\), \(Y_{out}\), \(C_{out}\), \(N\)] of the convolution with input [ \(X_{in}\), \(Y_{in}\), \(C_{in}\), \(N\)] and weight [ \(K_x\), \(K_y\), \(C_{in}\), \(C_{out}\)] can be precisely described as:
\[ \text{out}(C_{out_j}, N_i) = \sum_{k = 0}^{C_{in} - 1} \text{weight}(k, C_{out_j}) \star \text{input}(k, N_i) \]
Return: a Variable with shape [ \(X_{out}\), \(Y_{out}\), \(C_{out}\), \(N\)]
Parameters:
- input: a Variable with shape [ \(X_{in}\), \(Y_{in}\), \(C_{in}\), \(N\)]
- weights: a Variable with shape [ \(K_x\), \(K_y\), \(C_{in}\), \(C_{out}\)]
- sx: stride in the first dimension
- sy: stride in the second dimension
- px: number of positions of zero-padding on both sides in the first dimension
- py: number of positions of zero-padding on both sides in the second dimension
- dx: dilation along the first kernel dimension. A dilation of 1 is equivalent to a standard convolution along this axis.
- dy: dilation along the second kernel dimension. A dilation of 1 is equivalent to a standard convolution along this axis.
- groups: number of filter groups
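The output spatial extent \(X_{out}\) is not spelled out above; assuming the usual convolution arithmetic convention (a standard formula, not quoted from the flashlight source), it can be computed from the stride, padding, and dilation parameters:

```cpp
#include <cassert>

// Standard convolution output-size arithmetic (an assumption based on
// the common cuDNN/PyTorch convention, not the flashlight source):
//   out = floor((in + 2*pad - dilation*(kernel - 1) - 1) / stride) + 1
// Applied per spatial axis, e.g. X_out from (X_in, K_x, sx, px, dx).
int convOutSize(int in, int kernel, int stride, int pad, int dilation) {
  return (in + 2 * pad - dilation * (kernel - 1) - 1) / stride + 1;
}
```

For example, a 3x3 kernel with stride 1 and padding 1 preserves the input size, which is why that configuration is so common.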

Variable
conv2d
(const Variable &input, const Variable &weights, const Variable &bias, int sx = 1, int sy = 1, int px = 0, int py = 0, int dx = 1, int dy = 1, int groups = 1)¶ Applies a 2D convolution over an input signal given filter weights and biases.
In the simplest case, the output with shape [ \(X_{out}\), \(Y_{out}\), \(C_{out}\), \(N\)] of the convolution with input [ \(X_{in}\), \(Y_{in}\), \(C_{in}\), \(N\)] and weight [ \(K_x\), \(K_y\), \(C_{in}\), \(C_{out}\)] can be precisely described as:
\[ \text{out}(C_{out_j}, N_i) = \text{bias}(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} \text{weight}(k, C_{out_j}) \star \text{input}(k, N_i) \]
Return: a Variable with shape [ \(X_{out}\), \(Y_{out}\), \(C_{out}\), \(N\)]
Parameters:
- input: a Variable with shape [ \(X_{in}\), \(Y_{in}\), \(C_{in}\), \(N\)]
- weights: a Variable with shape [ \(K_x\), \(K_y\), \(C_{in}\), \(C_{out}\)]
- sx: stride in the first dimension
- sy: stride in the second dimension
- px: number of positions of zero-padding on both sides in the first dimension
- py: number of positions of zero-padding on both sides in the second dimension
- dx: dilation along the first kernel dimension. A dilation of 1 is equivalent to a standard convolution along this axis.
- dy: dilation along the second kernel dimension. A dilation of 1 is equivalent to a standard convolution along this axis.
- groups: number of filter groups
- bias: a Variable with shape [ \(C_{out}\)]

Variable
pool2d
(const Variable &input, int wx, int wy, int sx = 1, int sy = 1, int px = 0, int py = 0, PoolingMode mode = PoolingMode::MAX)¶ Applies a 2D pooling over an input signal composed of several input planes.
Parameters:
- input: a Variable with shape [ \(X_{in}\), \(Y_{in}\), \(C\), \(N\)]
- wx: pooling window size in the first dimension
- wy: pooling window size in the second dimension
- sx: stride in the first dimension
- sy: stride in the second dimension
- px: number of positions of zero-padding on both sides in the first dimension
- py: number of positions of zero-padding on both sides in the second dimension
- mode: pooling mode, which supports: MAX, AVG_INCLUDE_PADDING, AVG_EXCLUDE_PADDING

Variable
softmax
(const Variable &input, const int dim)¶ Applies a softmax function on Variable input along dimension dim, so that the elements along dim in the output lie in the range (0, 1) and sum to 1.
\[ out(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} \]
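The formula above can be sketched for a single 1-D slice in plain C++ (not the flashlight kernel; `softmax` here is a hypothetical helper). Subtracting the maximum before exponentiating is the usual trick to avoid overflow; it leaves the result unchanged because the common factor cancels in the ratio.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Numerically stable 1-D softmax: shift by the max, exponentiate,
// then normalize so the outputs sum to 1.
std::vector<double> softmax(const std::vector<double>& x) {
  double mx = *std::max_element(x.begin(), x.end());
  std::vector<double> out(x.size());
  double sum = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    out[i] = std::exp(x[i] - mx);
    sum += out[i];
  }
  for (double& v : out) v /= sum;
  return out;
}
```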

Variable
logSoftmax
(const Variable &input, const int dim)¶ Applies a log(softmax(x)) function on Variable input along dimension dim:
\[ out(x_{i}) = \log \Big( \frac{\exp(x_i)}{\sum_j \exp(x_j)} \Big) \]

Variable
binaryCrossEntropy
(const Variable &inputs, const Variable &targets)¶ Computes the binary cross entropy loss between an input tensor \(x\) and a target tensor \(y\).
The binary cross entropy loss is:
\[ B(x, y) = -\frac{1}{n} \sum_{i = 0}^n \left( y_i \times \log(x_i) + (1 - y_i) \times \log(1 - x_i) \right) \]
Both the inputs and the targets are expected to be between 0 and 1.
Parameters:
- inputs: a tensor with the predicted values
- targets: a tensor with the target values
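A minimal sketch of the binary cross entropy formula in plain C++ (an illustration of the math above, not the flashlight implementation; `bceSketch` is a hypothetical name):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Mean binary cross entropy over paired predictions x and targets y,
// both expected to lie in (0, 1): accumulate the per-element terms,
// then negate and average.
double bceSketch(const std::vector<double>& x, const std::vector<double>& y) {
  double loss = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    loss += y[i] * std::log(x[i]) + (1.0 - y[i]) * std::log(1.0 - x[i]);
  }
  return -loss / x.size();
}
```

For a prediction of 0.5 against a target of 1, the loss is -log(0.5) = log 2, the familiar "one bit of surprise" value.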

Variable
categoricalCrossEntropy
(const Variable &input, const Variable &targets, ReduceMode reduction = ReduceMode::MEAN)¶ Computes the categorical cross entropy loss.
The input is expected to contain logprobabilities for each class. The targets should be the index of the ground truth class for each input example.
\[\begin{split} \ell(x, y) = \begin{cases} -\frac{1}{N} \sum_{n=1}^N x_{n,y_n}, & \text{if}\; \text{reduction} = \text{MEAN},\\ -\sum_{n=1}^N x_{n,y_n}, & \text{if}\; \text{reduction} = \text{SUM}, \\ \{ -x_{1,y_1}, ..., -x_{N,y_N} \}, & \text{if}\; \text{reduction} = \text{NONE}. \end{cases} \end{split}\]
Return: a Variable of loss value, with scalar shape by default. If reduction is NONE, then [ \(B_1\), \(B_2\), \(B_3\)].
Parameters:
- input: a Variable containing log-probabilities for each class
- targets: a Variable with the index of the ground truth class for each input example
- reduction: reduction mode, one of MEAN, SUM, or NONE
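Since the input already holds log-probabilities, the MEAN-reduced loss is just the negated average of the log-probabilities picked out by the target indices. A plain C++ sketch (not the flashlight implementation; `cceMeanSketch` is a hypothetical helper):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Categorical cross entropy with MEAN reduction over log-probabilities:
// for each example n, select the log-prob of its ground-truth class,
// negate, and average over the batch.
double cceMeanSketch(const std::vector<std::vector<double>>& logProbs,
                     const std::vector<std::size_t>& targets) {
  double loss = 0.0;
  for (std::size_t n = 0; n < logProbs.size(); ++n) {
    loss -= logProbs[n][targets[n]];
  }
  return loss / logProbs.size();
}
```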

Variable
gatedlinearunit
(const Variable &input, const int dim)¶ The gated linear unit.
\[ H = A \times \sigma(B) \]
where input is split in half along dim to form A and B. See Language Modeling with Gated Convolutional Networks.
Parameters:
- input: input Variable
- dim: dimension on which to split the input
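The split-and-gate step can be sketched for a 1-D input in plain C++ (not the flashlight implementation; `glu1d` is a hypothetical helper, and an even-length input is assumed):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Gated linear unit on a 1-D vector: the first half A is gated by the
// sigmoid of the second half B, halving the length of the output.
std::vector<double> glu1d(const std::vector<double>& x) {
  std::size_t h = x.size() / 2; // assumes x.size() is even
  std::vector<double> out(h);
  for (std::size_t i = 0; i < h; ++i) {
    out[i] = x[i] / (1.0 + std::exp(-x[i + h])); // A * sigmoid(B)
  }
  return out;
}
```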

std::tuple<Variable, Variable, Variable>
rnn
(const Variable &input, const Variable &hidden_state, const Variable &cell_state, const Variable &weights, int hidden_size, int num_layers, RnnMode mode, bool bidirectional, float dropout)¶ Applies an RNN unit to an input sequence.
A general RNN operator can be expressed as follows:
\[ (h_t, c_t) = f_W(x_t, h_{t-1}, c_{t-1}) \]
where \(h_t\), \(c_t\) are the hidden/cell state at time \(t\) and \(x_t\) is the input at time \(t\).
Return: a tuple of three Variables:
- y: output with shape [input size, batch size, sequence length * directions]
- hidden_state: hidden state for the current time step
- cell_state: cell state for the current time step
Parameters:
- input: Variable of input with shape [input size, batch size, sequence length]
- hidden_state: Variable of hidden state with shape [hidden size, batch size, total layers]
- cell_state: [LSTM only] Variable of cell state with same shape as hidden_state
- weights: learnable parameters in the RNN unit
- hidden_size: number of features in the hidden state
- num_layers: number of recurrent layers
- mode: defines the type of RNN unit: RELU, TANH, LSTM, GRU
- bidirectional: if true, becomes a bidirectional RNN; unidirectional otherwise
- dropout: if non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last one, with dropout probability equal to dropout

Variable
embedding
(const Variable &input, const Variable &embeddings)¶ Looks up embeddings in a fixed dictionary and size.
Return: a Variable of embeddings with shape [ \(D\), \(B_1\), \(B_2\), \(B_3\)]
Parameters:
- input: a Variable with the indices to look up
- embeddings: a Variable containing the fixed embedding dictionary

Variable
batchnorm
(const Variable &input, const Variable &weight, const Variable &bias, Variable &running_mean, Variable &running_var, const std::vector<int> &axes, bool train, double momentum, double epsilon)¶ Applies Batch Normalization over a 4D input (a minibatch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
\[ y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta \]
The mean and standard deviation are calculated per-dimension over the mini-batches, and \(\gamma\) and \(\beta\) are learnable parameter vectors of size \(C\), the input size. By default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation.
Return: a Variable with same shape as input
Parameters:
- input: a Variable with shape [ \(H\), \(W\), \(C\), \(N\)]
- weight: a Variable with shape [ \(C\)] for \(\gamma\)
- bias: a Variable with shape [ \(C\)] for \(\beta\)
- running_mean: a buffer storing intermediate means during training
- running_var: a buffer storing intermediate variances during training
- axes: dimensions to perform normalization on. If having size greater than one, reduce over all of them.
- train: a flag indicating if running in training mode
- momentum: value of momentum
- epsilon: value of \(\epsilon\)
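The normalization formula above can be sketched for a single feature channel in plain C++ (population statistics over one batch; not the flashlight kernel, and `batchnorm1d` is a hypothetical helper):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Normalize one channel's batch of values to zero mean and unit variance,
// then scale by gamma and shift by beta, per the batchnorm formula.
std::vector<double> batchnorm1d(const std::vector<double>& x, double gamma,
                                double beta, double epsilon) {
  double mean = 0.0;
  for (double v : x) mean += v;
  mean /= x.size();
  double var = 0.0;
  for (double v : x) var += (v - mean) * (v - mean);
  var /= x.size(); // population variance over the mini-batch
  std::vector<double> y(x.size());
  for (std::size_t i = 0; i < x.size(); ++i) {
    y[i] = (x[i] - mean) / std::sqrt(var + epsilon) * gamma + beta;
  }
  return y;
}
```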

Variable
padding
(const Variable &input, std::vector<std::pair<int, int>> pad, double val)¶ Applies asymmetric padding on a Variable input.

Variable
relu
(const Variable &input)¶ Applies the rectified linear unit function elementwise to a Variable:
\[ ReLU(x) = \max(0, x) \]

Variable
gelu
(const Variable &input)¶ Applies the Gaussian Error Linear Unit function elementwise to a Variable.

Variable
constant
(double val, int input_size, int output_size, af::dtype type = f32, bool calc_grad = true)¶ Creates a Variable representing a tensor with dimensions [input_size, output_size] where all elements are a constant.
Return: a Variable containing a tensor with constant values.
Parameters:
- val: the value of the constant in the tensor
- input_size: the second dimension for the output tensor shape
- output_size: the first dimension of the output tensor shape
- type: the ArrayFire datatype for which to create the tensor
- calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable
constant
(double val, af::dim4 dims, af::dtype type = f32, bool calc_grad = true)¶ Creates a Variable representing a tensor of up to rank 4 with arbitrary dimensions where all elements are a constant.
Return: a Variable containing a tensor with constant values.
Parameters:
- val: the value of the constant in the tensor
- dims: an ArrayFire tensor shape
- type: the ArrayFire datatype for which to create the tensor
- calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable
identity
(int input_size, int output_size, af::dtype type = f32, bool calc_grad = true)¶ Creates a Variable representing an identity tensor with dimensions [input_size, output_size].
Return: a Variable containing the identity tensor.
Parameters:
- input_size: the second dimension for the output tensor shape
- output_size: the first dimension of the output tensor shape
- type: the ArrayFire datatype for which to create the tensor
- calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable
identity
(af::dim4 dims, af::dtype type = f32, bool calc_grad = true)¶ Creates a
Variable
representing an identity tensor of up to rank 4 with arbitrary dimensions.

Variable
uniform
(int input_size, int output_size, double min = 0, double max = 1, af::dtype type = f32, bool calc_grad = true)¶ Creates a Variable representing a tensor with dimensions [input_size, output_size], where elements are distributed according to a uniform distribution with parameters \(\mathcal{U}(min, max)\). See Uniform Distribution.
Return: a Variable containing a tensor with random values distributed accordingly.
Parameters:
- input_size: the second dimension for the output tensor shape
- output_size: the first dimension of the output tensor shape
- min: the lower bound parameter for the uniform distribution
- max: the upper bound parameter for the uniform distribution
- type: the ArrayFire datatype for which to create the tensor
- calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable
uniform
(af::dim4 dims, double min = 0, double max = 1, af::dtype type = f32, bool calc_grad = true)¶ Creates a Variable representing a tensor of up to rank 4 with arbitrary dimensions, where elements are distributed according to a uniform distribution with parameters \(\mathcal{U}(min, max)\). See Uniform Distribution.
Return: a Variable containing a tensor with random values distributed accordingly.
Parameters:
- dims: an ArrayFire tensor shape
- min: the lower bound parameter for the uniform distribution
- max: the upper bound parameter for the uniform distribution
- type: the ArrayFire datatype for which to create the tensor
- calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable
normal
(int input_size, int output_size, double stdv = 1, double mean = 0, af::dtype type = f32, bool calc_grad = true)¶ Creates a Variable representing a tensor with dimensions [input_size, output_size] where elements are distributed according to a normal distribution with parameters \(\mathcal{N}(\mu, \sigma^2)\). See Normal Distribution.
Return: a Variable containing a tensor with random values distributed accordingly.
Parameters:
- input_size: the second dimension for the output tensor shape
- output_size: the first dimension of the output tensor shape
- stdv: the standard deviation by which to parameterize the distribution
- mean: the mean by which to parameterize the distribution
- type: the ArrayFire datatype for which to create the tensor
- calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable
normal
(af::dim4 dims, double stdv = 1, double mean = 0, af::dtype type = f32, bool calc_grad = true)¶ Creates a Variable representing a tensor of up to rank 4 with arbitrary dimensions, where elements are distributed according to a normal distribution with parameters \(\mathcal{N}(\mu, \sigma^2)\). See Normal Distribution.
Return: a Variable containing a tensor with random values distributed accordingly.
Parameters:
- dims: an ArrayFire tensor shape
- stdv: the standard deviation by which to parameterize the distribution
- mean: the mean by which to parameterize the distribution
- type: the ArrayFire datatype for which to create the tensor
- calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled
