Functions

namespace fl


Functions

Variable operator+(const Variable &lhs, const Variable &rhs)

Element-wise addition of two Variables.

\[ out = var_1 + var_2 \]

Variable operator+(const double &lhs, const Variable &rhs)

Adds a scalar to each element in the Variable.

\[ out_i = value + var_i \]

Variable operator+(const Variable &lhs, const double &rhs)

Adds a scalar to each element in the Variable.

\[ out_i = var_i + value \]

Variable operator*(const Variable &lhs, const Variable &rhs)

Element-wise multiplication of two Variables.

\[ out = var_1 \times var_2 \]

Variable operator*(const double &lhs, const Variable &rhs)

Multiplies each element in the Variable by a scalar.

\[ out_i = value \times var_i \]

Variable operator*(const Variable &lhs, const double &rhs)

Multiplies each element in the Variable by a scalar.

\[ out_i = var_i \times value \]

Variable operator-(const Variable &lhs, const Variable &rhs)

Element-wise subtraction of two Variables.

\[ out = var_1 - var_2 \]

Variable operator-(const double &lhs, const Variable &rhs)

Subtracts each element in the Variable from a scalar.

\[ out_i = value - var_i \]

Variable operator-(const Variable &lhs, const double &rhs)

Subtracts a scalar from each element in the Variable.

\[ out_i = var_i - value \]

Variable operator/(const Variable &lhs, const Variable &rhs)

Element-wise division of two Variables.

\[ out = \frac{var_1}{var_2} \]

Variable operator/(const double &lhs, const Variable &rhs)

Divides a scalar by each element in the Variable.

\[ out_i = \frac{value}{var_i} \]

Variable operator/(const Variable &lhs, const double &rhs)

Divides each element in the Variable by a scalar.

\[ out_i = \frac{var_i}{value} \]
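A minimal sketch of the arithmetic overloads above (the umbrella header name and all shapes are illustrative assumptions, not part of this reference):

#include <flashlight/flashlight.h>

int main() {
  // Two 3x3 Variables with gradient tracking enabled.
  auto a = fl::Variable(af::randu(3, 3), /* calcGrad = */ true);
  auto b = fl::Variable(af::randu(3, 3), /* calcGrad = */ true);

  auto sum = a + b;      // element-wise addition
  auto scaled = 2.0 * a; // scalar * Variable
  auto diff = a - 1.0;   // Variable - scalar
  auto ratio = a / b;    // element-wise division

  sum.backward();        // gradients flow through all overloads
  return 0;
}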

Variable operator>(const Variable &lhs, const Variable &rhs)

[Non-differentiable] Element-wise comparison of two Variables.

\[ out = var_1 > var_2 \]

Variable operator>(const double &lhs, const Variable &rhs)

[Non-differentiable] Element-wise comparison of a Variable and a scalar.

\[ out_i = value > var_i \]

Variable operator>(const Variable &lhs, const double &rhs)

[Non-differentiable] Element-wise comparison of a Variable and a scalar.

\[ out_i = var_i > value \]

Variable operator<(const Variable &lhs, const Variable &rhs)

[Non-differentiable] Element-wise comparison of two Variables.

\[ out = var_1 < var_2 \]

Variable operator<(const double &lhs, const Variable &rhs)

[Non-differentiable] Element-wise comparison of a Variable and a scalar.

\[ out_i = value < var_i \]

Variable operator<(const Variable &lhs, const double &rhs)

[Non-differentiable] Element-wise comparison of a Variable and a scalar.

\[ out_i = var_i < value \]

Variable operator>=(const Variable &lhs, const Variable &rhs)

[Non-differentiable] Element-wise comparison of two Variables.

\[ out = var_1 >= var_2 \]

Variable operator>=(const double &lhs, const Variable &rhs)

[Non-differentiable] Element-wise comparison of a Variable and a scalar.

\[ out_i = value >= var_i \]

Variable operator>=(const Variable &lhs, const double &rhs)

[Non-differentiable] Element-wise comparison of a Variable and a scalar.

\[ out_i = var_i >= value \]

Variable operator<=(const Variable &lhs, const Variable &rhs)

[Non-differentiable] Element-wise comparison of two Variables.

\[ out = var_1 <= var_2 \]

Variable operator<=(const double &lhs, const Variable &rhs)

[Non-differentiable] Element-wise comparison of a Variable and a scalar.

\[ out_i = value <= var_i \]

Variable operator<=(const Variable &lhs, const double &rhs)

[Non-differentiable] Element-wise comparison of a Variable and a scalar.

\[ out_i = var_i <= value \]

Variable operator&&(const Variable &lhs, const Variable &rhs)

[Non-differentiable] Element-wise logical and of two Variables.

\[ out = var_1 \&\& var_2 \]

Variable operator!(const Variable &input)

[Non-differentiable] Element-wise logical not of a Variable.

\[ out_i = !var_i \]

Variable negate(const Variable &input)

Computes negative of each element in a Variable.

\[ out_i = -var_i \]

Variable reciprocal(const Variable &input)

Computes reciprocal of each element in a Variable.

\[ out_i = \frac{1}{var_i} \]

Variable exp(const Variable &input)

Computes exponential of each element in a Variable.

\[ out_i = e^{var_i} \]

Variable log(const Variable &input)

Computes natural logarithm of each element in a Variable.

\[ out_i = log(var_i) \]

Variable log1p(const Variable &input)

Computes natural logarithm of (1 + element) for each element in a Variable.

\[ out_i = log(1.0 + var_i) \]

Variable sin(const Variable &input)

Computes sine of each element in a Variable.

\[ out_i = sin(var_i) \]

Variable cos(const Variable &input)

Computes cosine of each element in a Variable.

\[ out_i = cos(var_i) \]

Variable sqrt(const Variable &input)

Computes square root of each element in a Variable.

\[ out_i = \sqrt{var_i} \]

Variable tanh(const Variable &input)

Computes hyperbolic tangent of each element in a Variable.

\[ out_i = \frac{\exp(var_i) - \exp(-var_i)}{\exp(var_i) + \exp(-var_i)} \]

Variable clamp(const Variable &input, const double min, const double max)

Clamps all elements in input into the range [ min, max ] and returns the resulting tensor:

\[ y_i = \begin{cases} \text{min} & \text{if } x_i < \text{min} \\ x_i & \text{if } \text{min} \leq x_i \leq \text{max} \\ \text{max} & \text{if } x_i > \text{max} \end{cases} \]

Variable sigmoid(const Variable &input)

Computes sigmoid of each element in a Variable.

\[ out_i = \frac{1}{1 + \exp(-var_i)} \]
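The element-wise functions above compose like ordinary expressions; a small sketch (shapes and the epsilon guard are illustrative assumptions):

#include <flashlight/flashlight.h>

void elementwiseExample() {
  auto x = fl::Variable(af::randu(5) * 10.0 - 5.0, true); // values in [-5, 5)
  auto clamped = fl::clamp(x, -1.0, 1.0); // restrict to [-1, 1]
  auto act = fl::sigmoid(clamped);        // values now in (0, 1)
  auto logAct = fl::log(act + 1e-8);      // epsilon guards against log(0)
}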

Variable max(const Variable &lhs, const Variable &rhs)

Returns element-wise maximum value of two Variables.

\[ out = max(var_1, var_2) \]

Variable max(const Variable &lhs, const double &rhs)

Returns maximum value of a scalar and each element in a Variable.

\[ out_i = max(var_i, value) \]

Variable max(const double &lhs, const Variable &rhs)

Returns maximum value of a scalar and each element in a Variable.

\[ out_i = max(value, var_i) \]

Variable min(const Variable &lhs, const Variable &rhs)

Returns element-wise minimum value of two Variables.

\[ out = min(var_1, var_2) \]

Variable min(const Variable &lhs, const double &rhs)

Returns minimum value of a scalar and each element in a Variable.

\[ out_i = min(var_i, value) \]

Variable min(const double &lhs, const Variable &rhs)

Returns minimum value of a scalar and each element in a Variable.

\[ out_i = min(value, var_i) \]

Variable transpose(const Variable &input)

Returns a tensor that is a transposed version of a Variable.

The first two dimensions are swapped.

Variable tileAs(const Variable &input, const Variable &reference)

Repeats the tensor input along certain dimensions so as to match the shape of reference.

The dimensions to be repeated along are automatically inferred.

Variable tileAs(const Variable &input, const af::dim4 &rdims)

Repeats the tensor input along certain dimensions so as to match the shape in the descriptor rdims.

The dimensions to be repeated along are automatically inferred.

Variable sumAs(const Variable &input, const Variable &reference)

Sums up the tensor input along certain dimensions so as to match the shape of reference.

The dimensions to be summed along are automatically inferred. Note that after summation, the size of those dimensions will be 1.

Variable sumAs(const Variable &input, const af::dim4 &rdims)

Sums up the tensor input along certain dimensions so as to match the shape in the descriptor rdims.

The dimensions to be summed along are automatically inferred. Note that after summation, the size of those dimensions will be 1.
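For instance, tileAs and sumAs are inverses in shape; a sketch (the shapes are arbitrary):

#include <flashlight/flashlight.h>

void tileSumAsExample() {
  auto small = fl::Variable(af::randu(3, 1), false); // shape [3, 1]
  auto big = fl::Variable(af::randu(3, 4), false);   // shape [3, 4]

  // Repeats `small` along dim 1 to match `big`: result is [3, 4].
  auto tiled = fl::tileAs(small, big);

  // Sums `big` along dim 1 to match `small`: result is [3, 1].
  auto summed = fl::sumAs(big, small);
}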

Variable concatenate(const std::vector<Variable> &concatInputs, int dim)

Concatenates Variables along a specific dimension.

The shape of input Variables should be identical except the dimension to concatenate.

std::vector<Variable> split(const Variable &input, dim_t splitSize, int dim)

Splits a Variable into equally sized chunks (if possible).

Parameters
  • input: a Variable to split

  • splitSize: the target size of each chunk. If the input dimension is not evenly divisible by splitSize, the last chunk will be smaller.

  • dim: dimension along which to split the Variable

std::vector<Variable> split(const Variable &input, const std::vector<dim_t> &splitSizes, int dim)

Splits a Variable into smaller chunks.

Parameters
  • input: a Variable to split

  • splitSizes: vector of integers specifying the sizes for each split

  • dim: dimension along which to split the Variable
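A sketch of both overloads (sizes are illustrative assumptions):

#include <flashlight/flashlight.h>
#include <vector>

void splitExample() {
  auto x = fl::Variable(af::randu(10, 4), false);

  // Chunks of size 4, 4, and 2 along dim 0; 10 is not divisible by 4,
  // so the last chunk is smaller.
  std::vector<fl::Variable> equal = fl::split(x, 4, 0);

  // Explicit sizes; they must sum to the size of the split dimension.
  std::vector<fl::Variable> custom = fl::split(x, {5, 3, 2}, 0);
}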

Variable tile(const Variable &input, const af::dim4 &dims)

Repeats the tensor input along specific dimensions.

The number of repetition along each dimension is specified in descriptor dims.

Variable sum(const Variable &input, const std::vector<int> &axes)

Sums up the tensors input along dimensions specified in descriptor axes.

If axes has size greater than 1, reduce over all of them.

Variable mean(const Variable &input, const std::vector<int> &axes)

Computes the mean of the tensor input along dimensions specified in descriptor axes.

If axes has size greater than 1, reduce over all of them.

Variable norm(const Variable &input, const std::vector<int> &axes)

Computes l2-norm of the tensor input along dimensions specified in descriptor axes.

If axes has size greater than 1, reduce over all of them.

Variable var(const Variable &input, const std::vector<int> &axes, const bool isbiased = false)

Computes variance of the tensor input along dimensions specified in descriptor axes.

If axes has size greater than 1, reduce over all of them. Uses population variance if isbiased is true; otherwise, uses sample variance.

NB: the behavior of fl::var differs from that of af::var. In ArrayFire versions >= 3.7.0, if isbiased is true the variance computation uses sample variance; if false, population variance is used. For versions of ArrayFire before v3.7.0, the reverse is true.

Variable matmul(const Variable &lhs, const Variable &rhs)

Conducts matrix-matrix multiplication on two Variables.

This is a batched function if \(B_1\) or \(B_2\) is greater than 1.

Return

a Variable with shape [ \(M\), \(K\), \(B_1\), \(B_2\)]

Parameters
  • lhs: a Variable with shape [ \(M\), \(N\), \(B_1\), \(B_2\)]

  • rhs: a Variable with shape [ \(N\), \(K\), \(B_1\), \(B_2\)]

Variable matmulTN(const Variable &lhs, const Variable &rhs)

Conducts matrix-matrix multiplication on two Variables, where the first one will be transposed before multiplication.

This is a batched function if \(B_1\) or \(B_2\) is greater than 1.

Return

a Variable with shape [ \(M\), \(K\), \(B_1\), \(B_2\)]

Parameters
  • lhs: a Variable with shape [ \(N\), \(M\), \(B_1\), \(B_2\)]

  • rhs: a Variable with shape [ \(N\), \(K\), \(B_1\), \(B_2\)]

Variable matmulNT(const Variable &lhs, const Variable &rhs)

Conducts matrix-matrix multiplication on two Variables, where the second one will be transposed before multiplication.

This is a batched function if \(B_1\) or \(B_2\) is greater than 1.

Return

a Variable with shape [ \(M\), \(K\), \(B_1\), \(B_2\)]

Parameters
  • lhs: a Variable with shape [ \(M\), \(N\), \(B_1\), \(B_2\)]

  • rhs: a Variable with shape [ \(K\), \(N\), \(B_1\), \(B_2\)]
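A sketch contrasting the three variants (dimensions are illustrative assumptions):

#include <flashlight/flashlight.h>

void matmulExample() {
  // lhs: [M=2, N=3], rhs: [N=3, K=4]  =>  out: [M=2, K=4]
  auto lhs = fl::Variable(af::randu(2, 3), false);
  auto rhs = fl::Variable(af::randu(3, 4), false);
  auto out = fl::matmul(lhs, rhs);

  // matmulTN(a, b) behaves like matmul(transpose(a), b):
  // a: [N=3, M=2]  =>  out: [2, 4]
  auto a = fl::Variable(af::randu(3, 2), false);
  auto outTN = fl::matmulTN(a, rhs);

  // matmulNT(a, b) behaves like matmul(a, transpose(b)):
  // b: [K=4, N=3]  =>  out: [2, 4]
  auto b = fl::Variable(af::randu(4, 3), false);
  auto outNT = fl::matmulNT(lhs, b);
}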

Variable abs(const Variable &input)

Returns the absolute values of each element in a Variable.

\[ out_i = |var_i| \]

Variable flat(const Variable &input)

Flattens the input to a single dimension.

Variable moddims(const Variable &input, const af::dim4 &dims)

Modifies the input dimensions without changing the data order.

The shape of the output Variable is specified in descriptor dims.

Variable reorder(const Variable &input, const int dim0, const int dim1, const int dim2 = 2, const int dim3 = 3)

Exchanges the data of an array so that the requested change in dimension order is satisfied.

The linear ordering of data within the array is preserved.

Variable linear(const Variable &input, const Variable &weight)

Applies a linear transformation to the input Variable:

\[ y = Ax \]

Return

a Variable with shape [ \(K\), \(M\), \(B_1\), \(B_2\)]

Parameters
  • input: a Variable with shape [ \(N\), \(M\), \(B_1\), \(B_2\)]

  • weight: a Variable with shape [ \(K\), \(N\)]

Variable linear(const Variable &input, const Variable &weight, const Variable &bias)

Applies a linear transformation to the input Variable:

\[ y = Ax + b \]

Return

a Variable with shape [ \(K\), \(M\), \(B_1\), \(B_2\)]

Parameters
  • input: a Variable with shape [ \(N\), \(M\), \(B_1\), \(B_2\)]

  • weight: a Variable with shape [ \(K\), \(N\)]

  • bias: a Variable with shape [ \(K\)]
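A sketch of the biased overload, using the shapes from the parameter list (the concrete sizes are illustrative assumptions):

#include <flashlight/flashlight.h>

void linearExample() {
  const int N = 8, M = 16, K = 4; // in-features, batch, out-features
  auto input = fl::Variable(af::randu(N, M), false); // [N, M]
  auto weight = fl::Variable(af::randu(K, N), true); // [K, N]
  auto bias = fl::Variable(af::randu(K), true);      // [K]

  auto y = fl::linear(input, weight, bias); // [K, M]
}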

Variable conv2d(const Variable &input, const Variable &weights, int sx = 1, int sy = 1, int px = 0, int py = 0, int dx = 1, int dy = 1, int groups = 1)

Applies a 2D convolution over an input signal given filter weights.

In the simplest case, the output with shape [ \(X_{out}\), \(Y_{out}\), \(C_{out}\), \(N\)] of the convolution with input [ \(X_{in}\), \(Y_{in}\), \(C_{in}\), \(N\)] and weight [ \(K_x\), \(K_y\), \(C_{in}\), \(C_{out}\)] can be precisely described as:

\[ \text{out}(C_{out_j}, N_i) = \sum_{k = 0}^{C_{in} - 1} \text{weight}(k, C_{out_j}) \star \text{input}(k, N_i) \]

Return

a Variable with shape [ \(X_{out}\), \(Y_{out}\), \(C_{out}\), \(N\)]

Parameters
  • input: a Variable with shape [ \(X_{in}\), \(Y_{in}\), \(C_{in}\), \(N\)]

  • weights: a Variable with shape [ \(K_x\), \(K_y\), \(C_{in}\), \(C_{out}\)]

  • sx: stride in the first dimension

  • sy: stride in the second dimension

  • px: number of positions of zero-padding on both sides in the first dimension

  • py: number of positions of zero-padding on both sides in the second dimension

  • dx: dilation along the first kernel dimension. A dilation of 1 is equivalent to a standard convolution along this axis.

  • dy: dilation along the second kernel dimension. A dilation of 1 is equivalent to a standard convolution along this axis.

  • groups: number of filter groups

Variable conv2d(const Variable &input, const Variable &weights, const Variable &bias, int sx = 1, int sy = 1, int px = 0, int py = 0, int dx = 1, int dy = 1, int groups = 1)

Applies a 2D convolution over an input signal given filter weights and biases.

In the simplest case, the output with shape [ \(X_{out}\), \(Y_{out}\), \(C_{out}\), \(N\)] of the convolution with input [ \(X_{in}\), \(Y_{in}\), \(C_{in}\), \(N\)] and weight [ \(K_x\), \(K_y\), \(C_{in}\), \(C_{out}\)] can be precisely described as:

\[ \text{out}(C_{out_j}, N_i) = \text{bias}(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} \text{weight}(k, C_{out_j}) \star \text{input}(k, N_i) \]

Return

a Variable with shape [ \(X_{out}\), \(Y_{out}\), \(C_{out}\), \(N\)]

Parameters
  • input: a Variable with shape [ \(X_{in}\), \(Y_{in}\), \(C_{in}\), \(N\)]

  • weights: a Variable with shape [ \(K_x\), \(K_y\), \(C_{in}\), \(C_{out}\)]

  • sx: stride in the first dimension

  • sy: stride in the second dimension

  • px: number of positions of zero-padding on both sides in the first dimension

  • py: number of positions of zero-padding on both sides in the second dimension

  • dx: dilation along the first kernel dimension. A dilation of 1 is equivalent to a standard convolution along this axis.

  • dy: dilation along the second kernel dimension. A dilation of 1 is equivalent to a standard convolution along this axis.

  • groups: number of filter groups

  • bias: a Variable with shape [ \(C_{out}\)]
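A sketch using the bias-free overload (the image and kernel sizes are illustrative assumptions):

#include <flashlight/flashlight.h>

void conv2dExample() {
  // A batch of 4 single-channel 28x28 images.
  auto input = fl::Variable(af::randu(28, 28, 1, 4), false);
  // 5x5 kernels mapping 1 input channel to 8 output channels.
  auto weights = fl::Variable(af::randu(5, 5, 1, 8), true);

  // Stride 1 with padding 2 preserves the spatial size:
  // output shape is [28, 28, 8, 4].
  auto out = fl::conv2d(input, weights, /* sx = */ 1, /* sy = */ 1,
                        /* px = */ 2, /* py = */ 2);
}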

Variable pool2d(const Variable &input, int wx, int wy, int sx = 1, int sy = 1, int px = 0, int py = 0, PoolingMode mode = PoolingMode::MAX)

Applies a 2D pooling over an input signal composed of several input planes.

Parameters
  • input: a Variable with shape [ \(X_{in}\), \(Y_{in}\), \(C\), \(N\)]

  • wx: pooling window size in the first dimension

  • wy: pooling window size in the second dimension

  • sx: stride in the first dimension

  • sy: stride in the second dimension

  • px: number of positions of zero-padding on both sides in the first dimension

  • py: number of positions of zero-padding on both sides in the second dimension

  • mode: pooling mode, which supports:

    • MAX

    • AVG_INCLUDE_PADDING

    • AVG_EXCLUDE_PADDING
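For example, a 2x2 max-pooling with stride 2 halves each spatial dimension; a sketch (sizes are illustrative assumptions):

#include <flashlight/flashlight.h>

void pool2dExample() {
  auto input = fl::Variable(af::randu(28, 28, 8, 4), false);

  // Window 2x2, stride 2, no padding: output is [14, 14, 8, 4].
  auto pooled = fl::pool2d(input, /* wx = */ 2, /* wy = */ 2,
                           /* sx = */ 2, /* sy = */ 2);
}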

Variable softmax(const Variable &input, const int dim)

Applies a softmax function on Variable input along dimension dim, so that the elements along dimension dim in the output lie in the range (0, 1) and sum to 1.

\[ out(x_{i}) = \frac{exp(x_i)}{\sum_j exp(x_j)} \]

Variable logSoftmax(const Variable &input, const int dim)

Applies a log(softmax(x)) function on Variable input along dimension dim.

\[ out(x_{i}) = log \Big( \frac{exp(x_i)}{\sum_j exp(x_j)} \Big) \]

Variable binaryCrossEntropy(const Variable &inputs, const Variable &targets)

Computes the binary cross entropy loss between an input tensor \(x\) and a target tensor \(y\).

The binary cross entropy loss is:

\[ B(x, y) = \frac{1}{n} \sum_{i = 0}^n -\left( y_i \times \log(x_i) + (1 - y_i) \times \log(1 - x_i) \right) \]
Both the inputs and the targets are expected to be between 0 and 1.

Parameters
  • inputs: a tensor with the predicted values

  • targets: a tensor with the target values

Variable categoricalCrossEntropy(const Variable &input, const Variable &targets, ReduceMode reduction = ReduceMode::MEAN)

Computes the categorical cross entropy loss.

The input is expected to contain log-probabilities for each class. The targets should be the index of the ground truth class for each input example.

\[ \ell(x, y) = \begin{cases} \frac{1}{N} \sum_{n=1}^N -x_{n,y_n}, & \text{if}\; \text{reduction} = \text{MEAN},\\ \sum_{n=1}^N -x_{n,y_n}, & \text{if}\; \text{reduction} = \text{SUM}, \\ \{ -x_{1,y_1}, ..., -x_{N,y_N} \}, & \text{if}\; \text{reduction} = \text{NONE}. \end{cases} \]

Return

a scalar Variable containing the loss by default. If reduction is NONE, a Variable with shape [ \(B_1\), \(B_2\), \(B_3\)].

Parameters
  • input: a Variable with shape [ \(C\), \(B_1\), \(B_2\), \(B_3\)] where \(C\) is the number of classes.

  • targets: an integer Variable with shape [ \(B_1\), \(B_2\), \(B_3\)]. The values must be in \([0, C - 1]\)

  • reduction: reduction mode, which supports:

    • NONE

    • MEAN

    • SUM
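A sketch combining logSoftmax with this loss (class and batch sizes are illustrative assumptions; the cast uses ArrayFire's s32 dtype):

#include <flashlight/flashlight.h>

void lossExample() {
  const int C = 10, B = 32;
  auto logits = fl::Variable(af::randu(C, B), true);

  // Integer class targets drawn from [0, C - 1].
  auto targets = fl::Variable((af::randu(B) * C).as(s32), false);

  // categoricalCrossEntropy expects log-probabilities.
  auto logProbs = fl::logSoftmax(logits, 0);
  auto loss = fl::categoricalCrossEntropy(logProbs, targets); // MEAN-reduced
  loss.backward();
}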

Variable gatedlinearunit(const Variable &input, const int dim)

The gated linear unit.

\[ H = A \times \sigma(B) \]
where input is split in half along dim to form A and B. See Language Modeling with Gated Convolutional Networks.

Parameters
  • input: input Variable

  • dim: dimension on which to split the input

std::tuple<Variable, Variable, Variable> rnn(const Variable &input, const Variable &hidden_state, const Variable &cell_state, const Variable &weights, int hidden_size, int num_layers, RnnMode mode, bool bidirectional, float dropout)

Applies an RNN unit to an input sequence.

A general RNN operator can be expressed as follows:

\[ (h_t, c_t) = f_W(x_t, h_{t-1}, c_{t-1}) \]
where \(h_t\) and \(c_t\) are the hidden and cell states at time \(t\), and \(x_t\) is the input at time \(t\).

Return

a tuple of three Variables:

  • y: the output with shape [hidden size × directions, batch size, sequence length]

  • hidden_state: hidden state for the current time step

  • cell_state: cell state for the current time step

Parameters
  • input: Variable of input with shape [input size, batch size, sequence length]

  • hidden_state: Variable of hidden state with shape [hidden size, batch size, total layers]

  • cell_state: [LSTM only] Variable of cell state with same shape as hidden state

  • weights: Learnable parameters in the RNN unit

  • hidden_size: number of features in the hidden state

  • num_layers: number of recurrent layers

  • mode: defines the type of RNN unit

    • RELU

    • TANH

    • LSTM

    • GRU

  • bidirectional: if True, becomes a bidirectional RNN, unidirectional otherwise

  • dropout: if non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last one, with dropout probability equal to dropout

Variable embedding(const Variable &input, const Variable &embeddings)

Looks up embeddings in a fixed dictionary with a fixed embedding size.

Return

a Variable of embeddings with shape [ \(D\), \(B_1\), \(B_2\), \(B_3\)]

Parameters
  • input: a Variable of a list of indices with shape [ \(B_1\), \(B_2\), \(B_3\)]

  • embeddings: a Variable of an embedding matrix with shape [ \(D\), \(N\)], where \(N\) is the number of items and \(D\) is the embedding size.
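A sketch (vocabulary, batch, and embedding sizes are illustrative assumptions):

#include <flashlight/flashlight.h>

void embeddingExample() {
  const int D = 64, N = 1000; // embedding size, number of items
  auto table = fl::Variable(af::randu(D, N), true);

  // A batch of 32 indices in [0, N - 1].
  auto tokens = fl::Variable((af::randu(32) * N).as(s32), false);

  auto vecs = fl::embedding(tokens, table); // shape [D, 32]
}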

Variable batchnorm(const Variable &input, const Variable &weight, const Variable &bias, Variable &running_mean, Variable &running_var, const std::vector<int> &axes, bool train, double momentum, double epsilon)

Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .

\[ y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta \]
The mean and standard-deviation are calculated per-dimension over the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors of size \(C\), the input size. By default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation.

Return

a Variable with same shape as input

Parameters
  • input: a Variable with size [ \(H\), \(W\), \(C\), \(N\)]

  • weight: a Variable with size [ \(C\)] for \(\gamma\)

  • bias: a Variable with size [ \(C\)] for \(\beta\)

  • running_mean: a buffer storing intermediate means during training

  • running_var: a buffer storing intermediate variances during training

  • axes: dimensions to perform normalization on. If more than one axis is given, the normalization is performed over all of them.

  • train: a flag indicating if running in training mode

  • momentum: value of momentum

  • epsilon: value of \(\epsilon\)
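A sketch normalizing over the channel axis of a 4D input (sizes and hyperparameters are illustrative assumptions):

#include <flashlight/flashlight.h>

void batchnormExample() {
  const int H = 8, W = 8, C = 16, N = 4;
  auto input = fl::Variable(af::randu(H, W, C, N), true);
  auto weight = fl::Variable(af::constant(1.0, C), true); // gamma, size [C]
  auto bias = fl::Variable(af::constant(0.0, C), true);   // beta, size [C]

  // Running statistics are buffers, not learnable parameters.
  auto runMean = fl::Variable(af::constant(0.0, C), false);
  auto runVar = fl::Variable(af::constant(1.0, C), false);

  auto out = fl::batchnorm(input, weight, bias, runMean, runVar,
                           {2},   // normalize over the channel axis
                           true,  // training mode: update running stats
                           0.1,   // momentum
                           1e-5); // epsilon
}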

Variable padding(const Variable &input, std::vector<std::pair<int, int>> pad, double val)

Applies asymmetric padding on a Variable input.

Return

a padded Variable

Parameters
  • input: input Variable

  • pad: a list of integer pairs specifying the positions we want to pad on both sides for each dimension

  • val: padding value

Variable dropout(const Variable &input, double p)

Applies dropout on a Variable input.

Return

a dropped-out Variable

Parameters
  • input: input Variable

  • p: the probability of dropout

Variable relu(const Variable &input)

Applies the rectified linear unit function element-wise to a Variable:

\[ ReLU(x) = \max(0, x) \]

Variable gelu(const Variable &input)

Applies the Gaussian Error Linear Unit (GELU) function element-wise to a Variable.

Variable constant(double val, int input_size, int output_size, af::dtype type = f32, bool calc_grad = true)

Creates a Variable representing a tensor with dimensions [input_size, output_size] where all elements are a constant.

Return

A Variable containing a tensor with constant values.

Parameters
  • val: the value of the constant in the tensor

  • input_size: the second dimension for the output tensor shape

  • output_size: the first dimension of the output tensor shape

  • type: the ArrayFire datatype for which to create the tensor

  • calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable constant(double val, af::dim4 dims, af::dtype type = f32, bool calc_grad = true)

Creates a Variable representing a tensor of up to rank 4 with arbitrary dimensions where all elements are a constant.

Return

A Variable containing a tensor with constant values.

Parameters
  • val: the value of the constant in the tensor

  • dims: an ArrayFire tensor shape

  • type: the ArrayFire datatype for which to create the tensor

  • calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable identity(int input_size, int output_size, af::dtype type = f32, bool calc_grad = true)

Creates a Variable representing an identity tensor with dimensions [input_size, output_size].

Return

A Variable containing the identity tensor.

Parameters
  • input_size: the second dimension for the output tensor shape

  • output_size: the first dimension of the output tensor shape

  • type: the ArrayFire datatype for which to create the tensor

  • calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable identity(af::dim4 dims, af::dtype type = f32, bool calc_grad = true)

Creates a Variable representing an identity tensor of up to rank 4 with arbitrary dimensions.

Return

A Variable containing the identity tensor.

Parameters
  • dims: an ArrayFire tensor shape

  • type: the ArrayFire datatype for which to create the tensor

  • calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable uniform(int input_size, int output_size, double min = 0, double max = 1, af::dtype type = f32, bool calc_grad = true)

Creates a Variable representing a tensor with dimensions [input_size, output_size], where elements are distributed according to a uniform distribution with parameters \(\mathcal{U}(min, max)\).

See Uniform Distribution.

Return

A Variable containing a tensor with random values distributed accordingly.

Parameters
  • input_size: the second dimension for the output tensor shape

  • output_size: the first dimension of the output tensor shape

  • min: the lower bound parameter for the uniform distribution

  • max: the upper bound parameter for the uniform distribution

  • type: the ArrayFire datatype for which to create the tensor

  • calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable uniform(af::dim4 dims, double min = 0, double max = 1, af::dtype type = f32, bool calc_grad = true)

Creates a Variable representing a tensor of up to rank 4 with arbitrary dimensions, where elements are distributed according to a uniform distribution with parameters \(\mathcal{U}(min, max)\).

See Uniform Distribution.

Return

A Variable containing a tensor with random values distributed accordingly.

Parameters
  • dims: an ArrayFire tensor shape

  • min: the lower bound parameter for the uniform distribution

  • max: the upper bound parameter for the uniform distribution

  • type: the ArrayFire datatype for which to create the tensor

  • calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable normal(int input_size, int output_size, double stdv = 1, double mean = 0, af::dtype type = f32, bool calc_grad = true)

Creates a Variable representing a tensor with dimensions [input_size, output_size] where elements are distributed according to a normal distribution with parameters \(\mathcal{N}(\mu, \sigma^2)\).

See Normal Distribution.

Return

A Variable containing a tensor with random values distributed accordingly.

Parameters
  • input_size: the second dimension for the output tensor shape

  • output_size: the first dimension of the output tensor shape

  • stdv: the standard deviation by which to parameterize the distribution

  • mean: the mean by which to parameterize the distribution

  • type: the ArrayFire datatype for which to create the tensor

  • calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled

Variable normal(af::dim4 dims, double stdv = 1, double mean = 0, af::dtype type = f32, bool calc_grad = true)

Creates a Variable representing a tensor of up to rank 4 with arbitrary dimensions, where elements are distributed according to a normal distribution with parameters \(\mathcal{N}(\mu, \sigma^2)\).

See Normal Distribution.

Return

A Variable containing a tensor with random values distributed accordingly.

Parameters
  • dims: an ArrayFire tensor shape

  • stdv: the standard deviation by which to parameterize the distribution

  • mean: the mean by which to parameterize the distribution

  • type: the ArrayFire datatype for which to create the tensor

  • calc_grad: flag denoting whether gradient calculation on the resulting Variable should be enabled
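A sketch of the factory functions above (shapes and distribution parameters are illustrative assumptions):

#include <flashlight/flashlight.h>

void factoryExample() {
  // 4x4 identity; gradient calculation is enabled by default.
  auto eye = fl::identity(4, 4);

  // Every element equals 0.1, with an arbitrary rank-3 shape.
  auto c = fl::constant(0.1, af::dim4(2, 3, 4));

  // Uniform on [-0.5, 0.5) and normal with stdv 0.02 and mean 0.
  auto u = fl::uniform(af::dim4(8, 8), -0.5, 0.5);
  auto n = fl::normal(af::dim4(8, 8), 0.02, 0.0);
}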

namespace fl


Functions

bool allClose(const Variable &a, const Variable &b, double absTolerance = 1e-5)

Returns true if two Variables are of the same type and are element-wise equal within a given tolerance limit.

Parameters
  • a, b: input Variables to compare

  • absTolerance: absolute tolerance allowed
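A sketch (the perturbation magnitude is an illustrative assumption):

#include <flashlight/flashlight.h>

void allCloseExample() {
  auto a = fl::Variable(af::randu(3, 3), false);
  auto b = a + 1e-7; // well within the default tolerance of 1e-5

  bool close = fl::allClose(a, b);        // true
  bool strict = fl::allClose(a, b, 1e-9); // false: tolerance tightened
}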