Common

enum common_defines::ReduceMode

Reduction mode to be used for CrossEntropy, AdaptiveSoftMax, etc.

Values:

NONE = 0
MEAN = 1
SUM = 2
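
For illustration, a minimal plain-C++ sketch (not a Flashlight API) of how each mode combines per-example losses:

#include <numeric>
#include <vector>

std::vector<float> losses = {0.5f, 1.5f, 1.0f};
// SUM: add all per-example losses -> 3.0
float sum = std::accumulate(losses.begin(), losses.end(), 0.0f);
// MEAN: divide the sum by the number of examples -> 1.0
float mean = sum / losses.size();
// NONE: leave `losses` unreduced, one value per example.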
enum common_defines::PoolingMode

Pooling method to be used.

Values:

MAX = 0

Use maximum value inside the pooling window.

AVG_INCLUDE_PADDING = 1

Use average value (including padding) inside the pooling window.

AVG_EXCLUDE_PADDING = 2

Use average value (excluding padding) inside the pooling window.
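
The two averaging modes differ only in their divisor. A worked example for a 2x2 window at a padded corner:

// Window covers three zero-padding cells and one real cell of value 4.
float windowSum = 4.0f;
float avgInclude = windowSum / 4.0f; // AVG_INCLUDE_PADDING: divide by window size -> 1.0
float avgExclude = windowSum / 1.0f; // AVG_EXCLUDE_PADDING: divide by valid cells -> 4.0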

enum common_defines::RnnMode

RNN network type.

Values:

RELU = 0
TANH = 1
LSTM = 2
GRU = 3
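
As a hedged sketch, the mode selects the cell type when constructing an RNN module; the fl::RNN constructor shape below is an assumption, not taken from this page:

// Assumed constructor arguments: (inputSize, hiddenSize, numLayers, mode).
auto lstm = fl::RNN(256, 512, /*numLayers=*/2, fl::RnnMode::LSTM);
// RELU and TANH select plain (Elman) RNN cells with those activations.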
enum common_defines::PaddingMode

Values:

SAME = -1

Use the smallest possible padding such that out_size = ceil(in_size / stride).
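
The total padding SAME implies follows from that identity; a minimal sketch:

#include <algorithm>

// Smallest total padding such that outSize == ceil(inSize / stride).
int samePadding(int inSize, int kernelSize, int stride) {
  int outSize = (inSize + stride - 1) / stride; // ceil(inSize / stride)
  return std::max(0, (outSize - 1) * stride + kernelSize - inSize);
}
// e.g. samePadding(10, 3, 2) == 1, giving outSize == 5 == ceil(10 / 2).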

enum common_defines::DistributedBackend

Values:

GLOO = 0

https://github.com/facebookincubator/gloo

NCCL = 1

https://developer.nvidia.com/nccl

STUB = 2
enum common_defines::DistributedInit

Values:

MPI = 0
FILE_SYSTEM = 1
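
A hedged sketch of bringing up the distributed environment; the fl::distributedInit signature and the constant name below are assumptions, not taken from this page:

// Assumed API: rendezvous over MPI, with rank/size discovered from the MPI
// environment when -1 is passed.
fl::distributedInit(
    fl::DistributedInit::MPI,
    /*worldRank=*/-1,
    /*worldSize=*/-1,
    {{fl::DistributedConstants::kMaxDevicePerNode, "8"}});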
enum common_defines::OptimLevel

Optimization levels in flashlight.

These determine the behavior of autograd operator computation as well as how operator inputs and outputs are cast.

Operator precision roughly follows the optimization levels found in NVIDIA Apex.

Values:

DEFAULT = 0

All operations occur in default (f32 or f64) precision.

O1 = 1

Operations that perform reduction accumulation, including layer/batch normalization, are performed in f32; all other operations are in f16.

To be used in a standard mixed-precision training setup.

O2 = 2

Only batch and layer normalization occur in f32; all other operations occur in f16.

O3 = 3

All operations that support it use f16.

constexpr std::size_t fl::kDynamicBenchmarkDefaultCount = 10
constexpr double fl::kAmpMinimumScaleFactorValue = 1e-4
class OptimMode
#include <Defines.h>

Singleton storing the current optimization level (OptimLevel) for flashlight.
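
A hedged usage sketch, assuming the singleton exposes get(), setOptimLevel(), and getOptimLevel(); consult Defines.h for the exact names:

// Enable standard mixed-precision training (assumed accessor names).
fl::OptimMode::get().setOptimLevel(fl::OptimLevel::O1);
fl::OptimLevel level = fl::OptimMode::get().getOptimLevel();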


class DevicePtr

DevicePtr provides an RAII wrapper for accessing the device pointer of a Flashlight Tensor.

After calling device() on a Flashlight tensor to get a device pointer, the underlying memory is not freed until unlock() is called - see fl::Tensor::unlock(). DevicePtr provides a std::unique_lock-style API that acquires the device pointer on construction and calls unlock() in its destructor. A DevicePtr is movable, but not copyable.

Example Usage :

auto A = Tensor({10, 10});
{
    DevicePtr devPtr(A); // calls `.device<>()` on array.
    void* ptr = devPtr.get();
}
// devPtr is destructed and A.unlock() is automatically called

Public Functions

DevicePtr()

Creates a null DevicePtr.

DevicePtr(const Tensor &in)

Parameters
  • in: input tensor whose device pointer is acquired

~DevicePtr()

unlock() is called on the underlying tensor in the destructor.

DevicePtr(const DevicePtr &other) = delete
DevicePtr &operator=(const DevicePtr &other) = delete
DevicePtr(DevicePtr &&d)
DevicePtr &operator=(DevicePtr &&other)
bool operator==(const DevicePtr &other) const
void *get() const
template<typename T>
T *getAs() const
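
getAs<T>() is a typed convenience over get(); reusing the example above:

auto A = Tensor({10, 10});
DevicePtr devPtr(A);
// Same device pointer as get(), cast to float*.
float* typed = devPtr.getAs<float>();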
class ThreadPool

A simple C++11 Thread Pool implementation.

Source - https://github.com/progschj/ThreadPool

Basic usage:

// create thread pool with 4 worker threads
ThreadPool pool(4);

// enqueue and store future
auto result = pool.enqueue([](int answer) { return answer; }, 42);

// get result from future
std::cout << result.get() << std::endl;

Public Functions

ThreadPool(size_t threads, const std::function<void(size_t)> &initFn = nullptr)

The constructor launches the given number of worker threads.

Parameters
  • [in] threads: number of threads

  • [in] initFn: initialization code (if any) that will be run on all the threads
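
For example, initFn can run per-thread setup before any work is enqueued:

#include <cstdio>

// initFn receives each worker's index and runs once on that thread.
ThreadPool pool(4, [](size_t threadId) {
  std::printf("worker %zu ready\n", threadId);
});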

template<class F, class ...Args>
auto enqueue(F &&f, Args&&... args)

Adds a new work item to the pool.

Parameters
  • [in] f: function to be executed in threadpool

  • [in] args: variadic arguments for the function

~ThreadPool()

The destructor joins all threads.