OmniSciDB  72c90bc290
ExecutorResourceMgr_Namespace::ExecutorResourcePool Class Reference

ExecutorResourcePool keeps track of available compute and memory resources and can be queried to get the min and max resources grantable (embodied in a ResourceGrant) for a request, given a ResourceRequest. More...

#include <ExecutorResourcePool.h>

Public Member Functions

 ExecutorResourcePool (const std::vector< std::pair< ResourceType, size_t >> &total_resources, const std::vector< ConcurrentResourceGrantPolicy > &concurrent_resource_grant_policies, const std::vector< ResourceGrantPolicy > &max_per_request_resource_grant_policies)
 
void log_parameters () const
 
std::vector< ResourceRequestGrant > calc_static_resource_grant_ranges_for_request (const std::vector< ResourceRequest > &resource_requests) const
 
std::pair< ResourceGrant, ResourceGrant > calc_min_max_resource_grants_for_request (const RequestInfo &resource_request) const
 Given the provided resource_request, statically calculate the minimum and maximum grantable resources for that request. Note that the max resource grant may be less than requested by the query. More...
 
bool can_currently_satisfy_request (const ResourceGrant &min_resource_grant, const ChunkRequestInfo &chunk_request_info) const
 
std::pair< bool, ResourceGrant > determine_dynamic_resource_grant (const ResourceGrant &min_resource_grant, const ResourceGrant &max_resource_grant, const ChunkRequestInfo &chunk_request_info, const double max_request_backoff_ratio) const
 Determines the actual resource grant to give a query (which will be somewhere between the provided min_resource_grant and max_resource_grant, unless it is determined that the request cannot be currently satisfied). More...
 
void allocate_resources (const ResourceGrant &resource_grant, const ChunkRequestInfo &chunk_request_info)
 Given a resource grant (assumed to be computed in determine_dynamic_resource_grant), actually allocate (reserve) the resources in the pool so other requestors (queries) cannot use those resources until returned to the pool. More...
 
void deallocate_resources (const ResourceGrant &resource_grant, const ChunkRequestInfo &chunk_request_info)
 Deallocates resources granted to a requestor such that they can be used for other requests. More...
 
std::pair< size_t, size_t > get_resource_info (const ResourceType resource_type) const
 Returns the allocated and total available amount of the resource specified. More...
 
ResourcePoolInfo get_resource_info () const
 Returns a struct detailing the allocated and total available resources of each type tracked in ExecutorResourcePool. More...
 
void set_resource (const ResourceType resource_type, const size_t resource_quantity)
 Sets the quantity of resource_type to resource_quantity. If the pool has outstanding requests, this method will throw. Responsibility for allowing the pool to empty and for preventing concurrent requests while this operation is running is left to the caller (in particular, ExecutorResourceMgr::set_resource pauses the process queue, which waits until all executing requests have finished before yielding to the caller, and only then calls this method). More...
 
ConcurrentResourceGrantPolicy get_concurrent_resource_grant_policy (const ResourceType resource_type) const
 
const ResourceGrantPolicy & get_max_resource_grant_per_request_policy (const ResourceSubtype resource_subtype) const
 
void set_concurrent_resource_grant_policy (const ConcurrentResourceGrantPolicy &concurrent_resource_grant_policy)
 Resets the concurrent resource grant policy object, which specifies a ResourceType as well as normal and oversubscription concurrency policies. If the pool has outstanding requests, this method will throw. Responsibility for allowing the pool to empty and for preventing concurrent requests while this operation is running is left to the caller (in particular, ExecutorResourceMgr::set_concurrent_resource_grant_policy pauses the process queue, which waits until all executing requests have finished before yielding to the caller, and only then calls this method). More...
 

Private Member Functions

void init (const std::vector< std::pair< ResourceType, size_t >> &total_resources, const std::vector< ConcurrentResourceGrantPolicy > &concurrent_resource_grant_policies, const std::vector< ResourceGrantPolicy > &max_per_request_resource_grant_policies)
 
void init_concurrency_policies ()
 
void init_max_resource_grants_per_requests ()
 
void throw_insufficient_resource_error (const ResourceSubtype resource_subtype, const size_t min_resource_requested) const
 
size_t calc_max_resource_grant_for_request (const size_t requested_resource_quantity, const size_t min_requested_resource_quantity, const size_t max_grantable_resource_quantity) const
 
std::pair< size_t, size_t > calc_min_dependent_resource_grant_for_request (const size_t min_requested_dependent_resource_quantity, const size_t min_requested_independent_resource_quantity, const size_t dependent_to_independent_resource_ratio) const
 
std::pair< size_t, size_t > calc_max_dependent_resource_grant_for_request (const size_t requested_dependent_resource_quantity, const size_t min_requested_dependent_resource_quantity, const size_t max_grantable_dependent_resource_quantity, const size_t min_requested_independent_resource_quantity, const size_t max_grantable_independent_resource_quantity, const size_t dependent_to_independent_resource_ratio) const
 
bool check_request_against_global_policy (const size_t resource_total, const size_t resource_allocated, const ConcurrentResourceGrantPolicy &concurrent_resource_grant_policy) const
 
bool check_request_against_policy (const size_t resource_request, const size_t resource_total, const size_t resource_allocated, const size_t global_outstanding_requests, const ConcurrentResourceGrantPolicy &concurrent_resource_grant_policy) const
 
bool can_currently_satisfy_request_impl (const ResourceGrant &min_resource_grant, const ChunkRequestInfo &chunk_request_info) const
 
bool can_currently_satisfy_chunk_request (const ResourceGrant &min_resource_grant, const ChunkRequestInfo &chunk_request_info) const
 
ChunkRequestInfo get_requested_chunks_not_in_pool (const ChunkRequestInfo &chunk_request_info) const
 
size_t get_chunk_bytes_not_in_pool (const ChunkRequestInfo &chunk_request_info) const
 
void add_chunk_requests_to_allocated_pool (const ResourceGrant &resource_grant, const ChunkRequestInfo &chunk_request_info)
 
void remove_chunk_requests_from_allocated_pool (const ResourceGrant &resource_grant, const ChunkRequestInfo &chunk_request_info)
 
size_t determine_dynamic_single_resource_grant (const size_t min_resource_requested, const size_t max_resource_requested, const size_t resource_allocated, const size_t total_resource, const double max_request_backoff_ratio) const
 
void sanity_check_requests_against_allocations () const
 
size_t get_total_allocated_buffer_pool_mem_for_level (const ExecutorDeviceType memory_pool_type) const
 
bool is_resource_valid (const ResourceType resource_type) const
 
size_t get_total_resource (const ResourceType resource_type) const
 
size_t get_allocated_resource_of_subtype (const ResourceSubtype resource_subtype) const
 
size_t get_allocated_resource_of_type (const ResourceType resource_type) const
 
size_t get_max_resource_grant_per_request (const ResourceSubtype resource_subtype) const
 
size_t get_total_per_resource_num_requests (const ResourceType resource_type) const
 
size_t increment_total_per_resource_num_requests (const ResourceType resource_type)
 
size_t decrement_total_per_resource_num_requests (const ResourceType resource_type)
 
size_t get_outstanding_per_resource_num_requests (const ResourceType resource_type) const
 
size_t increment_outstanding_per_resource_num_requests (const ResourceType resource_type)
 
size_t decrement_outstanding_per_resource_num_requests (const ResourceType resource_type)
 

Private Attributes

std::array< size_t, ResourceTypeSize > total_resources_ {}
 
std::array< bool, ResourceTypeSize > resource_type_validity_
 
std::array< size_t, ResourceSubtypeSize > allocated_resources_ {}
 
std::array< ResourceGrantPolicy, ResourceSubtypeSize > max_resource_grants_per_request_policies_ {}
 
std::array< size_t, ResourceSubtypeSize > max_resource_grants_per_request_ {}
 
std::array< ConcurrentResourceGrantPolicy, ResourceTypeSize > concurrent_resource_grant_policies_
 
size_t total_num_requests_ {0}
 
size_t outstanding_num_requests_ {0}
 
std::array< size_t, ResourceTypeSize > total_per_resource_num_requests_ {}
 
std::array< size_t, ResourceTypeSize > outstanding_per_resource_num_requests_ {}
 
BufferPoolChunkMap allocated_cpu_buffer_pool_chunks_
 
BufferPoolChunkMap allocated_gpu_buffer_pool_chunks_
 
const bool sanity_check_pool_state_on_deallocations_ {false}
 
std::shared_mutex resource_mutex_
 

Detailed Description

ExecutorResourcePool keeps track of available compute and memory resources and can be queried to get the min and max resources grantable (embodied in a ResourceGrant) for a request, given a ResourceRequest.

ExecutorResourcePool keeps track of logical resources available to the executor, categorized and typed by the ResourceType enum. Current valid categories of ResourceType include CPU_SLOTS, GPU_SLOTS, CPU_RESULT_MEM, CPU_BUFFER_POOL_MEM, and GPU_BUFFER_POOL_MEM. Furthermore, a ResourceSubtype enum is used to represent more granular sub-categories of the above. Namely, there are the ResourceSubtypes PINNED_CPU_BUFFER_POOL_MEM and PINNED_GPU_BUFFER_POOL_MEM to represent non-pageable memory (specifically for kernel results), and PAGEABLE_CPU_BUFFER_POOL_MEM and PAGEABLE_GPU_BUFFER_POOL_MEM to represent data that could be evicted as necessary.

Currently, a singleton ExecutorResourcePool is managed by ExecutorResourceMgr and is initialized by the latter in the ExecutorResourceMgr constructor. Various parameters driving behavior of ExecutorResourcePool are passed to its constructor, comprising the total resources available in the pool in each of the above categories, policies around concurrent requests to the pool for each of the resources (embodied in a vector of ConcurrentResourceGrantPolicy), and policies around limits to individual resource grants (embodied in a vector of ResourceGrantPolicy).
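For illustration, the constructor arguments could be assembled roughly as follows. This is a sketch only: the quantities are placeholders, and the construction of the two policy vectors is elided rather than taken from ExecutorResourceMgr.

#include <ExecutorResourcePool.h>

using namespace ExecutorResourceMgr_Namespace;

// Illustrative totals only; real values are derived from system configuration
// (core counts, buffer pool sizes, etc.) by ExecutorResourceMgr.
const std::vector<std::pair<ResourceType, size_t>> total_resources = {
    {ResourceType::CPU_SLOTS, 16},
    {ResourceType::GPU_SLOTS, 2},
    {ResourceType::CPU_RESULT_MEM, size_t(8) << 30},          // 8 GB
    {ResourceType::CPU_BUFFER_POOL_MEM, size_t(32) << 30},    // 32 GB
    {ResourceType::GPU_BUFFER_POOL_MEM, size_t(16) << 30}};   // 16 GB

// Policy construction elided; see ConcurrentResourceGrantPolicy and ResourceGrantPolicy.
const std::vector<ConcurrentResourceGrantPolicy> concurrent_resource_grant_policies;
const std::vector<ResourceGrantPolicy> max_per_request_resource_grant_policies;

ExecutorResourcePool resource_pool(total_resources,
                                   concurrent_resource_grant_policies,
                                   max_per_request_resource_grant_policies);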

Generally, for a given resource request, the following lifecycle is prescribed, as can be seen in the various invocations of ExecutorResourcePool methods by ExecutorResourceMgr (a minimal usage sketch follows this list):

  1. calc_min_max_resource_grants_for_request: Get the min and max possible resource grant, given a resource_request. If it is determined to be impossible to grant even the minimum resources specified in resource_request, this will throw an error.
  2. determine_dynamic_resource_grant: Given the min and max possible resource grants determined from #1, the ExecutorResourcePool calculates an actual grant to give a query based on current resource availability in the pool.
  3. allocate_resources: Allocate the actual resource grant computed in #2 from the ExecutorResourcePool, marking the resources as used/allocated so they cannot be used by other queries/requestors until deallocated.
  4. deallocate_resources: Ultimately invoked from the destructor of the resource handle given to the executing thread, this returns the allocated resources to the pool for use by other queries.
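The sketch below strings these four calls together in the prescribed order. It is illustrative only: the RequestInfo construction, the backoff-ratio value, and the queueing and error handling stand in for what ExecutorResourceMgr actually does.

#include <ExecutorResourcePool.h>

using namespace ExecutorResourceMgr_Namespace;

// Minimal lifecycle sketch, assuming an already-constructed pool and a populated
// RequestInfo (both elided here).
void run_one_request(ExecutorResourcePool& pool, const RequestInfo& request_info) {
  // 1. Static min/max bounds; throws (e.g. QueryNeedsTooManyCpuSlots) if even the
  //    minimum request could never be granted by this pool.
  const auto [min_grant, max_grant] =
      pool.calc_min_max_resource_grants_for_request(request_info);

  // 2. Actual grant given the pool's current allocations; 0.5 is an illustrative
  //    backoff ratio, not a default.
  const auto [grantable_now, actual_grant] = pool.determine_dynamic_resource_grant(
      min_grant, max_grant, request_info.chunk_request_info, 0.5);
  if (!grantable_now) {
    return;  // ExecutorResourceMgr would instead keep the request queued and retry.
  }

  // 3. Reserve the granted resources so other requestors cannot take them.
  pool.allocate_resources(actual_grant, request_info.chunk_request_info);

  // ... execute the query using actual_grant ...

  // 4. Return the resources; normally done by the destructor of the resource handle
  //    that ExecutorResourceMgr hands to the executing thread.
  pool.deallocate_resources(actual_grant, request_info.chunk_request_info);
}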

Definition at line 237 of file ExecutorResourcePool.h.

Constructor & Destructor Documentation

ExecutorResourceMgr_Namespace::ExecutorResourcePool::ExecutorResourcePool ( const std::vector< std::pair< ResourceType, size_t >> &  total_resources,
const std::vector< ConcurrentResourceGrantPolicy > &  concurrent_resource_grant_policies,
const std::vector< ResourceGrantPolicy > &  max_per_request_resource_grant_policies 
)

Definition at line 48 of file ExecutorResourcePool.cpp.

References init(), and log_parameters().

51  {
52  init(total_resources,
53  concurrent_resource_grant_policies,
54  max_per_request_resource_grant_policies);
55  log_parameters();
56 }
void init(const std::vector< std::pair< ResourceType, size_t >> &total_resources, const std::vector< ConcurrentResourceGrantPolicy > &concurrent_resource_grant_policies, const std::vector< ResourceGrantPolicy > &max_per_request_resource_grant_policies)


Member Function Documentation

void ExecutorResourceMgr_Namespace::ExecutorResourcePool::add_chunk_requests_to_allocated_pool ( const ResourceGrant &  resource_grant,
const ChunkRequestInfo &  chunk_request_info 
)
private

Definition at line 714 of file ExecutorResourcePool.cpp.

References allocated_cpu_buffer_pool_chunks_, allocated_gpu_buffer_pool_chunks_, allocated_resources_, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_for_given_slots, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_gated_per_slot, CHECK, CHECK_LE, ExecutorResourceMgr_Namespace::ChunkRequestInfo::chunks_with_byte_sizes, CPU, ExecutorResourceMgr_Namespace::CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::debug_print(), ExecutorResourceMgr_Namespace::ChunkRequestInfo::device_memory_pool_type, ExecutorResourceMgr_Namespace::ENABLE_DEBUG_PRINTING, logger::EXECUTOR, format_num_bytes(), get_allocated_resource_of_subtype(), get_total_allocated_buffer_pool_mem_for_level(), get_total_resource(), ExecutorResourceMgr_Namespace::GPU_BUFFER_POOL_MEM, LOG, ExecutorResourceMgr_Namespace::ChunkRequestInfo::num_chunks, ExecutorResourceMgr_Namespace::PAGEABLE_CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::PINNED_CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::PINNED_GPU_BUFFER_POOL_MEM, and ExecutorResourceMgr_Namespace::ChunkRequestInfo::total_bytes.

Referenced by allocate_resources().

716  {
717  // Expects lock on resource_mutex_ already taken
718 
719  if (resource_grant.buffer_mem_gated_per_slot) {
720  CHECK(chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU);
722  chunk_request_info.device_memory_pool_type) +
723  resource_grant.buffer_mem_for_given_slots,
725  allocated_resources_[static_cast<size_t>(
727  resource_grant.buffer_mem_for_given_slots;
728 
729  const std::string& pool_level_string =
730  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU ? "CPU"
731  : "GPU";
732  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
733  << " allocated_temp chunk addition: "
734  << format_num_bytes(resource_grant.buffer_mem_for_given_slots);
735  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
736  << " pool state: Transient Allocations: "
739  << " Total Allocations: "
741  chunk_request_info.device_memory_pool_type));
742  return;
743  }
744 
745  BufferPoolChunkMap& chunk_map_for_memory_level =
746  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
749  size_t& pinned_buffer_mem_for_memory_level =
750  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
751  ? allocated_resources_[static_cast<size_t>(
753  : allocated_resources_[static_cast<size_t>(
755  const size_t total_buffer_mem_for_memory_level =
756  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
758  : get_total_resource(ResourceType::GPU_BUFFER_POOL_MEM);
759 
760  // Following variables are for logging
761  const size_t pre_pinned_chunks_for_memory_level = chunk_map_for_memory_level.size();
762  const size_t pre_pinned_buffer_mem_for_memory_level =
763  pinned_buffer_mem_for_memory_level;
764 
765  for (const auto& requested_chunk : chunk_request_info.chunks_with_byte_sizes) {
766  auto chunk_itr = chunk_map_for_memory_level.find(requested_chunk.first);
767  if (chunk_itr == chunk_map_for_memory_level.end()) {
768  pinned_buffer_mem_for_memory_level += requested_chunk.second;
769  chunk_map_for_memory_level.insert(
770  std::make_pair(requested_chunk.first,
771  std::make_pair(size_t(1) /* initial reference count */,
772  requested_chunk.second)));
773  } else {
774  if (requested_chunk.second > chunk_itr->second.second) {
775  pinned_buffer_mem_for_memory_level +=
776  requested_chunk.second - chunk_itr->second.second;
777  chunk_itr->second.second = requested_chunk.second;
778  }
779  chunk_itr->second.first += 1; // Add reference count
780  }
781  }
782  const size_t post_pinned_chunks_for_memory_level = chunk_map_for_memory_level.size();
783  const size_t net_new_allocated_chunks =
784  post_pinned_chunks_for_memory_level - pre_pinned_chunks_for_memory_level;
785  const size_t net_new_allocated_memory =
786  pinned_buffer_mem_for_memory_level - pre_pinned_buffer_mem_for_memory_level;
787 
788  const std::string& pool_level_string =
789  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU ? "CPU"
790  : "GPU";
791  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
792  << " chunk allocation: " << chunk_request_info.num_chunks << " chunks | "
793  << format_num_bytes(chunk_request_info.total_bytes);
794  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
795  << " pool delta: " << net_new_allocated_chunks << " chunks added | "
796  << format_num_bytes(net_new_allocated_memory);
797  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
798  << " pool state: " << post_pinned_chunks_for_memory_level << " chunks | "
800  chunk_request_info.device_memory_pool_type));
801 
802  if (ENABLE_DEBUG_PRINTING) {
803  debug_print("After chunk allocation: ",
804  format_num_bytes(pinned_buffer_mem_for_memory_level),
805  " of ",
806  format_num_bytes(total_buffer_mem_for_memory_level),
807  ", with ",
808  chunk_map_for_memory_level.size(),
809  " chunks.");
810  }
811  CHECK_LE(pinned_buffer_mem_for_memory_level, total_buffer_mem_for_memory_level);
812 }
size_t get_total_resource(const ResourceType resource_type) const
std::array< size_t, ResourceSubtypeSize > allocated_resources_
#define LOG(tag)
Definition: Logger.h:285
size_t get_allocated_resource_of_subtype(const ResourceSubtype resource_subtype) const
ResourceType
Stores the resource type for a ExecutorResourcePool request.
std::map< ChunkKey, std::pair< size_t, size_t >> BufferPoolChunkMap
size_t get_total_allocated_buffer_pool_mem_for_level(const ExecutorDeviceType memory_pool_type) const
std::string format_num_bytes(const size_t bytes)
#define CHECK_LE(x, y)
Definition: Logger.h:304
#define CHECK(condition)
Definition: Logger.h:291
void debug_print(Ts &&...print_args)


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::allocate_resources ( const ResourceGrant &  resource_grant,
const ChunkRequestInfo &  chunk_request_info 
)

Given a resource grant (assumed to be computed in determine_dynamic_resource_grant), actually allocate (reserve) the resources in the pool so other requestors (queries) cannot use those resources until returned to the pool.

Note that the chunk requests do not and should not necessarily match the state of the BufferMgrs (where evictions can happen, etc.), but are just used to keep track of what chunks are pledged to running queries. In the future we may try to get all of this info from the BufferMgr directly, but we would need to add a layer of state there that would keep track of both what is currently allocated and what is pledged to queries. For now, this effort was not deemed worth the complexity and risk it would introduce.
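The pledged-chunk bookkeeping this refers to can be seen in add_chunk_requests_to_allocated_pool above: chunks shared across queries are reference counted, and only net-new bytes count against the buffer pool. Below is a standalone sketch of that accounting, with an int key standing in for the real ChunkKey; it is an illustration, not the pool's implementation.

#include <cstddef>
#include <map>
#include <utility>

// Simplified stand-in for BufferPoolChunkMap: chunk key -> (reference count, bytes).
using SketchChunkMap = std::map<int, std::pair<size_t, size_t>>;

// Pledge a chunk for a query; returns the bytes newly counted against the pool.
size_t pledge_chunk(SketchChunkMap& pool_chunks, int chunk_key, size_t chunk_bytes) {
  auto itr = pool_chunks.find(chunk_key);
  if (itr == pool_chunks.end()) {
    pool_chunks.emplace(chunk_key, std::make_pair(size_t(1), chunk_bytes));
    return chunk_bytes;  // no other query has pledged this chunk yet
  }
  itr->second.first += 1;  // another query now references this chunk
  size_t net_new_bytes = 0;
  if (chunk_bytes > itr->second.second) {
    net_new_bytes = chunk_bytes - itr->second.second;  // grow the recorded size
    itr->second.second = chunk_bytes;
  }
  return net_new_bytes;
}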

Parameters
resource_grant- Granted resource_grant, assumed to be determined previously in determine_dynamic_resource_grant
chunk_request_info- The DataMgr chunk keys and other associated info needed by this query. The ExecutorResourcePool must keep track of chunks that are in the pool so it can properly determine whether queries can execute (given chunks can be shared resources across requestors/queries).

Definition at line 1022 of file ExecutorResourcePool.cpp.

References add_chunk_requests_to_allocated_pool(), allocated_resources_, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_gated_per_slot, can_currently_satisfy_request_impl(), CHECK, CPU, ExecutorResourceMgr_Namespace::CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::CPU_RESULT_MEM, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_result_mem, ExecutorResourceMgr_Namespace::CPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_slots, ExecutorResourceMgr_Namespace::ChunkRequestInfo::device_memory_pool_type, logger::EXECUTOR, format_num_bytes(), get_allocated_resource_of_type(), get_outstanding_per_resource_num_requests(), get_total_resource(), GPU, ExecutorResourceMgr_Namespace::GPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::GPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::gpu_slots, increment_outstanding_per_resource_num_requests(), increment_total_per_resource_num_requests(), LOG, ExecutorResourceMgr_Namespace::ChunkRequestInfo::num_chunks, outstanding_num_requests_, resource_mutex_, ExecutorResourceMgr_Namespace::ChunkRequestInfo::total_bytes, and total_num_requests_.

Referenced by ExecutorResourceMgr_Namespace::ExecutorResourceMgr::process_queue_loop().

1024  {
1025  std::unique_lock<std::shared_mutex> resource_write_lock(resource_mutex_);
1026 
1027  // Caller (ExecutorResourceMgr) should never request resource allocation for a request
1028  // it knows cannot be granted, however use below as a sanity check Use unlocked
1029  // internal method as we already hold lock above
1030  const bool can_satisfy_request =
1031  can_currently_satisfy_request_impl(resource_grant, chunk_request_info);
1032  CHECK(can_satisfy_request);
1033 
1034  allocated_resources_[static_cast<size_t>(ResourceSubtype::CPU_SLOTS)] +=
1035  resource_grant.cpu_slots;
1036  allocated_resources_[static_cast<size_t>(ResourceSubtype::GPU_SLOTS)] +=
1037  resource_grant.gpu_slots;
1038  allocated_resources_[static_cast<size_t>(ResourceSubtype::CPU_RESULT_MEM)] +=
1039  resource_grant.cpu_result_mem;
1040 
1043  if (resource_grant.cpu_slots > 0) {
1046  }
1047  if (resource_grant.gpu_slots > 0) {
1050  }
1051  if (resource_grant.cpu_result_mem > 0) {
1054  }
1055  if (chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU) {
1056  if (resource_grant.buffer_mem_gated_per_slot ||
1057  (chunk_request_info.num_chunks > 0 && chunk_request_info.total_bytes > 0)) {
1060  }
1061  } else if (chunk_request_info.device_memory_pool_type == ExecutorDeviceType::GPU) {
1062  if (resource_grant.buffer_mem_gated_per_slot ||
1063  (chunk_request_info.num_chunks > 0 && chunk_request_info.total_bytes > 0)) {
1066  }
1067  }
1068 
1069  LOG(EXECUTOR) << "ExecutorResourcePool allocation: " << outstanding_num_requests_
1070  << " requests ("
1072  << " CPU | "
1074  << " GPU)";
1075  LOG(EXECUTOR) << "ExecutorResourcePool state: CPU slots: "
1077  << get_total_resource(ResourceType::CPU_SLOTS) << " | GPU slots: "
1079  << get_total_resource(ResourceType::GPU_SLOTS) << " CPU result mem: "
1080  << format_num_bytes(
1082  << " of "
1084  add_chunk_requests_to_allocated_pool(resource_grant, chunk_request_info);
1085 }
size_t get_total_resource(const ResourceType resource_type) const
std::array< size_t, ResourceSubtypeSize > allocated_resources_
#define LOG(tag)
Definition: Logger.h:285
bool can_currently_satisfy_request_impl(const ResourceGrant &min_resource_grant, const ChunkRequestInfo &chunk_request_info) const
void add_chunk_requests_to_allocated_pool(const ResourceGrant &resource_grant, const ChunkRequestInfo &chunk_request_info)
size_t increment_outstanding_per_resource_num_requests(const ResourceType resource_type)
size_t increment_total_per_resource_num_requests(const ResourceType resource_type)
std::string format_num_bytes(const size_t bytes)
size_t get_outstanding_per_resource_num_requests(const ResourceType resource_type) const
#define CHECK(condition)
Definition: Logger.h:291
size_t get_allocated_resource_of_type(const ResourceType resource_type) const


std::pair< size_t, size_t > ExecutorResourceMgr_Namespace::ExecutorResourcePool::calc_max_dependent_resource_grant_for_request ( const size_t  requested_dependent_resource_quantity,
const size_t  min_requested_dependent_resource_quantity,
const size_t  max_grantable_dependent_resource_quantity,
const size_t  min_requested_independent_resource_quantity,
const size_t  max_grantable_independent_resource_quantity,
const size_t  dependent_to_independent_resource_ratio 
) const
private

Definition at line 266 of file ExecutorResourcePool.cpp.

References calc_min_dependent_resource_grant_for_request(), CHECK_GE, and CHECK_LE.

Referenced by calc_min_max_resource_grants_for_request(), and determine_dynamic_resource_grant().

272  {
273  CHECK_LE(min_requested_dependent_resource_quantity,
274  requested_dependent_resource_quantity);
275  CHECK_LE(min_requested_independent_resource_quantity,
276  max_grantable_independent_resource_quantity);
277 
278  if (requested_dependent_resource_quantity <=
279  max_grantable_dependent_resource_quantity) {
280  // Dependent resource request falls under max grantable limit, grant requested
281  // resource
282  return std::make_pair(requested_dependent_resource_quantity,
283  max_grantable_independent_resource_quantity);
284  }
285  // First member of pair returned is min resource grant, second is min dependent
286  // resource grant
287  const auto adjusted_min_dependent_and_independent_resource_grant =
289  min_requested_dependent_resource_quantity,
290  min_requested_independent_resource_quantity,
291  dependent_to_independent_resource_ratio);
292 
293  if (adjusted_min_dependent_and_independent_resource_grant.first >
294  max_grantable_dependent_resource_quantity) {
295  // If here the min grantable dependent resource is greater than what was to provided
296  // to the function as grantable of the dependent resource
297  return std::make_pair(static_cast<size_t>(0), static_cast<size_t>(0));
298  }
299 
300  const size_t adjusted_max_independent_resource_quantity = std::min(
301  max_grantable_dependent_resource_quantity / dependent_to_independent_resource_ratio,
302  max_grantable_independent_resource_quantity);
303 
304  CHECK_GE(adjusted_max_independent_resource_quantity,
305  adjusted_min_dependent_and_independent_resource_grant.second);
306 
307  const size_t granted_dependent_resource_quantity =
308  dependent_to_independent_resource_ratio *
309  adjusted_max_independent_resource_quantity;
310  return std::make_pair(granted_dependent_resource_quantity,
311  adjusted_max_independent_resource_quantity);
312 }
#define CHECK_GE(x, y)
Definition: Logger.h:306
#define CHECK_LE(x, y)
Definition: Logger.h:304
std::pair< size_t, size_t > calc_min_dependent_resource_grant_for_request(const size_t min_requested_dependent_resource_quantity, const size_t min_requested_independent_resource_quantity, const size_t dependent_to_independent_resource_ratio) const


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::calc_max_resource_grant_for_request ( const size_t  requested_resource_quantity,
const size_t  min_requested_resource_quantity,
const size_t  max_grantable_resource_quantity 
) const
private

Definition at line 235 of file ExecutorResourcePool.cpp.

Referenced by calc_min_max_resource_grants_for_request(), and calc_static_resource_grant_ranges_for_request().

238  {
239  if (requested_resource_quantity <= max_grantable_resource_quantity) {
240  return requested_resource_quantity;
241  }
242  if (min_requested_resource_quantity <= max_grantable_resource_quantity) {
243  return max_grantable_resource_quantity;
244  }
245  return static_cast<size_t>(0);
246 }


std::pair< size_t, size_t > ExecutorResourceMgr_Namespace::ExecutorResourcePool::calc_min_dependent_resource_grant_for_request ( const size_t  min_requested_dependent_resource_quantity,
const size_t  min_requested_independent_resource_quantity,
const size_t  dependent_to_independent_resource_ratio 
) const
private

Definition at line 249 of file ExecutorResourcePool.cpp.

Referenced by calc_max_dependent_resource_grant_for_request(), and calc_min_max_resource_grants_for_request().

252  {
253  const size_t adjusted_min_independent_resource_quantity =
254  std::max(static_cast<size_t>(
255  ceil(static_cast<double>(min_requested_dependent_resource_quantity) /
256  dependent_to_independent_resource_ratio)),
257  min_requested_independent_resource_quantity);
258  const size_t adjusted_min_dependent_resource_quantity =
259  adjusted_min_independent_resource_quantity *
260  dependent_to_independent_resource_ratio;
261  return std::make_pair(adjusted_min_dependent_resource_quantity,
262  adjusted_min_independent_resource_quantity);
263 }
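As a brief standalone illustration of the rounding performed above (the quantities are made up, not defaults): with a ratio of 1 GB of buffer pool memory per CPU slot, a minimum dependent request of 3.5 GB, and a minimum of 2 slots, the grant is rounded up to 4 slots and 4 GB.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>

int main() {
  const size_t ratio = size_t(1) << 30;                             // 1 GB needed per CPU slot
  const size_t min_dependent = (size_t(3) << 30) + (size_t(1) << 29);  // 3.5 GB minimum buffer pool need
  const size_t min_independent = 2;                                 // minimum CPU slots requested

  // Mirrors the adjustment in calc_min_dependent_resource_grant_for_request.
  const size_t adjusted_independent = std::max(
      static_cast<size_t>(std::ceil(static_cast<double>(min_dependent) / ratio)),
      min_independent);
  const size_t adjusted_dependent = adjusted_independent * ratio;
  std::cout << adjusted_independent << " slots, " << adjusted_dependent << " bytes\n";
  // Prints: 4 slots, 4294967296 bytes (4 GB)
}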


std::pair< ResourceGrant, ResourceGrant > ExecutorResourceMgr_Namespace::ExecutorResourcePool::calc_min_max_resource_grants_for_request ( const RequestInfo &  resource_request) const

Given the provided resource_request, statically calculate the minimum and maximum grantable resources for that request. Note that the max resource grant may be less than requested by the query.

Note that this method only looks at the static total available resources as well as the ideal and minimum resources requested (in resource_request) to determine the max grants, and does not evaluate the current state of resource use in the pool. That is done in a later call, determine_dynamic_resource_grant.

Parameters
resource_request- Details the resources a query would like to have as well as the minimum resources it can run with
Returns
std::pair<ResourceGrant, ResourceGrant> - A pair of the minimum and maximum resource grants that could potentially be made for this request.
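The per-resource clamping follows the rule in calc_max_resource_grant_for_request, documented earlier on this page: grant the full request if it fits under the per-request cap, otherwise grant the cap if the minimum still fits, otherwise return zero (which this method converts into an exception such as QueryNeedsTooManyCpuSlots). A standalone sketch with illustrative limits:

#include <cstddef>
#include <iostream>

// Same clamping rule as calc_max_resource_grant_for_request.
size_t clamp_grant(size_t requested, size_t min_requested, size_t max_grantable) {
  if (requested <= max_grantable) {
    return requested;      // request fits: grant what was asked for
  }
  if (min_requested <= max_grantable) {
    return max_grantable;  // cap the grant at the per-request limit
  }
  return 0;                // not even the minimum fits
}

int main() {
  // Assume an illustrative per-request cap of 8 CPU slots.
  std::cout << clamp_grant(6, 2, 8) << "\n";   // 6: fits, granted in full
  std::cout << clamp_grant(16, 4, 8) << "\n";  // 8: capped at the limit
  std::cout << clamp_grant(16, 12, 8) << "\n"; // 0: even the minimum exceeds the limit
}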

Definition at line 365 of file ExecutorResourcePool.cpp.

References ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_for_given_slots, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_gated_per_slot, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_per_slot, calc_max_dependent_resource_grant_for_request(), calc_max_resource_grant_for_request(), calc_min_dependent_resource_grant_for_request(), CHECK, CHECK_EQ, CHECK_GE, CHECK_GT, CHECK_LE, ExecutorResourceMgr_Namespace::RequestInfo::chunk_request_info, CPU, ExecutorResourceMgr_Namespace::RequestInfo::cpu_result_mem, ExecutorResourceMgr_Namespace::CPU_RESULT_MEM, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_result_mem, ExecutorResourceMgr_Namespace::RequestInfo::cpu_slots, ExecutorResourceMgr_Namespace::CPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_slots, get_max_resource_grant_per_request(), ExecutorResourceMgr_Namespace::RequestInfo::gpu_slots, ExecutorResourceMgr_Namespace::GPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::gpu_slots, ExecutorResourceMgr_Namespace::RequestInfo::min_cpu_result_mem, ExecutorResourceMgr_Namespace::RequestInfo::min_cpu_slots, ExecutorResourceMgr_Namespace::RequestInfo::min_gpu_slots, ExecutorResourceMgr_Namespace::PAGEABLE_CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::PAGEABLE_GPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::PINNED_CPU_BUFFER_POOL_MEM, and ExecutorResourceMgr_Namespace::PINNED_GPU_BUFFER_POOL_MEM.

Referenced by ExecutorResourceMgr_Namespace::ExecutorResourceMgr::request_resources_with_timeout().

366  {
367  ResourceGrant min_resource_grant, max_resource_grant;
368 
369  CHECK_LE(request_info.min_cpu_slots, request_info.cpu_slots);
370  CHECK_LE(request_info.min_gpu_slots, request_info.gpu_slots);
371  CHECK_LE(request_info.min_cpu_result_mem, request_info.cpu_result_mem);
372 
373  max_resource_grant.cpu_slots = calc_max_resource_grant_for_request(
374  request_info.cpu_slots,
375  request_info.min_cpu_slots,
377  if (max_resource_grant.cpu_slots == 0 && request_info.min_cpu_slots > 0) {
378  throw QueryNeedsTooManyCpuSlots(
380  request_info.min_cpu_slots);
381  }
382 
383  max_resource_grant.gpu_slots = calc_max_resource_grant_for_request(
384  request_info.gpu_slots,
385  request_info.min_gpu_slots,
387  if (max_resource_grant.gpu_slots == 0 && request_info.min_gpu_slots > 0) {
388  throw QueryNeedsTooManyGpuSlots(
390  request_info.min_gpu_slots);
391  }
392 
393  // Todo (todd): Modulate number of CPU threads launched to ensure that
394  // query can fit in max grantable CPU result memory (if possible)
395  max_resource_grant.cpu_result_mem = calc_max_resource_grant_for_request(
396  request_info.cpu_result_mem,
397  request_info.min_cpu_result_mem,
399  if (max_resource_grant.cpu_result_mem == 0 && request_info.min_cpu_result_mem > 0) {
400  throw QueryNeedsTooMuchCpuResultMem(
402  request_info.min_cpu_result_mem);
403  }
404 
405  const auto& chunk_request_info = request_info.chunk_request_info;
406 
407  const size_t max_pinned_buffer_pool_grant_for_memory_level =
409  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
412 
413  if (chunk_request_info.total_bytes > max_pinned_buffer_pool_grant_for_memory_level) {
414  if (!chunk_request_info.bytes_scales_per_kernel) {
415  throw QueryNeedsTooMuchBufferPoolMem(max_pinned_buffer_pool_grant_for_memory_level,
416  chunk_request_info.total_bytes,
417  chunk_request_info.device_memory_pool_type);
418  }
419  // If here we have bytes_needed_scales_per_kernel
420  // For now, this can only be for a CPU request, but that may be relaxed down the
421  // road
422  const size_t max_pageable_buffer_pool_grant_for_memory_level =
424  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
427  CHECK(chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU);
428  const auto max_chunk_memory_and_cpu_slots_grant =
430  chunk_request_info.total_bytes, // requested_dependent_resource_quantity
431  chunk_request_info
432  .max_bytes_per_kernel, // min_requested_dependent_resource_quantity
433  max_pageable_buffer_pool_grant_for_memory_level, // max_grantable_dependent_resource_quantity
434  request_info.min_cpu_slots, // min_requested_independent_resource_quantity
435  max_resource_grant.cpu_slots, // max_grantable_indepndent_resource_quantity
436  chunk_request_info
437  .max_bytes_per_kernel); // dependent_to_independent_resource_ratio
438 
439  CHECK_LE(max_chunk_memory_and_cpu_slots_grant.second, max_resource_grant.cpu_slots);
440  if (max_chunk_memory_and_cpu_slots_grant.first == size_t(0)) {
441  // Make sure cpu_slots is 0 as well
442  CHECK_EQ(max_chunk_memory_and_cpu_slots_grant.second, size_t(0));
443  // Get what min grant would have been if it was grantable so that we can present a
444  // meaningful error message
445  const auto adjusted_min_chunk_memory_and_cpu_slots_grant =
447  chunk_request_info
448  .max_bytes_per_kernel, // min_requested_dependent_resource_quantity
449  request_info.min_cpu_slots, // min_requested_independent_resource_quantity
450  chunk_request_info
451  .max_bytes_per_kernel); // dependent_to_independent_resource_ratio
452  // Ensure we would not have been able to satisfy this grant
453  CHECK_GT(adjusted_min_chunk_memory_and_cpu_slots_grant.first,
454  max_pageable_buffer_pool_grant_for_memory_level);
455  // The logic for calc_min_dependent_resource_grant_for_request is constrained to
456  // at least return at least the min dependent resource quantity requested, here
457  // CPU slots
458  CHECK_GE(adjusted_min_chunk_memory_and_cpu_slots_grant.second,
459  request_info.min_cpu_slots);
460 
461  // May need additional error message as we could fail even though bytes per kernel
462  // < total buffer pool bytes, if cpu slots < min requested cpu slots
463  throw QueryNeedsTooMuchBufferPoolMem(
464  max_pageable_buffer_pool_grant_for_memory_level,
465  adjusted_min_chunk_memory_and_cpu_slots_grant
466  .first, // min chunk memory grant (without chunk grant constraints)
467  chunk_request_info.device_memory_pool_type);
468  }
469  // If here query is allowed but cpu slots are gated to gate number of chunks
470  // simultaneously pinned We should have been gated to a minimum of our request's
471  // min_cpu_slots
472  CHECK_GE(max_chunk_memory_and_cpu_slots_grant.second, request_info.min_cpu_slots);
473  max_resource_grant.cpu_slots = max_chunk_memory_and_cpu_slots_grant.second;
474  max_resource_grant.buffer_mem_gated_per_slot = true;
475  min_resource_grant.buffer_mem_gated_per_slot = true;
476  max_resource_grant.buffer_mem_per_slot = chunk_request_info.max_bytes_per_kernel;
477  min_resource_grant.buffer_mem_per_slot = chunk_request_info.max_bytes_per_kernel;
478  max_resource_grant.buffer_mem_for_given_slots =
479  chunk_request_info.max_bytes_per_kernel * max_resource_grant.cpu_slots;
480  min_resource_grant.buffer_mem_for_given_slots =
481  chunk_request_info.max_bytes_per_kernel * request_info.min_cpu_slots;
482  }
483 
484  min_resource_grant.cpu_slots = request_info.min_cpu_slots;
485  min_resource_grant.gpu_slots = request_info.min_gpu_slots;
486  min_resource_grant.cpu_result_mem = request_info.cpu_result_mem;
487 
488  return std::make_pair(min_resource_grant, max_resource_grant);
489 }
#define CHECK_EQ(x, y)
Definition: Logger.h:301
#define CHECK_GE(x, y)
Definition: Logger.h:306
size_t calc_max_resource_grant_for_request(const size_t requested_resource_quantity, const size_t min_requested_resource_quantity, const size_t max_grantable_resource_quantity) const
#define CHECK_GT(x, y)
Definition: Logger.h:305
std::pair< size_t, size_t > calc_max_dependent_resource_grant_for_request(const size_t requested_dependent_resource_quantity, const size_t min_requested_dependent_resource_quantity, const size_t max_grantable_dependent_resource_quantity, const size_t min_requested_independent_resource_quantity, const size_t max_grantable_independent_resource_quantity, const size_t dependent_to_independent_resource_ratio) const
#define CHECK_LE(x, y)
Definition: Logger.h:304
std::pair< size_t, size_t > calc_min_dependent_resource_grant_for_request(const size_t min_requested_dependent_resource_quantity, const size_t min_requested_independent_resource_quantity, const size_t dependent_to_independent_resource_ratio) const
#define CHECK(condition)
Definition: Logger.h:291
size_t get_max_resource_grant_per_request(const ResourceSubtype resource_subtype) const


std::vector< ResourceRequestGrant > ExecutorResourceMgr_Namespace::ExecutorResourcePool::calc_static_resource_grant_ranges_for_request ( const std::vector< ResourceRequest > &  resource_requests) const

Definition at line 337 of file ExecutorResourcePool.cpp.

References calc_max_resource_grant_for_request(), CHECK, CHECK_EQ, CHECK_LE, get_max_resource_grant_per_request(), ExecutorResourceMgr_Namespace::INVALID_SUBTYPE, ExecutorResourceMgr_Namespace::ResourceRequest::max_quantity, ExecutorResourceMgr_Namespace::ResourceRequest::resource_subtype, and throw_insufficient_resource_error().

338  {
339  std::vector<ResourceRequestGrant> resource_request_grants;
340 
341  std::array<ResourceRequestGrant, ResourceSubtypeSize> all_resource_grants;
342  for (const auto& resource_request : resource_requests) {
343  CHECK(resource_request.resource_subtype != ResourceSubtype::INVALID_SUBTYPE);
344  CHECK_LE(resource_request.min_quantity, resource_request.max_quantity);
345 
346  ResourceRequestGrant resource_grant;
347  resource_grant.resource_subtype = resource_request.resource_subtype;
348  resource_grant.max_quantity = calc_max_resource_grant_for_request(
349  resource_request.max_quantity,
350  resource_request.min_quantity,
351  get_max_resource_grant_per_request(resource_request.resource_subtype));
352  if (resource_grant.max_quantity < resource_request.min_quantity) {
353  // Current implementation should always return 0 if it cannot grant requested amount
354  CHECK_EQ(resource_grant.max_quantity, size_t(0));
355  throw_insufficient_resource_error(resource_request.resource_subtype,
356  resource_request.min_quantity);
357  }
358  all_resource_grants[static_cast<size_t>(resource_grant.resource_subtype)] =
359  resource_grant;
360  }
361  return resource_request_grants;
362 }
#define CHECK_EQ(x, y)
Definition: Logger.h:301
size_t calc_max_resource_grant_for_request(const size_t requested_resource_quantity, const size_t min_requested_resource_quantity, const size_t max_grantable_resource_quantity) const
#define CHECK_LE(x, y)
Definition: Logger.h:304
#define CHECK(condition)
Definition: Logger.h:291
size_t get_max_resource_grant_per_request(const ResourceSubtype resource_subtype) const
ResourceRequest ResourceRequestGrant
Alias of ResourceRequest to ResourceRequestGrant to better semantically differentiate between resourc...
void throw_insufficient_resource_error(const ResourceSubtype resource_subtype, const size_t min_resource_requested) const


bool ExecutorResourceMgr_Namespace::ExecutorResourcePool::can_currently_satisfy_chunk_request ( const ResourceGrant &  min_resource_grant,
const ChunkRequestInfo &  chunk_request_info 
) const
private

Definition at line 676 of file ExecutorResourcePool.cpp.

References ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_gated_per_slot, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_per_slot, CHECK, CHECK_GT, CHECK_LE, CPU, ExecutorResourceMgr_Namespace::CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_slots, ExecutorResourceMgr_Namespace::debug_print(), ExecutorResourceMgr_Namespace::ChunkRequestInfo::device_memory_pool_type, ExecutorResourceMgr_Namespace::ENABLE_DEBUG_PRINTING, format_num_bytes(), get_chunk_bytes_not_in_pool(), get_total_allocated_buffer_pool_mem_for_level(), get_total_resource(), ExecutorResourceMgr_Namespace::GPU_BUFFER_POOL_MEM, and ExecutorResourceMgr_Namespace::ChunkRequestInfo::total_bytes.

Referenced by can_currently_satisfy_request_impl().

678  {
679  // Expects lock on resource_mutex_ already taken
680 
681  const size_t total_buffer_mem_for_memory_level =
682  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
684  : get_total_resource(ResourceType::GPU_BUFFER_POOL_MEM);
685  const size_t allocated_buffer_mem_for_memory_level =
687  chunk_request_info.device_memory_pool_type);
688 
689  if (min_resource_grant.buffer_mem_gated_per_slot) {
690  CHECK_GT(min_resource_grant.buffer_mem_per_slot, size_t(0));
691  // We only allow scaling back slots to cap buffer pool memory required on CPU
692  // currently
693  CHECK(chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU);
694  const size_t min_buffer_pool_mem_required =
695  min_resource_grant.cpu_slots * min_resource_grant.buffer_mem_per_slot;
696  // Below is a sanity check... we'll never be able to run the query if minimum pool
697  // memory required is not <= the total buffer pool memory
698  CHECK_LE(min_buffer_pool_mem_required, total_buffer_mem_for_memory_level);
699  return allocated_buffer_mem_for_memory_level + min_buffer_pool_mem_required <=
700  total_buffer_mem_for_memory_level;
701  }
702 
703  // CHECK and not exception as parent should have checked this, can re-evaluate whether
704  // should be exception
705  CHECK_LE(chunk_request_info.total_bytes, total_buffer_mem_for_memory_level);
706  const size_t chunk_bytes_not_in_pool = get_chunk_bytes_not_in_pool(chunk_request_info);
707  if (ENABLE_DEBUG_PRINTING) {
708  debug_print("Chunk bytes not in pool: ", format_num_bytes(chunk_bytes_not_in_pool));
709  }
710  return chunk_bytes_not_in_pool + allocated_buffer_mem_for_memory_level <=
711  total_buffer_mem_for_memory_level;
712 }
size_t get_total_resource(const ResourceType resource_type) const
size_t get_chunk_bytes_not_in_pool(const ChunkRequestInfo &chunk_request_info) const
ResourceType
Stores the resource type for a ExecutorResourcePool request.
#define CHECK_GT(x, y)
Definition: Logger.h:305
size_t get_total_allocated_buffer_pool_mem_for_level(const ExecutorDeviceType memory_pool_type) const
std::string format_num_bytes(const size_t bytes)
#define CHECK_LE(x, y)
Definition: Logger.h:304
#define CHECK(condition)
Definition: Logger.h:291
void debug_print(Ts &&...print_args)


bool ExecutorResourceMgr_Namespace::ExecutorResourcePool::can_currently_satisfy_request ( const ResourceGrant &  min_resource_grant,
const ChunkRequestInfo &  chunk_request_info 
) const

Definition at line 910 of file ExecutorResourcePool.cpp.

References can_currently_satisfy_request_impl(), and resource_mutex_.

912  {
913  std::shared_lock<std::shared_mutex> resource_read_lock(resource_mutex_);
914  return can_currently_satisfy_request_impl(min_resource_grant, chunk_request_info);
915 }
bool can_currently_satisfy_request_impl(const ResourceGrant &min_resource_grant, const ChunkRequestInfo &chunk_request_info) const


bool ExecutorResourceMgr_Namespace::ExecutorResourcePool::can_currently_satisfy_request_impl ( const ResourceGrant &  min_resource_grant,
const ChunkRequestInfo &  chunk_request_info 
) const
private

Definition at line 555 of file ExecutorResourcePool.cpp.

References can_currently_satisfy_chunk_request(), check_request_against_global_policy(), check_request_against_policy(), ExecutorResourceMgr_Namespace::CPU_RESULT_MEM, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_result_mem, ExecutorResourceMgr_Namespace::CPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_slots, get_allocated_resource_of_type(), get_concurrent_resource_grant_policy(), get_max_resource_grant_per_request(), get_total_resource(), ExecutorResourceMgr_Namespace::GPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::gpu_slots, and outstanding_num_requests_.

Referenced by allocate_resources(), can_currently_satisfy_request(), and determine_dynamic_resource_grant().

557  {
558  // Currently expects to be protected by mutex from ExecutorResourceMgr
559 
560  // Arguably exceptions below shouldn't happen as resource_grant,
561  // if generated by ExecutorResourcePool per design, should be within
562  // per query max limits. But since this is an external class api call and
563  // the input could be anything provided by the caller, and we may want
564  // to allow for dynamic per query limits, throwing instead of CHECKing
565  // for now, but may re-evaluate.
566 
567  if (min_resource_grant.cpu_slots >
569  throw QueryNeedsTooManyCpuSlots(
571  min_resource_grant.cpu_slots);
572  }
573  if (min_resource_grant.gpu_slots >
575  throw QueryNeedsTooManyGpuSlots(
577  min_resource_grant.gpu_slots);
578  }
579  if (min_resource_grant.cpu_result_mem >
581  throw QueryNeedsTooMuchCpuResultMem(
583  min_resource_grant.cpu_result_mem);
584  }
585 
586  // First check if request is in violation of any global
587  // ALLOW_SINGLE_GLOBAL_REQUEST policies
588 
593  return false;
594  }
599  return false;
600  }
605  return false;
606  }
607 
608  const bool can_satisfy_cpu_slots_request = check_request_against_policy(
609  min_resource_grant.cpu_slots,
614 
615  const bool can_satisfy_gpu_slots_request = check_request_against_policy(
616  min_resource_grant.gpu_slots,
621 
622  const bool can_satisfy_cpu_result_mem_request = check_request_against_policy(
623  min_resource_grant.cpu_result_mem,
628 
629  // Short circuit before heavier chunk check operation
630  if (!(can_satisfy_cpu_slots_request && can_satisfy_gpu_slots_request &&
631  can_satisfy_cpu_result_mem_request)) {
632  return false;
633  }
634 
635  return can_currently_satisfy_chunk_request(min_resource_grant, chunk_request_info);
636 }
size_t get_total_resource(const ResourceType resource_type) const
bool can_currently_satisfy_chunk_request(const ResourceGrant &min_resource_grant, const ChunkRequestInfo &chunk_request_info) const
bool check_request_against_global_policy(const size_t resource_total, const size_t resource_allocated, const ConcurrentResourceGrantPolicy &concurrent_resource_grant_policy) const
size_t get_max_resource_grant_per_request(const ResourceSubtype resource_subtype) const
bool check_request_against_policy(const size_t resource_request, const size_t resource_total, const size_t resource_allocated, const size_t global_outstanding_requests, const ConcurrentResourceGrantPolicy &concurrent_resource_grant_policy) const
size_t get_allocated_resource_of_type(const ResourceType resource_type) const
ConcurrentResourceGrantPolicy get_concurrent_resource_grant_policy(const ResourceType resource_type) const


bool ExecutorResourceMgr_Namespace::ExecutorResourcePool::check_request_against_global_policy ( const size_t  resource_total,
const size_t  resource_allocated,
const ConcurrentResourceGrantPolicy &  concurrent_resource_grant_policy 
) const
private

Definition at line 491 of file ExecutorResourcePool.cpp.

References ExecutorResourceMgr_Namespace::ALLOW_SINGLE_REQUEST_GLOBALLY, ExecutorResourceMgr_Namespace::ConcurrentResourceGrantPolicy::concurrency_policy, and ExecutorResourceMgr_Namespace::ConcurrentResourceGrantPolicy::oversubscription_concurrency_policy.

Referenced by can_currently_satisfy_request_impl().

494  {
495  if (concurrent_resource_grant_policy.concurrency_policy ==
497  resource_allocated > 0) {
498  return false;
499  }
500  if (concurrent_resource_grant_policy.oversubscription_concurrency_policy ==
502  resource_allocated > resource_total) {
503  return false;
504  }
505  return true;
506 }


bool ExecutorResourceMgr_Namespace::ExecutorResourcePool::check_request_against_policy ( const size_t  resource_request,
const size_t  resource_total,
const size_t  resource_allocated,
const size_t  global_outstanding_requests,
const ConcurrentResourceGrantPolicy &  concurrent_resource_grant_policy 
) const
private

Definition at line 508 of file ExecutorResourcePool.cpp.

References ExecutorResourceMgr_Namespace::ALLOW_CONCURRENT_REQUESTS, ExecutorResourceMgr_Namespace::ALLOW_SINGLE_REQUEST, ExecutorResourceMgr_Namespace::ALLOW_SINGLE_REQUEST_GLOBALLY, ExecutorResourceMgr_Namespace::ConcurrentResourceGrantPolicy::concurrency_policy, ExecutorResourceMgr_Namespace::DISALLOW_REQUESTS, ExecutorResourceMgr_Namespace::ConcurrentResourceGrantPolicy::oversubscription_concurrency_policy, and UNREACHABLE.

Referenced by can_currently_satisfy_request_impl().

513  {
514  auto test_request_against_policy =
515  [min_resource_request, resource_allocated, global_outstanding_requests](
516  const ResourceConcurrencyPolicy& resource_concurrency_policy) {
517  switch (resource_concurrency_policy) {
519  // DISALLOW_REQUESTS for undersubscription policy doesn't make much sense as
520  // a resource pool-wide policy (unless we are using it as a sanity check for
521  // something like CPU mode), but planning to implement per-query or priority
522  // level policies so will leave for now
523  return min_resource_request == 0;
524  }
526  // redundant with check_request_against_global_policy,
527  // so considered CHECKing instead that the following cannot
528  // be true, but didn't want to couple the two functions
529  return global_outstanding_requests == 0;
530  }
532  return min_resource_request == 0 || resource_allocated == 0;
533  }
535  return true;
536  }
537  default:
538  UNREACHABLE();
539  }
540  return false;
541  };
542 
543  if (!test_request_against_policy(concurrent_resource_grant_policy.concurrency_policy)) {
544  return false;
545  }
546  if (min_resource_request + resource_allocated <= resource_total) {
547  return true;
548  }
549  return test_request_against_policy(
550  concurrent_resource_grant_policy.oversubscription_concurrency_policy);
551 }
ResourceConcurrencyPolicy
Specifies whether grants for a specified resource can be made concurrently (ALLOW_CONCURRENT_REQUESTS...
#define UNREACHABLE()
Definition: Logger.h:338


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::deallocate_resources ( const ResourceGrant &  resource_grant,
const ChunkRequestInfo &  chunk_request_info 
)

Deallocates resources granted to a requestor such that they can be used for other requests.

Parameters
resource_grant- Resources granted to the request that should be deallocated
chunk_request_info- The DataMgr chunk keys (and other associated info) granted to this query that should be deallocated.
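As noted in the detailed description, deallocation is normally driven by the destructor of the resource handle given to the executing thread. A hypothetical guard illustrating that pattern (this is not the actual handle class used by ExecutorResourceMgr, and it assumes ResourceGrant and ChunkRequestInfo are copyable):

#include <ExecutorResourcePool.h>

// Hypothetical RAII guard: returns granted resources to the pool even if the
// query throws. The real resource handle lives in ExecutorResourceMgr.
class ResourceGrantGuard {
 public:
  ResourceGrantGuard(ExecutorResourceMgr_Namespace::ExecutorResourcePool& pool,
                     const ExecutorResourceMgr_Namespace::ResourceGrant& grant,
                     const ExecutorResourceMgr_Namespace::ChunkRequestInfo& chunks)
      : pool_(pool), grant_(grant), chunks_(chunks) {
    pool_.allocate_resources(grant_, chunks_);
  }
  ~ResourceGrantGuard() { pool_.deallocate_resources(grant_, chunks_); }

  ResourceGrantGuard(const ResourceGrantGuard&) = delete;
  ResourceGrantGuard& operator=(const ResourceGrantGuard&) = delete;

 private:
  ExecutorResourceMgr_Namespace::ExecutorResourcePool& pool_;
  ExecutorResourceMgr_Namespace::ResourceGrant grant_;
  ExecutorResourceMgr_Namespace::ChunkRequestInfo chunks_;
};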

Definition at line 1087 of file ExecutorResourcePool.cpp.

References allocated_resources_, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_gated_per_slot, CHECK_LE, CPU, ExecutorResourceMgr_Namespace::CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::CPU_RESULT_MEM, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_result_mem, ExecutorResourceMgr_Namespace::CPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_slots, decrement_outstanding_per_resource_num_requests(), ExecutorResourceMgr_Namespace::ChunkRequestInfo::device_memory_pool_type, logger::EXECUTOR, format_num_bytes(), get_allocated_resource_of_type(), get_outstanding_per_resource_num_requests(), get_total_resource(), GPU, ExecutorResourceMgr_Namespace::GPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::GPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::gpu_slots, LOG, ExecutorResourceMgr_Namespace::ChunkRequestInfo::num_chunks, outstanding_num_requests_, remove_chunk_requests_from_allocated_pool(), resource_mutex_, sanity_check_pool_state_on_deallocations_, sanity_check_requests_against_allocations(), and ExecutorResourceMgr_Namespace::ChunkRequestInfo::total_bytes.

Referenced by ExecutorResourceMgr_Namespace::ExecutorResourceMgr::release_resources().

1089  {
1090  std::unique_lock<std::shared_mutex> resource_write_lock(resource_mutex_);
1091 
1092  // Caller (ExecutorResourceMgr) should never request resource allocation for a request
1093  // it knows cannot be granted, however use below as a sanity check
1094 
1095  CHECK_LE(resource_grant.cpu_slots,
1097  CHECK_LE(resource_grant.gpu_slots,
1099  CHECK_LE(resource_grant.cpu_result_mem,
1101 
1102  allocated_resources_[static_cast<size_t>(ResourceSubtype::CPU_SLOTS)] -=
1103  resource_grant.cpu_slots;
1104  allocated_resources_[static_cast<size_t>(ResourceSubtype::GPU_SLOTS)] -=
1105  resource_grant.gpu_slots;
1106  allocated_resources_[static_cast<size_t>(ResourceSubtype::CPU_RESULT_MEM)] -=
1107  resource_grant.cpu_result_mem;
1108 
1110  if (resource_grant.cpu_slots > 0) {
1112  }
1113  if (resource_grant.gpu_slots > 0) {
1115  }
1116  if (resource_grant.cpu_result_mem > 0) {
1118  }
1119  if (chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU) {
1120  if (resource_grant.buffer_mem_gated_per_slot ||
1121  (chunk_request_info.num_chunks > 0 && chunk_request_info.total_bytes > 0)) {
1123  }
1124  } else if (chunk_request_info.device_memory_pool_type == ExecutorDeviceType::GPU) {
1125  if (resource_grant.buffer_mem_gated_per_slot ||
1126  (chunk_request_info.num_chunks > 0 && chunk_request_info.total_bytes > 0)) {
1128  }
1129  }
1130 
1131  LOG(EXECUTOR) << "ExecutorResourcePool de-allocation: " << outstanding_num_requests_
1132  << " requests ("
1134  << " CPU | "
1136  << " GPU)";
1137  LOG(EXECUTOR) << "ExecutorResourcePool state: CPU slots: "
1139  << get_total_resource(ResourceType::CPU_SLOTS) << " | GPU slots: "
1141  << get_total_resource(ResourceType::GPU_SLOTS) << " CPU result mem: "
1142  << format_num_bytes(
1144  << " of "
1146  remove_chunk_requests_from_allocated_pool(resource_grant, chunk_request_info);
1147 
1150  }
1151 }
size_t get_total_resource(const ResourceType resource_type) const
std::array< size_t, ResourceSubtypeSize > allocated_resources_
#define LOG(tag)
Definition: Logger.h:285
std::string format_num_bytes(const size_t bytes)
#define CHECK_LE(x, y)
Definition: Logger.h:304
void remove_chunk_requests_from_allocated_pool(const ResourceGrant &resource_grant, const ChunkRequestInfo &chunk_request_info)
size_t get_outstanding_per_resource_num_requests(const ResourceType resource_type) const
size_t decrement_outstanding_per_resource_num_requests(const ResourceType resource_type)
size_t get_allocated_resource_of_type(const ResourceType resource_type) const


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::decrement_outstanding_per_resource_num_requests ( const ResourceType  resource_type)
inlineprivate

Definition at line 519 of file ExecutorResourcePool.h.

References outstanding_per_resource_num_requests_.

Referenced by deallocate_resources().

520  {
521  return --outstanding_per_resource_num_requests_[static_cast<size_t>(resource_type)];
522  }
std::array< size_t, ResourceTypeSize > outstanding_per_resource_num_requests_


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::decrement_total_per_resource_num_requests ( const ResourceType  resource_type)
inline private

Definition at line 504 of file ExecutorResourcePool.h.

References total_per_resource_num_requests_.

505  {
506  return --total_per_resource_num_requests_[static_cast<size_t>(resource_type)];
507  }
std::array< size_t, ResourceTypeSize > total_per_resource_num_requests_
std::pair< bool, ResourceGrant > ExecutorResourceMgr_Namespace::ExecutorResourcePool::determine_dynamic_resource_grant ( const ResourceGrant min_resource_grant,
const ResourceGrant max_resource_grant,
const ChunkRequestInfo chunk_request_info,
const double  max_request_backoff_ratio 
) const

Determines the actual resource grant to give a query (which will be somewhere between the provided min_resource_grant and max_resource_grant, unless it is determined that the request cannot be currently satisfied).

Generally the granted amount of each resource type is computed independently, but if buffer_mem_gated_per_slot is set on min_resource_grant, other resources, such as the number of CPU threads granted, may be scaled back to match the amount of buffer pool memory available.

Parameters
min_resource_grant- The min resource grant allowable for this request, determined in calc_min_max_resource_grants_for_request
max_resource_grant- The max resource grant possible for this request, determined in calc_min_max_resource_grants_for_request
chunk_request_info- The DataMgr chunks with associated sizes needed for this query
max_request_backoff_ratio- The maximum fraction (from 0 to 1) of each resource remaining in the pool that will be granted to this request, even if more is available to satisfy max_resource_grant (so that some resources remain available for other queries).
Returns
std::pair<bool, ResourceGrant> - the first boolean member of the pair specifies whether the request can currently be satisfied given current resources in the pool, the second is the actual resource grant a requestor will receive.
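To make the grant lifecycle concrete, below is a minimal sketch (not code from the repository) of how a caller such as ExecutorResourceMgr could combine this method with calc_min_max_resource_grants_for_request, allocate_resources, and deallocate_resources. The helper name run_request and the 0.5 backoff ratio are illustrative assumptions.

#include <ExecutorResourcePool.h>

using namespace ExecutorResourceMgr_Namespace;

void run_request(ExecutorResourcePool& pool,
                 const RequestInfo& request_info,
                 const ChunkRequestInfo& chunk_request_info) {
  // Static bounds for this request, per the per-request grant policies
  const auto [min_grant, max_grant] =
      pool.calc_min_max_resource_grants_for_request(request_info);

  // Dynamic grant against current pool state; 0.5 leaves headroom for other queries
  const auto [can_run_now, grant] = pool.determine_dynamic_resource_grant(
      min_grant, max_grant, chunk_request_info, 0.5);
  if (!can_run_now) {
    return;  // a real caller would queue the request and retry later
  }

  pool.allocate_resources(grant, chunk_request_info);
  // ... execute the query using the granted cpu_slots / gpu_slots / cpu_result_mem ...
  pool.deallocate_resources(grant, chunk_request_info);
}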

Definition at line 936 of file ExecutorResourcePool.cpp.

References ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_for_given_slots, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_gated_per_slot, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_per_slot, calc_max_dependent_resource_grant_for_request(), can_currently_satisfy_request_impl(), CHECK, CHECK_EQ, CHECK_GE, CHECK_LE, CPU, ExecutorResourceMgr_Namespace::CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::CPU_RESULT_MEM, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_result_mem, ExecutorResourceMgr_Namespace::CPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::cpu_slots, determine_dynamic_single_resource_grant(), ExecutorResourceMgr_Namespace::ChunkRequestInfo::device_memory_pool_type, get_allocated_resource_of_type(), get_total_resource(), ExecutorResourceMgr_Namespace::GPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::GPU_SLOTS, ExecutorResourceMgr_Namespace::ResourceGrant::gpu_slots, ExecutorResourceMgr_Namespace::ChunkRequestInfo::max_bytes_per_kernel, resource_mutex_, and ExecutorResourceMgr_Namespace::ChunkRequestInfo::total_bytes.

Referenced by ExecutorResourceMgr_Namespace::ExecutorResourceMgr::choose_next_request().

940  {
941  std::unique_lock<std::shared_mutex> resource_write_lock(resource_mutex_);
942  CHECK_LE(max_request_backoff_ratio, 1.0);
943  const bool can_satisfy_request =
944  can_currently_satisfy_request_impl(min_resource_grant, chunk_request_info);
945  ResourceGrant actual_resource_grant;
946  if (!can_satisfy_request) {
947  return std::make_pair(false, actual_resource_grant);
948  }
949  actual_resource_grant.cpu_slots = determine_dynamic_single_resource_grant(
950  min_resource_grant.cpu_slots,
951  max_resource_grant.cpu_slots,
952  get_allocated_resource_of_type(ResourceType::CPU_SLOTS),
953  get_total_resource(ResourceType::CPU_SLOTS),
954  max_request_backoff_ratio);
955  actual_resource_grant.gpu_slots = determine_dynamic_single_resource_grant(
956  min_resource_grant.gpu_slots,
957  max_resource_grant.gpu_slots,
958  get_allocated_resource_of_type(ResourceType::GPU_SLOTS),
959  get_total_resource(ResourceType::GPU_SLOTS),
960  max_request_backoff_ratio);
961  // Todo (todd): Modulate number of CPU threads launched to ensure that
962  // query can fit in currently available CPU result memory
963  actual_resource_grant.cpu_result_mem = determine_dynamic_single_resource_grant(
964  min_resource_grant.cpu_result_mem,
965  max_resource_grant.cpu_result_mem,
966  get_allocated_resource_of_type(ResourceType::CPU_RESULT_MEM),
967  get_total_resource(ResourceType::CPU_RESULT_MEM),
968  max_request_backoff_ratio);
969  if (min_resource_grant.buffer_mem_gated_per_slot) {
970  CHECK(chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU);
971  // Below is quite redundant, but can revisit
972  CHECK_EQ(chunk_request_info.max_bytes_per_kernel,
973  min_resource_grant.buffer_mem_per_slot);
974  CHECK_EQ(chunk_request_info.max_bytes_per_kernel,
975  max_resource_grant.buffer_mem_per_slot);
976 
977  const size_t allocated_buffer_mem_for_memory_level =
978  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
979  ? get_allocated_resource_of_type(ResourceType::CPU_BUFFER_POOL_MEM)
980  : get_allocated_resource_of_type(ResourceType::GPU_BUFFER_POOL_MEM);
981  const size_t total_buffer_mem_for_memory_level =
982  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
983  ? get_total_resource(ResourceType::CPU_BUFFER_POOL_MEM)
984  : get_total_resource(ResourceType::GPU_BUFFER_POOL_MEM);
985 
986  CHECK_LE(allocated_buffer_mem_for_memory_level, total_buffer_mem_for_memory_level);
987 
988  const size_t remaining_buffer_mem_for_memory_level =
989  total_buffer_mem_for_memory_level - allocated_buffer_mem_for_memory_level;
990 
991  CHECK_LE(min_resource_grant.buffer_mem_for_given_slots,
992  remaining_buffer_mem_for_memory_level);
993  const size_t max_grantable_mem =
994  std::min(remaining_buffer_mem_for_memory_level,
995  max_resource_grant.buffer_mem_for_given_slots);
996  const auto granted_buffer_mem_and_cpu_slots =
997  calc_max_dependent_resource_grant_for_request(
998  chunk_request_info.total_bytes, // requested_dependent_resource_quantity
999  min_resource_grant
1000  .buffer_mem_for_given_slots, // min_requested_dependent_resource_quantity
1001  max_grantable_mem, // max_grantable_dependent_resource_quantity
1002  min_resource_grant.cpu_slots, // min_requested_independent_resource_quantity
1003  max_resource_grant.cpu_slots, // max_grantable_independent_resource_quantity
1004  chunk_request_info
1005  .max_bytes_per_kernel); // dependent_to_independent_resource_ratio
1006  const size_t granted_buffer_mem = granted_buffer_mem_and_cpu_slots.first;
1007  const size_t granted_cpu_slots = granted_buffer_mem_and_cpu_slots.second;
1008  CHECK_EQ(granted_buffer_mem,
1009  granted_cpu_slots * chunk_request_info.max_bytes_per_kernel);
1010  CHECK_GE(granted_cpu_slots, min_resource_grant.cpu_slots);
1011  CHECK_LE(granted_cpu_slots, max_resource_grant.cpu_slots);
1012  actual_resource_grant.buffer_mem_gated_per_slot = true;
1013  actual_resource_grant.buffer_mem_per_slot = chunk_request_info.max_bytes_per_kernel;
1014  actual_resource_grant.buffer_mem_for_given_slots = granted_buffer_mem;
1015  actual_resource_grant.cpu_slots =
1016  granted_cpu_slots; // Override cpu slots with restricted dependent resource
1017  // calc
1018  }
1019  return std::make_pair(true, actual_resource_grant);
1020 }
size_t get_total_resource(const ResourceType resource_type) const
#define CHECK_EQ(x, y)
Definition: Logger.h:301
#define CHECK_GE(x, y)
Definition: Logger.h:306
ResourceType
Stores the resource type for a ExecutorResourcePool request.
bool can_currently_satisfy_request_impl(const ResourceGrant &min_resource_grant, const ChunkRequestInfo &chunk_request_info) const
size_t determine_dynamic_single_resource_grant(const size_t min_resource_requested, const size_t max_resource_requested, const size_t resource_allocated, const size_t total_resource, const double max_request_backoff_ratio) const
std::pair< size_t, size_t > calc_max_dependent_resource_grant_for_request(const size_t requested_dependent_resource_quantity, const size_t min_requested_dependent_resource_quantity, const size_t max_grantable_dependent_resource_quantity, const size_t min_requested_independent_resource_quantity, const size_t max_grantable_independent_resource_quantity, const size_t dependent_to_independent_resource_ratio) const
#define CHECK_LE(x, y)
Definition: Logger.h:304
#define CHECK(condition)
Definition: Logger.h:291
size_t get_allocated_resource_of_type(const ResourceType resource_type) const
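As a worked example of the buffer-gated path, with illustrative numbers (not from the source): suppose a gated CPU request has chunk_request_info.total_bytes of 8 GB, max_bytes_per_kernel of 1 GB, a minimum of 2 and a maximum of 16 cpu_slots, but only 5 GB of CPU buffer pool memory remains grantable. The dependent-resource calculation would then cap the grant at 5 cpu_slots and 5 GB of buffer memory (keeping granted_buffer_mem equal to granted_cpu_slots * max_bytes_per_kernel), overriding the independently computed cpu_slots grant.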


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::determine_dynamic_single_resource_grant ( const size_t  min_resource_requested,
const size_t  max_resource_requested,
const size_t  resource_allocated,
const size_t  total_resource,
const double  max_request_backoff_ratio 
) const
private

Definition at line 917 of file ExecutorResourcePool.cpp.

References CHECK_LE.

Referenced by determine_dynamic_resource_grant().

922  {
923  CHECK_LE(min_resource_requested, max_resource_requested);
924  if (min_resource_requested + resource_allocated >= total_resource) {
925  return min_resource_requested;
926  }
927  // The below is safe in unsigned math as we know that resource_allocated <
928  // total_resource from the above conditional
929  const size_t resource_remaining = total_resource - resource_allocated;
930  return std::max(min_resource_requested,
931  std::min(max_resource_requested,
932  static_cast<size_t>(
933  round(max_request_backoff_ratio * resource_remaining))));
934 }
#define CHECK_LE(x, y)
Definition: Logger.h:304
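As a worked example (illustrative numbers): with total_resource = 32 CPU slots, resource_allocated = 8, min_resource_requested = 4, max_resource_requested = 16, and max_request_backoff_ratio = 0.5, the remaining pool is 32 - 8 = 24 slots and the grant is max(4, min(16, round(0.5 * 24))) = 12 slots. If instead 29 slots were already allocated, min_resource_requested + resource_allocated would reach the total and the function would simply return the minimum of 4 (callers are expected to have already verified that the minimum can currently be satisfied).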


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_allocated_resource_of_subtype ( const ResourceSubtype  resource_subtype) const
inline private

Definition at line 482 of file ExecutorResourcePool.h.

References allocated_resources_.

Referenced by add_chunk_requests_to_allocated_pool(), get_allocated_resource_of_type(), get_resource_info(), and remove_chunk_requests_from_allocated_pool().

483  {
484  return allocated_resources_[static_cast<size_t>(resource_subtype)];
485  }
std::array< size_t, ResourceSubtypeSize > allocated_resources_


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_allocated_resource_of_type ( const ResourceType  resource_type) const
private

Definition at line 169 of file ExecutorResourcePool.cpp.

References get_allocated_resource_of_subtype(), and ExecutorResourceMgr_Namespace::map_resource_type_to_resource_subtypes().

Referenced by allocate_resources(), can_currently_satisfy_request_impl(), deallocate_resources(), determine_dynamic_resource_grant(), get_resource_info(), get_total_allocated_buffer_pool_mem_for_level(), remove_chunk_requests_from_allocated_pool(), and sanity_check_requests_against_allocations().

170  {
171  const auto resource_subtypes = map_resource_type_to_resource_subtypes(resource_type);
172  size_t resource_type_allocation_sum{0};
173  for (const auto& resource_subtype : resource_subtypes) {
174  resource_type_allocation_sum += get_allocated_resource_of_subtype(resource_subtype);
175  }
176  return resource_type_allocation_sum;
177 }
size_t get_allocated_resource_of_subtype(const ResourceSubtype resource_subtype) const
std::vector< ResourceSubtype > map_resource_type_to_resource_subtypes(const ResourceType resource_type)
Returns the 1-or-more ResourceSubtypes associated with a given ResourceType.
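For example, assuming the buffer pool subtypes listed elsewhere on this page, a query for ResourceType::CPU_BUFFER_POOL_MEM would sum the allocations recorded under ResourceSubtype::PINNED_CPU_BUFFER_POOL_MEM and ResourceSubtype::PAGEABLE_CPU_BUFFER_POOL_MEM.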


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_chunk_bytes_not_in_pool ( const ChunkRequestInfo chunk_request_info) const
private

Definition at line 658 of file ExecutorResourcePool.cpp.

References allocated_cpu_buffer_pool_chunks_, allocated_gpu_buffer_pool_chunks_, ExecutorResourceMgr_Namespace::ChunkRequestInfo::chunks_with_byte_sizes, CPU, and ExecutorResourceMgr_Namespace::ChunkRequestInfo::device_memory_pool_type.

Referenced by can_currently_satisfy_chunk_request().

659  {
660  const BufferPoolChunkMap& chunk_map_for_memory_level =
661  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
662  ? allocated_cpu_buffer_pool_chunks_
663  : allocated_gpu_buffer_pool_chunks_;
664  size_t chunk_bytes_not_in_pool{0};
665  for (const auto& requested_chunk : chunk_request_info.chunks_with_byte_sizes) {
666  const auto chunk_itr = chunk_map_for_memory_level.find(requested_chunk.first);
667  if (chunk_itr == chunk_map_for_memory_level.end()) {
668  chunk_bytes_not_in_pool += requested_chunk.second;
669  } else if (requested_chunk.second > chunk_itr->second.second) {
670  chunk_bytes_not_in_pool += requested_chunk.second - chunk_itr->second.second;
671  }
672  }
673  return chunk_bytes_not_in_pool;
674 }
std::map< ChunkKey, std::pair< size_t, size_t >> BufferPoolChunkMap
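For instance (hypothetical sizes): if a request needs chunk A at 100 MB and chunk B at 50 MB, and the pool already tracks chunk A with 60 MB resident, the function reports 40 MB + 50 MB = 90 MB of chunk bytes not in the pool.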


ConcurrentResourceGrantPolicy ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_concurrent_resource_grant_policy ( const ResourceType  resource_type) const
inline

Definition at line 369 of file ExecutorResourcePool.h.

References concurrent_resource_grant_policies_.

Referenced by can_currently_satisfy_request_impl(), ExecutorResourceMgr_Namespace::ExecutorResourceMgr::get_concurrent_resource_grant_policy(), init_max_resource_grants_per_requests(), log_parameters(), and ExecutorResourceMgr_Namespace::ExecutorResourceMgr::set_concurrent_resource_grant_policy().

370  {
371  return concurrent_resource_grant_policies_[static_cast<size_t>(resource_type)];
372  }
std::array< ConcurrentResourceGrantPolicy, ResourceTypeSize > concurrent_resource_grant_policies_


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_max_resource_grant_per_request ( const ResourceSubtype  resource_subtype) const
inline private

Definition at line 489 of file ExecutorResourcePool.h.

References max_resource_grants_per_request_.

Referenced by calc_min_max_resource_grants_for_request(), calc_static_resource_grant_ranges_for_request(), can_currently_satisfy_request_impl(), and throw_insufficient_resource_error().

490  {
491  return max_resource_grants_per_request_[static_cast<size_t>(resource_subtype)];
492  }
std::array< size_t, ResourceSubtypeSize > max_resource_grants_per_request_


const ResourceGrantPolicy& ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_max_resource_grant_per_request_policy ( const ResourceSubtype  resource_subtype) const
inline

Definition at line 374 of file ExecutorResourcePool.h.

References max_resource_grants_per_request_policies_.

Referenced by log_parameters().

375  {
376  return max_resource_grants_per_request_policies_[static_cast<size_t>(
377  resource_subtype)];
378  }
std::array< ResourceGrantPolicy, ResourceSubtypeSize > max_resource_grants_per_request_policies_


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_outstanding_per_resource_num_requests ( const ResourceType  resource_type) const
inline private

Definition at line 509 of file ExecutorResourcePool.h.

References outstanding_per_resource_num_requests_.

Referenced by allocate_resources(), deallocate_resources(), get_resource_info(), and sanity_check_requests_against_allocations().

510  {
511  return outstanding_per_resource_num_requests_[static_cast<size_t>(resource_type)];
512  }
std::array< size_t, ResourceTypeSize > outstanding_per_resource_num_requests_


ChunkRequestInfo ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_requested_chunks_not_in_pool ( const ChunkRequestInfo chunk_request_info) const
private

Definition at line 638 of file ExecutorResourcePool.cpp.

References allocated_cpu_buffer_pool_chunks_, allocated_gpu_buffer_pool_chunks_, ExecutorResourceMgr_Namespace::ChunkRequestInfo::chunks_with_byte_sizes, CPU, ExecutorResourceMgr_Namespace::ChunkRequestInfo::device_memory_pool_type, ExecutorResourceMgr_Namespace::ChunkRequestInfo::num_chunks, and ExecutorResourceMgr_Namespace::ChunkRequestInfo::total_bytes.

639  {
640  const BufferPoolChunkMap& chunk_map_for_memory_level =
641  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
642  ? allocated_cpu_buffer_pool_chunks_
643  : allocated_gpu_buffer_pool_chunks_;
644  ChunkRequestInfo missing_chunk_info;
645  missing_chunk_info.device_memory_pool_type = chunk_request_info.device_memory_pool_type;
646  std::vector<std::pair<ChunkKey, size_t>> missing_chunks_with_byte_sizes;
647  for (const auto& requested_chunk : chunk_request_info.chunks_with_byte_sizes) {
648  if (chunk_map_for_memory_level.find(requested_chunk.first) ==
649  chunk_map_for_memory_level.end()) {
650  missing_chunk_info.chunks_with_byte_sizes.emplace_back(requested_chunk);
651  missing_chunk_info.total_bytes += requested_chunk.second;
652  }
653  }
654  missing_chunk_info.num_chunks = missing_chunk_info.chunks_with_byte_sizes.size();
655  return missing_chunk_info;
656 }
std::map< ChunkKey, std::pair< size_t, size_t >> BufferPoolChunkMap
std::pair< size_t, size_t > ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_resource_info ( const ResourceType  resource_type) const

Returns the allocated and total available amount of the resource specified.

Returns
std::pair<size_t, size_t> - The first member is the allocated amount of the resource; the second member is the total amount of the resource (allocated plus available)

Definition at line 179 of file ExecutorResourcePool.cpp.

References get_allocated_resource_of_type(), get_total_resource(), and resource_mutex_.

Referenced by ExecutorResourceMgr_Namespace::ExecutorResourceMgr::get_resource_info().

180  {
181  std::shared_lock<std::shared_mutex> resource_read_lock(resource_mutex_);
182  return std::make_pair(get_allocated_resource_of_type(resource_type),
183  get_total_resource(resource_type));
184 }
size_t get_total_resource(const ResourceType resource_type) const
size_t get_allocated_resource_of_type(const ResourceType resource_type) const
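A minimal usage sketch follows (the helper name print_cpu_slot_usage is an illustrative assumption, not part of the class):

#include <iostream>

#include <ExecutorResourcePool.h>

using ExecutorResourceMgr_Namespace::ExecutorResourcePool;
using ExecutorResourceMgr_Namespace::ResourceType;

void print_cpu_slot_usage(const ExecutorResourcePool& pool) {
  // first = currently allocated, second = total in the pool
  const auto [allocated_cpu_slots, total_cpu_slots] =
      pool.get_resource_info(ResourceType::CPU_SLOTS);
  std::cout << "CPU slots in use: " << allocated_cpu_slots << " of " << total_cpu_slots
            << std::endl;
}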


ResourcePoolInfo ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_resource_info ( ) const

Returns a struct detailing the allocated and total available resources of each type tracked in ExecutorResourcePool.

Returns
ResourcePoolInfo - Struct detailing the allocated and total available resources of each type tracked in ExecutorResourcePool

Definition at line 186 of file ExecutorResourcePool.cpp.

References allocated_cpu_buffer_pool_chunks_, allocated_gpu_buffer_pool_chunks_, ExecutorResourceMgr_Namespace::CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::CPU_RESULT_MEM, ExecutorResourceMgr_Namespace::CPU_SLOTS, get_allocated_resource_of_subtype(), get_allocated_resource_of_type(), get_outstanding_per_resource_num_requests(), get_total_resource(), ExecutorResourceMgr_Namespace::GPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::GPU_SLOTS, outstanding_num_requests_, ExecutorResourceMgr_Namespace::PAGEABLE_CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::PAGEABLE_GPU_BUFFER_POOL_MEM, resource_mutex_, and total_num_requests_.

186  {
187  std::shared_lock<std::shared_mutex> resource_read_lock(resource_mutex_);
188  return ResourcePoolInfo(
210 }
size_t get_total_resource(const ResourceType resource_type) const
size_t get_allocated_resource_of_subtype(const ResourceSubtype resource_subtype) const
size_t get_outstanding_per_resource_num_requests(const ResourceType resource_type) const
size_t get_allocated_resource_of_type(const ResourceType resource_type) const


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_total_allocated_buffer_pool_mem_for_level ( const ExecutorDeviceType  memory_pool_type) const
inline private

Definition at line 467 of file ExecutorResourcePool.h.

References CPU, ExecutorResourceMgr_Namespace::CPU_BUFFER_POOL_MEM, get_allocated_resource_of_type(), and ExecutorResourceMgr_Namespace::GPU_BUFFER_POOL_MEM.

Referenced by add_chunk_requests_to_allocated_pool(), and can_currently_satisfy_chunk_request().


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_total_per_resource_num_requests ( const ResourceType  resource_type) const
inline private

Definition at line 494 of file ExecutorResourcePool.h.

References total_per_resource_num_requests_.

495  {
496  return total_per_resource_num_requests_[static_cast<size_t>(resource_type)];
497  }
std::array< size_t, ResourceTypeSize > total_per_resource_num_requests_
size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::get_total_resource ( const ResourceType  resource_type) const
inline private

Definition at line 478 of file ExecutorResourcePool.h.

References total_resources_.

Referenced by add_chunk_requests_to_allocated_pool(), allocate_resources(), can_currently_satisfy_chunk_request(), can_currently_satisfy_request_impl(), deallocate_resources(), determine_dynamic_resource_grant(), get_resource_info(), init_max_resource_grants_per_requests(), log_parameters(), and remove_chunk_requests_from_allocated_pool().

478  {
479  return total_resources_[static_cast<size_t>(resource_type)];
480  }
std::array< size_t, ResourceTypeSize > total_resources_


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::increment_outstanding_per_resource_num_requests ( const ResourceType  resource_type)
inline private

Definition at line 514 of file ExecutorResourcePool.h.

References outstanding_per_resource_num_requests_.

Referenced by allocate_resources().

515  {
516  return ++outstanding_per_resource_num_requests_[static_cast<size_t>(resource_type)];
517  }
std::array< size_t, ResourceTypeSize > outstanding_per_resource_num_requests_


size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::increment_total_per_resource_num_requests ( const ResourceType  resource_type)
inline private

Definition at line 499 of file ExecutorResourcePool.h.

References total_per_resource_num_requests_.

Referenced by allocate_resources().

500  {
501  return ++total_per_resource_num_requests_[static_cast<size_t>(resource_type)];
502  }
std::array< size_t, ResourceTypeSize > total_per_resource_num_requests_


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::init ( const std::vector< std::pair< ResourceType, size_t >> &  total_resources,
const std::vector< ConcurrentResourceGrantPolicy > &  concurrent_resource_grant_policies,
const std::vector< ResourceGrantPolicy > &  max_per_request_resource_grant_policies 
)
private

Definition at line 58 of file ExecutorResourcePool.cpp.

References concurrent_resource_grant_policies_, init_concurrency_policies(), init_max_resource_grants_per_requests(), ExecutorResourceMgr_Namespace::INVALID_SUBTYPE, ExecutorResourceMgr_Namespace::INVALID_TYPE, max_resource_grants_per_request_policies_, resource_type_validity_, and total_resources_.

Referenced by ExecutorResourcePool(), set_concurrent_resource_grant_policy(), and set_resource().

61  {
62  for (const auto& total_resource : total_resources) {
63  if (total_resource.first == ResourceType::INVALID_TYPE) {
64  continue;
65  }
66  total_resources_[static_cast<size_t>(total_resource.first)] = total_resource.second;
67  resource_type_validity_[static_cast<size_t>(total_resource.first)] = true;
68  }
69 
70  for (const auto& concurrent_resource_grant_policy :
71  concurrent_resource_grant_policies) {
72  const ResourceType resource_type = concurrent_resource_grant_policy.resource_type;
73  if (resource_type == ResourceType::INVALID_TYPE) {
74  continue;
75  }
76  concurrent_resource_grant_policies_[static_cast<size_t>(resource_type)] =
77  concurrent_resource_grant_policy;
78  }
79 
80  for (const auto& max_resource_grant_per_request_policy :
81  max_resource_grants_per_request_policies) {
82  const ResourceSubtype resource_subtype =
83  max_resource_grant_per_request_policy.resource_subtype;
84  if (resource_subtype == ResourceSubtype::INVALID_SUBTYPE) {
85  continue;
86  }
87  max_resource_grants_per_request_policies_[static_cast<size_t>(resource_subtype)] =
88  max_resource_grant_per_request_policy;
89  }
90 
91  init_concurrency_policies();
92  init_max_resource_grants_per_requests();
93 }
std::array< bool, ResourceTypeSize > resource_type_validity_
std::array< size_t, ResourceTypeSize > total_resources_
ResourceType
Stores the resource type for a ExecutorResourcePool request.
ResourceSubtype
Stores the resource sub-type for a ExecutorResourcePool request.
std::array< ConcurrentResourceGrantPolicy, ResourceTypeSize > concurrent_resource_grant_policies_
std::array< ResourceGrantPolicy, ResourceSubtypeSize > max_resource_grants_per_request_policies_


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::init_concurrency_policies ( )
private

Definition at line 95 of file ExecutorResourcePool.cpp.

References ExecutorResourceMgr_Namespace::ALLOW_SINGLE_REQUEST, CHECK, concurrent_resource_grant_policies_, ExecutorResourceMgr_Namespace::DISALLOW_REQUESTS, ExecutorResourceMgr_Namespace::INVALID_TYPE, and is_resource_valid().

Referenced by init().

95  {
96  size_t resource_type_idx = 0;
97  for (auto& concurrent_resource_grant_policy : concurrent_resource_grant_policies_) {
98  const auto resource_type = static_cast<ResourceType>(resource_type_idx);
99  const auto concurrency_policy_resource_type =
100  concurrent_resource_grant_policy.resource_type;
101  CHECK(resource_type == concurrency_policy_resource_type ||
102  concurrency_policy_resource_type == ResourceType::INVALID_TYPE);
103  if (is_resource_valid(resource_type)) {
104  if (concurrency_policy_resource_type == ResourceType::INVALID_TYPE) {
105  concurrent_resource_grant_policy.resource_type = resource_type;
106  concurrent_resource_grant_policy.concurrency_policy =
108  concurrent_resource_grant_policy.oversubscription_concurrency_policy =
110  }
111  } else {
112  concurrent_resource_grant_policy.resource_type = ResourceType::INVALID_TYPE;
113  }
114  resource_type_idx++;
115  }
116 }
ResourceType
Stores the resource type for a ExecutorResourcePool request.
bool is_resource_valid(const ResourceType resource_type) const
std::array< ConcurrentResourceGrantPolicy, ResourceTypeSize > concurrent_resource_grant_policies_
#define CHECK(condition)
Definition: Logger.h:291


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::init_max_resource_grants_per_requests ( )
private

Definition at line 118 of file ExecutorResourcePool.cpp.

References CHECK, ExecutorResourceMgr_Namespace::DISALLOW_REQUESTS, get_concurrent_resource_grant_policy(), get_total_resource(), ExecutorResourceMgr_Namespace::INVALID_SUBTYPE, is_resource_valid(), ExecutorResourceMgr_Namespace::map_resource_subtype_to_resource_type(), max_resource_grants_per_request_, max_resource_grants_per_request_policies_, and ExecutorResourceMgr_Namespace::UNLIMITED.

Referenced by init().

118  {
119  size_t resource_subtype_idx = 0;
120  for (auto& max_resource_grant_per_request_policy :
121  max_resource_grants_per_request_policies_) {
122  const auto resource_subtype = static_cast<ResourceSubtype>(resource_subtype_idx);
123  const auto resource_type = map_resource_subtype_to_resource_type(resource_subtype);
124  const auto policy_resource_subtype =
125  max_resource_grant_per_request_policy.resource_subtype;
126  CHECK(resource_subtype == policy_resource_subtype ||
127  policy_resource_subtype == ResourceSubtype::INVALID_SUBTYPE);
128  if (is_resource_valid(resource_type)) {
129  if (policy_resource_subtype == ResourceSubtype::INVALID_SUBTYPE) {
130  max_resource_grant_per_request_policy.resource_subtype = resource_subtype;
131  max_resource_grant_per_request_policy.policy_size_type =
133  }
134  max_resource_grants_per_request_[static_cast<size_t>(
135  max_resource_grant_per_request_policy.resource_subtype)] =
136  max_resource_grant_per_request_policy.get_grant_quantity(
137  get_total_resource(resource_type),
138  get_concurrent_resource_grant_policy(resource_type)
139  .oversubscription_concurrency_policy ==
141  } else {
142  max_resource_grant_per_request_policy.resource_subtype =
143  ResourceSubtype::INVALID_SUBTYPE;
144  }
145  resource_subtype_idx++;
146  }
147 }
size_t get_total_resource(const ResourceType resource_type) const
bool is_resource_valid(const ResourceType resource_type) const
ResourceSubtype
Stores the resource sub-type for a ExecutorResourcePool request.
ResourceType map_resource_subtype_to_resource_type(const ResourceSubtype resource_subtype)
Returns the ResourceType associated with a given ResourceSubtype
#define CHECK(condition)
Definition: Logger.h:291
std::array< size_t, ResourceSubtypeSize > max_resource_grants_per_request_
std::array< ResourceGrantPolicy, ResourceSubtypeSize > max_resource_grants_per_request_policies_
ConcurrentResourceGrantPolicy get_concurrent_resource_grant_policy(const ResourceType resource_type) const


bool ExecutorResourceMgr_Namespace::ExecutorResourcePool::is_resource_valid ( const ResourceType  resource_type) const
inline private

Definition at line 474 of file ExecutorResourcePool.h.

References resource_type_validity_.

Referenced by init_concurrency_policies(), init_max_resource_grants_per_requests(), and log_parameters().

474  {
475  return resource_type_validity_[static_cast<size_t>(resource_type)];
476  }
std::array< bool, ResourceTypeSize > resource_type_validity_


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::log_parameters ( ) const

Definition at line 149 of file ExecutorResourcePool.cpp.

References logger::EXECUTOR, get_concurrent_resource_grant_policy(), get_max_resource_grant_per_request_policy(), get_total_resource(), is_resource_valid(), LOG, ExecutorResourceMgr_Namespace::map_resource_type_to_resource_subtypes(), ExecutorResourceMgr_Namespace::resource_type_to_string(), ExecutorResourceMgr_Namespace::ResourceTypeSize, ExecutorResourceMgr_Namespace::ResourceGrantPolicy::to_string(), and ExecutorResourceMgr_Namespace::ConcurrentResourceGrantPolicy::to_string().

Referenced by ExecutorResourcePool().

149  {
150  for (size_t resource_idx = 0; resource_idx < ResourceTypeSize; ++resource_idx) {
151  const ResourceType resource_type = static_cast<ResourceType>(resource_idx);
152  if (!is_resource_valid(resource_type)) {
153  continue;
154  }
155  const auto total_resource = get_total_resource(resource_type);
156  const auto resource_type_str = resource_type_to_string(resource_type);
157  LOG(EXECUTOR) << "Resource: " << resource_type_str << ": " << total_resource;
158  LOG(EXECUTOR) << "Concurrency Policy for " << resource_type_str << ": "
159  << get_concurrent_resource_grant_policy(resource_type).to_string();
160  LOG(EXECUTOR) << "Max per-request resource grants for sub-types:";
161  const auto resource_subtypes = map_resource_type_to_resource_subtypes(resource_type);
162  for (const auto& resource_subtype : resource_subtypes) {
163  LOG(EXECUTOR)
164  << get_max_resource_grant_per_request_policy(resource_subtype).to_string();
165  }
166  }
167 }
size_t get_total_resource(const ResourceType resource_type) const
#define LOG(tag)
Definition: Logger.h:285
static constexpr size_t ResourceTypeSize
const ResourceGrantPolicy & get_max_resource_grant_per_request_policy(const ResourceSubtype resource_subtype) const
ResourceType
Stores the resource type for a ExecutorResourcePool request.
std::vector< ResourceSubtype > map_resource_type_to_resource_subtypes(const ResourceType resource_type)
Returns the 1-or-more ResourceSubtypes associated with a given ResourceType.
bool is_resource_valid(const ResourceType resource_type) const
std::string resource_type_to_string(const ResourceType resource_type)
ConcurrentResourceGrantPolicy get_concurrent_resource_grant_policy(const ResourceType resource_type) const


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::remove_chunk_requests_from_allocated_pool ( const ResourceGrant resource_grant,
const ChunkRequestInfo chunk_request_info 
)
private

Definition at line 814 of file ExecutorResourcePool.cpp.

References allocated_cpu_buffer_pool_chunks_, allocated_gpu_buffer_pool_chunks_, allocated_resources_, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_for_given_slots, ExecutorResourceMgr_Namespace::ResourceGrant::buffer_mem_gated_per_slot, CHECK, CHECK_GE, ExecutorResourceMgr_Namespace::ChunkRequestInfo::chunks_with_byte_sizes, CPU, ExecutorResourceMgr_Namespace::CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::debug_print(), ExecutorResourceMgr_Namespace::ChunkRequestInfo::device_memory_pool_type, ExecutorResourceMgr_Namespace::ENABLE_DEBUG_PRINTING, logger::EXECUTOR, format_num_bytes(), get_allocated_resource_of_subtype(), get_allocated_resource_of_type(), get_total_resource(), ExecutorResourceMgr_Namespace::GPU_BUFFER_POOL_MEM, LOG, ExecutorResourceMgr_Namespace::ChunkRequestInfo::num_chunks, ExecutorResourceMgr_Namespace::PAGEABLE_CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::PINNED_CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::PINNED_GPU_BUFFER_POOL_MEM, and ExecutorResourceMgr_Namespace::ChunkRequestInfo::total_bytes.

Referenced by deallocate_resources().

816  {
817  // Expects lock on resource_mutex_ already taken
818 
819  if (resource_grant.buffer_mem_gated_per_slot) {
820  CHECK(chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU);
821  CHECK_GE(
823  resource_grant.buffer_mem_for_given_slots);
824  CHECK_GE(
826  resource_grant.buffer_mem_for_given_slots);
827  allocated_resources_[static_cast<size_t>(
829  resource_grant.buffer_mem_for_given_slots;
830  const std::string& pool_level_string =
831  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU ? "CPU"
832  : "GPU";
833  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
834  << " allocated_temp chunk removal: "
835  << format_num_bytes(resource_grant.buffer_mem_for_given_slots);
836  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
837  << " pool state: Transient Allocations: "
840  << " Total Allocations: "
843  return;
844  }
845 
846  BufferPoolChunkMap& chunk_map_for_memory_level =
847  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
848  ? allocated_cpu_buffer_pool_chunks_
849  : allocated_gpu_buffer_pool_chunks_;
850  size_t& pinned_buffer_mem_for_memory_level =
851  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
852  ? allocated_resources_[static_cast<size_t>(
853  ResourceSubtype::PINNED_CPU_BUFFER_POOL_MEM)]
854  : allocated_resources_[static_cast<size_t>(
855  ResourceSubtype::PINNED_GPU_BUFFER_POOL_MEM)];
856 
857  // Following variables are for logging
858  const size_t pre_remove_allocated_chunks_for_memory_level =
859  chunk_map_for_memory_level.size();
860  const size_t pre_remove_allocated_buffer_mem_for_memory_level =
861  pinned_buffer_mem_for_memory_level;
862 
863  for (const auto& requested_chunk : chunk_request_info.chunks_with_byte_sizes) {
864  auto chunk_itr = chunk_map_for_memory_level.find(requested_chunk.first);
865  // Chunk must exist in pool
866  CHECK(chunk_itr != chunk_map_for_memory_level.end());
867  chunk_itr->second.first -= 1;
868  if (chunk_itr->second.first == size_t(0)) {
869  pinned_buffer_mem_for_memory_level -= chunk_itr->second.second;
870  chunk_map_for_memory_level.erase(chunk_itr);
871  }
872  }
873  const size_t total_buffer_mem_for_memory_level =
874  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU
875  ? get_total_resource(ResourceType::CPU_BUFFER_POOL_MEM)
876  : get_total_resource(ResourceType::GPU_BUFFER_POOL_MEM);
877 
878  const size_t post_remove_allocated_chunks_for_memory_level =
879  chunk_map_for_memory_level.size();
880  const size_t net_removed_allocated_chunks =
881  pre_remove_allocated_chunks_for_memory_level -
882  post_remove_allocated_chunks_for_memory_level;
883  const size_t net_removed_allocated_memory =
884  pre_remove_allocated_buffer_mem_for_memory_level -
885  pinned_buffer_mem_for_memory_level;
886 
887  const std::string& pool_level_string =
888  chunk_request_info.device_memory_pool_type == ExecutorDeviceType::CPU ? "CPU"
889  : "GPU";
890  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
891  << " chunk removal: " << chunk_request_info.num_chunks << " chunks | "
892  << format_num_bytes(chunk_request_info.total_bytes);
893  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
894  << " pool delta: " << net_removed_allocated_chunks << " chunks removed | "
895  << format_num_bytes(net_removed_allocated_memory);
896  LOG(EXECUTOR) << "ExecutorResourePool " << pool_level_string
897  << " pool state: " << post_remove_allocated_chunks_for_memory_level
898  << " chunks | " << format_num_bytes(pinned_buffer_mem_for_memory_level);
899 
900  if (ENABLE_DEBUG_PRINTING) {
901  debug_print("After chunk removal: ",
902  format_num_bytes(pinned_buffer_mem_for_memory_level) + " of ",
903  format_num_bytes(total_buffer_mem_for_memory_level),
904  ", with ",
905  chunk_map_for_memory_level.size(),
906  " chunks.");
907  }
908 }
size_t get_total_resource(const ResourceType resource_type) const
std::array< size_t, ResourceSubtypeSize > allocated_resources_
#define LOG(tag)
Definition: Logger.h:285
size_t get_allocated_resource_of_subtype(const ResourceSubtype resource_subtype) const
#define CHECK_GE(x, y)
Definition: Logger.h:306
ResourceType
Stores the resource type for a ExecutorResourcePool request.
std::map< ChunkKey, std::pair< size_t, size_t >> BufferPoolChunkMap
std::string format_num_bytes(const size_t bytes)
#define CHECK(condition)
Definition: Logger.h:291
size_t get_allocated_resource_of_type(const ResourceType resource_type) const
void debug_print(Ts &&...print_args)
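Note that pooled chunks are reference counted (the first member of each BufferPoolChunkMap value): a chunk's bytes are only subtracted from the pinned buffer pool total once its count reaches zero. For example (hypothetical sizes), if two outstanding queries had both pinned the same 100 MB chunk, the first deallocation decrements its count but leaves the 100 MB accounted for, and only the second removes the chunk and returns the memory.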


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::sanity_check_requests_against_allocations ( ) const
private

Definition at line 1153 of file ExecutorResourcePool.cpp.

References allocated_cpu_buffer_pool_chunks_, allocated_gpu_buffer_pool_chunks_, CHECK, CHECK_EQ, CHECK_LE, ExecutorResourceMgr_Namespace::CPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::CPU_RESULT_MEM, ExecutorResourceMgr_Namespace::CPU_SLOTS, get_allocated_resource_of_type(), get_outstanding_per_resource_num_requests(), ExecutorResourceMgr_Namespace::GPU_BUFFER_POOL_MEM, ExecutorResourceMgr_Namespace::GPU_SLOTS, outstanding_num_requests_, and total_num_requests_.

Referenced by deallocate_resources().

1153  {
1154  const size_t sum_resource_requests =
1158 
1160  CHECK_LE(outstanding_num_requests_, sum_resource_requests);
1161  const bool has_outstanding_resource_requests = sum_resource_requests > 0;
1162  const bool has_outstanding_num_requests_globally = outstanding_num_requests_ > 0;
1163  CHECK_EQ(has_outstanding_resource_requests, has_outstanding_num_requests_globally);
1164 
1169 
1174 
1179 
1180  CHECK_EQ(
1183 
1184  CHECK_EQ(
1187 
1188  if (outstanding_num_requests_ == static_cast<size_t>(0)) {
1190  size_t(0));
1192  size_t(0));
1195  }
1196 }
#define CHECK_EQ(x, y)
Definition: Logger.h:301
#define CHECK_LE(x, y)
Definition: Logger.h:304
size_t get_outstanding_per_resource_num_requests(const ResourceType resource_type) const
#define CHECK(condition)
Definition: Logger.h:291
size_t get_allocated_resource_of_type(const ResourceType resource_type) const


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::set_concurrent_resource_grant_policy ( const ConcurrentResourceGrantPolicy concurrent_resource_grant_policy)

Resets the concurrent resource grant policy object, which specifies a ResourceType as well as normal and oversubscription concurrency policies. If the pool has outstanding requests, this will throw. Responsibility for allowing the pool to empty and preventing concurrent requests while this operation is running is left to the caller (in particular, ExecutorResourceMgr::set_concurrent_resource_grant_policy pauses the process queue, which waits until all executing requests are finished before yielding to the caller, before calling this method).

Currently only used for testing, but a SQL interface to live-change concurrency policies for the pool could be added.

Parameters
concurrent_resource_grant_policy- new concurrent resource policy (which encompasses the type of resource)

Definition at line 224 of file ExecutorResourcePool.cpp.

References CHECK, init(), ExecutorResourceMgr_Namespace::INVALID_TYPE, outstanding_num_requests_, and ExecutorResourceMgr_Namespace::ConcurrentResourceGrantPolicy::resource_type.

Referenced by ExecutorResourceMgr_Namespace::ExecutorResourceMgr::set_concurrent_resource_grant_policy().

225  {
226  CHECK(concurrent_resource_grant_policy.resource_type != ResourceType::INVALID_TYPE);
227  if (outstanding_num_requests_) {
228  throw std::runtime_error(
229  "Executor Pool must be clear of requests to change resource concurrent resource "
230  "grant policies.");
231  }
232  init({}, {concurrent_resource_grant_policy}, {});
233 }
void init(const std::vector< std::pair< ResourceType, size_t >> &total_resources, const std::vector< ConcurrentResourceGrantPolicy > &concurrent_resource_grant_policies, const std::vector< ResourceGrantPolicy > &max_per_request_resource_grant_policies)
#define CHECK(condition)
Definition: Logger.h:291


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::set_resource ( const ResourceType  resource_type,
const size_t  resource_quantity 
)

Sets the quantity of resource_type to resource_quantity. If pool has outstanding requests, will throw. Responsibility of allowing the pool to empty and preventing concurrent requests while this operation is running is left to the caller (in particular, ExecutorResourceMgr::set_resource pauses the process queue, which waits until all executing requests are finished before yielding to the caller, before calling this method).

Currently only used for testing, but a SQL interface to live-change resources available in the pool could be added.

Parameters
resource_type- type of resource whose quantity is to be changed
resource_quantity- new quantity of resource for given resource_type
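For example (illustrative value), after pausing and draining the pool as described above, calling set_resource(ResourceType::CPU_SLOTS, 16) would resize the CPU slot pool to 16; if any requests were still outstanding, the call would instead throw.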

Definition at line 212 of file ExecutorResourcePool.cpp.

References CHECK, init(), ExecutorResourceMgr_Namespace::INVALID_TYPE, and outstanding_num_requests_.

Referenced by ExecutorResourceMgr_Namespace::ExecutorResourceMgr::set_resource().

213  {
214  CHECK(resource_type != ResourceType::INVALID_TYPE);
215  if (outstanding_num_requests_) {
216  throw std::runtime_error(
217  "Executor Pool must be clear of requests to change resources available.");
218  }
219  const std::vector<std::pair<ResourceType, size_t>> total_resources_vec = {
220  std::make_pair(resource_type, resource_quantity)};
221  init(total_resources_vec, {}, {});
222 }
void init(const std::vector< std::pair< ResourceType, size_t >> &total_resources, const std::vector< ConcurrentResourceGrantPolicy > &concurrent_resource_grant_policies, const std::vector< ResourceGrantPolicy > &max_per_request_resource_grant_policies)
#define CHECK(condition)
Definition: Logger.h:291


void ExecutorResourceMgr_Namespace::ExecutorResourcePool::throw_insufficient_resource_error ( const ResourceSubtype  resource_subtype,
const size_t  min_resource_requested 
) const
private

Definition at line 314 of file ExecutorResourcePool.cpp.

References ExecutorResourceMgr_Namespace::CPU_RESULT_MEM, ExecutorResourceMgr_Namespace::CPU_SLOTS, get_max_resource_grant_per_request(), and ExecutorResourceMgr_Namespace::GPU_SLOTS.

Referenced by calc_static_resource_grant_ranges_for_request().

316  {
317  const size_t max_resource_grant_per_request =
318  get_max_resource_grant_per_request(resource_subtype);
319 
320  switch (resource_subtype) {
321  case ResourceSubtype::CPU_SLOTS:
322  throw QueryNeedsTooManyCpuSlots(max_resource_grant_per_request,
323  min_resource_requested);
324  case ResourceSubtype::GPU_SLOTS:
325  throw QueryNeedsTooManyGpuSlots(max_resource_grant_per_request,
326  min_resource_requested);
327  case ResourceSubtype::CPU_RESULT_MEM:
328  throw QueryNeedsTooMuchCpuResultMem(max_resource_grant_per_request,
329  min_resource_requested);
330  default:
331  throw std::runtime_error(
332  "Insufficient resources for request"); // todo: just placeholder
333  }
334 }
size_t get_max_resource_grant_per_request(const ResourceSubtype resource_subtype) const


Member Data Documentation

std::array<size_t, ResourceSubtypeSize> ExecutorResourceMgr_Namespace::ExecutorResourcePool::allocated_resources_ {}
private
std::array<ConcurrentResourceGrantPolicy, ResourceTypeSize> ExecutorResourceMgr_Namespace::ExecutorResourcePool::concurrent_resource_grant_policies_
private
std::array<size_t, ResourceSubtypeSize> ExecutorResourceMgr_Namespace::ExecutorResourcePool::max_resource_grants_per_request_ {}
private
std::array<ResourceGrantPolicy, ResourceSubtypeSize> ExecutorResourceMgr_Namespace::ExecutorResourcePool::max_resource_grants_per_request_policies_ {}
private
size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::outstanding_num_requests_ {0}
private
std::array<size_t, ResourceTypeSize> ExecutorResourceMgr_Namespace::ExecutorResourcePool::outstanding_per_resource_num_requests_ {}
private
std::shared_mutex ExecutorResourceMgr_Namespace::ExecutorResourcePool::resource_mutex_
mutable private
std::array<bool, ResourceTypeSize> ExecutorResourceMgr_Namespace::ExecutorResourcePool::resource_type_validity_
private
Initial value:
{
false}

Definition at line 526 of file ExecutorResourcePool.h.

Referenced by init(), and is_resource_valid().

const bool ExecutorResourceMgr_Namespace::ExecutorResourcePool::sanity_check_pool_state_on_deallocations_ {false}
private

Definition at line 548 of file ExecutorResourcePool.h.

Referenced by deallocate_resources().

size_t ExecutorResourceMgr_Namespace::ExecutorResourcePool::total_num_requests_ {0}
private
std::array<size_t, ResourceTypeSize> ExecutorResourceMgr_Namespace::ExecutorResourcePool::total_per_resource_num_requests_ {}
private
std::array<size_t, ResourceTypeSize> ExecutorResourceMgr_Namespace::ExecutorResourcePool::total_resources_ {}
private

Definition at line 525 of file ExecutorResourcePool.h.

Referenced by get_total_resource(), and init().


The documentation for this class was generated from the following files: